AMD Vega and Zen Gaming System Rocks Doom At 4K Ultra Settings

There is no difference at all. It's just a different name :)

HBM2 will basically double the bandwidth offered by HBM1 – which is quite an impressive feat considering that HBM1 is already around 4 times faster than GDDR5. Not only that, but power consumption will be reduced by another 8% – once again on top of HBM1's existing 48% reduction relative to GDDR5. But perhaps one of the most significant developments is that it will allow GPU manufacturers to seamlessly scale vRAM from 2GB to 32GB – which covers pretty much all the bases. As our readers are no doubt aware, HBM is 2.5D stacked DRAM (on an interposer). This means that the punch offered by any HBM memory is directly related to its stack (layers).

The impact of memory bandwidth on GPU performance has been underrated in the past – something that has finally started changing with the advent of High Bandwidth Memory. Where HBM1 could go as high as a 4-Hi stack (4 layers), HBM2 can go up to 8-Hi (8 layers). The HBM present on AMD's Fury series is a combination of 4x 4-Hi stacks – each contributing 1GB to the 4GB grand total. In comparison, HBM2's 4-Hi stack will offer 4GB on a single stack – so the Fury X configuration repeated with HBM2 would actually net 16GB of HBM2 with 1TB/s of bandwidth. Needless to say, this is a very nice number, both in terms of real estate utilization and the raw bandwidth offered by the medium.
Of course, HBM2 is only as good as the graphics cards it's featured in. As far as use-case confirmations go, Nvidia at least, speaking at the Japanese version of GTC, confirmed that it will be utilizing HBM2 technology in its upcoming Pascal GPUs. Interestingly, however, the amount of vRAM revealed was 16GB at 1 TB/s and not 32GB. The 1 TB/s number shows that Nvidia is going to be using 4 stacks of HBM – and the amount of vRAM tells us that it's going to be 4-Hi HBM2. They did mention, however, that as the memory standard matures they might eventually start rolling out 32GB HBM2 graphics cards. This isn't really surprising, considering 8-Hi HBM would almost certainly have more complications than 4-Hi HBM in terms of yield.
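As a sanity check on those numbers, here is a rough back-of-the-envelope calculation. It assumes the nominal 1024-bit interface per stack and roughly 1 Gbps per pin for HBM1 versus 2 Gbps per pin for HBM2 (exact pin speeds vary by part), so treat it as an illustration rather than a spec sheet:

```python
# Rough HBM bandwidth sanity check. Pin rates below are nominal generational
# figures, not vendor-confirmed specs for any particular card.

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of a single HBM stack in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

hbm1_stack = stack_bandwidth_gbs(1.0)   # ~128 GB/s per HBM1 stack
hbm2_stack = stack_bandwidth_gbs(2.0)   # ~256 GB/s per HBM2 stack

print(f"Fury X (4x HBM1 stacks): {4 * hbm1_stack:.0f} GB/s")   # ~512 GB/s
print(f"4x HBM2 stacks:          {4 * hbm2_stack:.0f} GB/s")   # ~1024 GB/s, i.e. ~1 TB/s
```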

P100 is 1.6GHz HBM2 at 732GB/sec, not 1TB/sec.

And as you mention, 16GB. Another problem for HBM is the density.

HBM1/HBM2 isn't as effective as you think once GDDR5X/GDDR6 hit. And HBM2 didn't improve much over HBM1. I know you're just copying Hynix PR numbers, but you can ask yourself why you find no Hynix slides comparing against GDDR5X or GDDR6. The sole fact that Nvidia made GP102 with GDDR5X instead of HBM2 says it all in terms of cost, performance and power. And then there is Samsung, an HBM2 maker that is 6-9 months ahead of Hynix, working hard on GDDR6.
 
Double the bandwidth of HBM. Lower latency and power consumption. According to you it's marginally faster... lol. All this money was sunk into its development for fun.
The fact that it is not on NVidia consumer cards is its biggest failure... lol :D
 
Double the bandwidth of HBM. Lower latency and power consumption. According to you it's marginally faster... lol. All this money was sunk into its development for fun.
The fact that it is not on NVidia consumer cards is its biggest failure... lol :D

You forget GDDR5X/GDDR6.

Failure is a funny word considering what the failure cards have been. :D

Fiji lost to GM200, Vega 10 may lose to GP104 and will lose to GP102.

GM200 outsold Fiji with something like 10 to 1 or more.

Going from GDDR5X or HBM1 to HBM2 is like replacing your 2133MHz DDR4 RAM with 4000MHz RAM.

But it also increases your power consumption quite drastically, and then we don't even have to talk about the density issue or the cost issue. The problem is we pretty much already need HBM3 (a 2020+ product) or a whole new memory standard for graphics.
 
Double the bandwidth of HBM. Lower latency and power consumption. According to you it's marginally faster... lol. All this money was sunk into its development for fun.
The fact that it is not on NVidia consumer cards is its biggest failure... lol :D


Why is it a failure when they have been outselling AMD 3 to 1? It costs less for them too.

Seems to me nV made the smart move and got both ends of the stick: lower costs (retained margins) and the ability to launch cards on their regular 1.5-year cycle (which improved their market share). AMD, meanwhile, spent R&D on HBM when they should have put that money into their GPU architectures, and the end results were higher costs (lower margins) and delayed card releases (a drop in market share).

HBM2 has more bandwidth than GDDR5X, yes, but nV doesn't need as much bandwidth as AMD's chips do; their architecture eats up less bandwidth, about 25% less, because of their updated delta color compression. That's R&D money spent well, not spent on another company's product (HBM), which now ties AMD to Hynix for supply because of the exclusivity clause. AMD's chips don't have that yet. Not sure if Vega has it, because they haven't mentioned it; I'm guessing it's the same as Polaris. If that is the case, for a similar-performance AMD chip versus an nV chip, AMD will need ~25% (give or take 5%) more bandwidth.
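As a rough illustration of that bandwidth argument (the ~25% compression saving is the poster's ballpark, and the 320 GB/s raw figure for a GTX 1080-class card is assumed here for the sake of the example, not a measured number):

```python
# Toy illustration of raw vs. "effective" bandwidth once lossless delta color
# compression is factored in. The 25% saving and the 320 GB/s raw figure are
# assumed ballpark numbers for the example, not measured data.

def effective_bandwidth(raw_gbs: float, compression_saving: float) -> float:
    """Bandwidth an uncompressed design would need to move the same traffic."""
    return raw_gbs / (1 - compression_saving)

raw = 320.0      # assumed raw bandwidth of a GTX 1080-class card, in GB/s
saving = 0.25    # assumed fraction of traffic saved by compression

print(f"{raw:.0f} GB/s raw behaves like ~{effective_bandwidth(raw, saving):.0f} GB/s uncompressed")
# -> ~427 GB/s, i.e. a chip without equivalent compression needs roughly
#    25-35% more raw bandwidth to feed the same workload.
```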
 
In terms of bandwidth, Vega 10 will have ~410-512GB/sec depending on whether they get 2GHz HBM2 or not. GP102 already fields 480GB/sec of bandwidth with GDDR5X, and they can easily use GDDR5X that's 20% faster by now, for 576GB/sec.
 
Right, so if Vega is performing like a GTX 1080, it will need ~400 GB/sec of bandwidth for that kind of performance.
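For anyone who wants to see where those figures come from, here is the arithmetic. The configurations are the ones assumed in this thread (two 1024-bit HBM2 stacks for Vega 10, a 384-bit GDDR5X bus for GP102), not confirmed specs:

```python
# Bandwidth = pin rate (Gbps) x bus width (bits) / 8, giving GB/s.
# Configurations below are assumptions from the discussion, not confirmed specs.

def bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

# Vega 10: two 1024-bit HBM2 stacks at 1.6-2.0 Gbps per pin
print(bandwidth_gbs(1.6, 2 * 1024))   # ~410 GB/s
print(bandwidth_gbs(2.0, 2 * 1024))   # 512 GB/s

# GP102: 384-bit bus, 10 Gbps GDDR5X today, 12 Gbps (20% faster) later
print(bandwidth_gbs(10.0, 384))       # 480 GB/s
print(bandwidth_gbs(12.0, 384))       # 576 GB/s
```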
 
I need to find those Kleenex coupons for you guys, make sure you have a few boxes around once Vega hits and exceeds your expectations. :cry:
 
I need to find those Kleenex coupons for you guys, make sure you have a few boxes around once Vega hits and exceeds your expectations. :cry:


If it does, great, but I can't believe it till I see it, because AMD hasn't shown it yet. Even what they have shown isn't impressive; it's just the same performance as a GTX 1080. Everything they have talked about are features exposed through their own extensions, and with 25% of the market share I don't see devs getting overly excited about that, since the main performance and power-usage feature (tiled rasterization) is automated in nV's products. Console developers will not get their hands on a Vega-based APU till the end of this year, so even that won't help push Vega.

These are things AMD is showing and saying; nothing is made up, so there is no speculation on it right now. The only way we have missed anything is if AMD is sandbagging, which is very hard to believe at this juncture.

Added to all this, without primitive shaders I don't see much difference in Vega over Polaris in geometry throughput! What does that mean? I think very little was done to improve Vega's throughput vs Polaris. Now, this part is part speculation and part what we have read in different articles about how the new geometry engines can get up to 11 tris per clock of culling through the use of primitive shaders. They introduced an entirely new pipeline stage, which at this point might just be something they tweaked in their shader array and how it functions (hence the NCU). nV's hidden surface removal and culling, by contrast, is automatic – something AMD couldn't get done within the timeline they had for Vega. And it's painfully obvious the tiled rasterizer on Vega is limited to specific code.

AMD is all about making things easier for developers, right? Doesn't look like it here, lol, but they needed these features to remain competitive with nV, so they got them in. The good thing is they are under developer control, so it may be possible they are more flexible than nV's; we just don't know at this point.

From what they have shown so far and the limitations of the new features, Vega looks to be very similar to Polaris, with the same limitations, when not factoring in primitive shaders and HBM at this point.
 
I need to find those Kleenex coupons for you guys, make sure you have a few boxes around once Vega hits and exceeds your expectations. :cry:

Honestly, the only AMD product that has a chance of doing so is Ryzen. If Vega were anything better than a GTX 1080, AMD would be shouting it from the rooftops by now. It's like showing up to Christmas dinner with a fat chick when the pretty girl down the street had a crush on you the whole time...
 
The moving force on the software side may well be Microsoft with the next Xbox, meaning developers may have the tools sooner than you think, i.e. software using Vega as a base. Of course there's a time delay from that point for brewing the brew. Now, will Intel adopt some of the new tech AMD is releasing with Vega, FreeSync 2, etc.? That remains to be seen – they may indeed do just that.

Now, the engineering sample shown may not have been the final silicon, with the fastest HBM memory (might not be), configured for the fastest speed (probably not), or with the best drivers for the GPU/card (probably not) – meaning we do not know a hell of a lot at this point. Conjecture and piecing together tidbits is plain foolish. That doesn't mean someone can't haphazardly guess right and feel all good about themselves while looking in the mirror thinking they really knew. At this stage, I would haphazardly guess the Vega shown was around 1080 performance – so if folks want to think that after 5 more months it will still be 1080 performance with no improvement, well, think however you want.

At this time Ryzen appears to be way more important for AMD to get launched and sold. That also means making sure Nvidia cards kick ass and work well, hence you see Nvidia cards being used successfully in Ryzen setups. It would be a big mistake if AMD only showed AMD cards in Ryzen rigs. AMD also needs to support Nvidia, and even Nvidia's success, to make Ryzen a success. In other words, AMD is also a partner working with Nvidia – some folks forget or just do not see this. I always thought that when AMD joined their CPU and GPU divisions together it was a huge mistake, and they have since corrected that by forming RTG. So in essence RTG competes with Nvidia, while the AMD CPU side works with Nvidia.
 
-It is well known that DOOM's Vulkan API is the only API so far that clearly favours AMD's GPUs. (*DX12 benchmarks are game-dependent so far)
-So the fact that AMD chose to demonstrate Vega using the only API that clearly favours them, combined with the fact that its performance was between the GTX 1080 & Titan X Pascal, is discouraging news for AMD in my opinion.
Their new and long-delayed GPU manages to surpass the GTX 1080 only in the one API that favours them!!
( Not something I would brag about, but on the other hand, creating hype for no reason doesn't surprise me, considering it's been standard AMD policy for a long time now .....:rolleyes: )

Today I decided to do a little more research regarding my previously quoted thoughts. Based on the new review of the Gigabyte GTX 1060 G1 ( http://www.hardocp.com/article/2017...e_gtx_1060_g1_gaming_6g_review/1#.WHyTi3196Uk ), I arrived at the following numbers:
1) The difference between the GIGABYTE GTX 1060 (OC'd) vs the ASUS RX 480 (OC'd), averaged across all the games other than DOOM (*Vulkan API), is +6.96% in favour of the GTX 1060.
2) For DOOM (*Vulkan API) the outcome is +17.7% in favour of the RX 480.
-So, from 1) & 2) we see a massive advantage for AMD when using Vulkan (from -6.96% it goes to +17.7%!!, so that's roughly a +25% gain for AMD under Vulkan compared to all other games!!!)
-That's why I said in my previously quoted post that AMD using DOOM/Vulkan to demonstrate their new Vega GPUs is, to me, nothing more than marketing hype, and thus doesn't mean a lot to me.
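For what it's worth, the ~25% figure roughly checks out if you chain the two quoted deltas together; a quick sketch using the review numbers above:

```python
# Quick check of the claimed ~25% Vulkan swing, using the quoted review deltas.

other_games = 0.0696   # GTX 1060 ahead of the RX 480 by 6.96% on average (non-DOOM titles)
doom_vulkan = 0.177    # RX 480 ahead of the GTX 1060 by 17.7% in DOOM (Vulkan)

# Relative swing in AMD's favour going from the "other games" average to DOOM:
swing = (1 + doom_vulkan) * (1 + other_games) - 1
print(f"~{swing * 100:.0f}% relative gain for the RX 480 under Vulkan")   # ~26%
```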
 
Today I decided to do a little more research regarding my previously quoted thoughts. Based on the new review of the Gigabyte GTX 1060 G1 ( http://www.hardocp.com/article/2017...e_gtx_1060_g1_gaming_6g_review/1#.WHyTi3196Uk ), I arrived at the following numbers:
1) The difference between the GIGABYTE GTX 1060 (OC'd) vs the ASUS RX 480 (OC'd), averaged across all the games other than DOOM (*Vulkan API), is +6.96% in favour of the GTX 1060.
2) For DOOM (*Vulkan API) the outcome is +17.7% in favour of the RX 480.
-So, from 1) & 2) we see a massive advantage for AMD when using Vulkan (from -6.96% it goes to +17.7%!!, so that's roughly a +25% gain for AMD under Vulkan compared to all other games!!!)
-That's why I said in my previously quoted post that AMD using DOOM/Vulkan to demonstrate their new Vega GPUs is, to me, nothing more than marketing hype, and thus doesn't mean a lot to me.
So you know that Vega will have that same relationship? Maybe better than what Polaris shows.
Also, do you know if that was a fully enabled GPU?
Also, do you know if the drivers being used were taking advantage of the new variable wavefront capability of the SIMDs? If it was basically a Fiji driver, I doubt it. Tell me, how much more can you get out of that? 10% - 20%? Less?
Bottom line is we don't know crap in the end, and AMD probably doesn't know the final outcome either. At this stage, it is roughly 1080 performance with ~6 months to go (demos started in December with Doom). Will AMD hit the ~1.5GHz needed for 12.5 TFLOPS? Or more?
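That 12.5 TFLOPS / ~1.5GHz relationship is just the standard single-precision FLOPS formula; a quick sketch assuming the widely rumoured 4096 stream processors for Vega 10 (that shader count is an assumption, not a confirmed spec):

```python
# FP32 TFLOPS = shader count x 2 ops per clock (FMA) x clock (GHz) / 1000.
# 4096 stream processors is the rumoured Vega 10 configuration, not confirmed.

def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"{tflops(4096, 1.526):.1f} TFLOPS at ~1.53 GHz")   # ~12.5 TFLOPS
print(f"{tflops(4096, 1.500):.1f} TFLOPS at 1.50 GHz")    # ~12.3 TFLOPS
```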

Personally I would be much more interested in DX12 and Vulkan performance; I am sure it will do older DX11 games well. Wait, VR would also be at the top of the list. If it performs well then it gets a buy; if not, it won't. I can wait for the real performance once AMD can get it out the door. I pretty much believe it will blow the 1080 out of the water, but then I haven't a clue either if it will :LOL:

Also, if I were AMD I would want Nvidia to release the 1080 Ti sooner, with reduced capability in the belief that it will take on Vega, and then come out with Vega exceeding that performance so I could maximize the price I sell Vega at and maximize profits. If AMD lets Nvidia know Vega can kick some serious ass, Nvidia would start working on a better solution, making it harder for AMD in the end. Nvidia has many options to compete; the Titan XP is ~40% faster than the 1080, and it is not even a fully enabled GPU! Then again, AMD may be putting its best case forward to keep stock prices up (what good that will do if they don't really deliver is the case against that argument).

So AMD did a dog and pony show, and really we don't know for certain the real results, nor when Vega will actually be launched.
 
You forget GDDR5X/GDDR6.

Failure is a funny word considering what the failure cards have been. :D

Fiji lost to GM200, Vega 10 may lose to GP104 and will lose to GP102.

GM200 outsold Fiji with something like 10 to 1 or more.



But it also increases your power consumption quite drastically, and then we don't even have to talk about the density issue or the cost issue. The problem is we pretty much already need HBM3 (a 2020+ product) or a whole new memory standard for graphics.

GDDR5X, as I understood it, was a stop-gap solution between GDDR5 and HBM2. If I remember correctly, both AMD and nVidia had initially planned to have their flagship 14nm/16nm cards out in 2016, both using HBM2. Issues with the foundries and the delay of HBM2 caused them both to revamp their roadmaps.

Also, the people arguing/defending this as GDDR5X+nVidia vs. HBMx+AMD are looking at it the wrong way. AMD and nVidia will both be using GDDR5X and HBM2 in future releases. As far as AMD goes, GDDR5X didn't make much sense for Polaris, but we will absolutely see Vega models using it. HBM also has other incredibly useful applications outside of discrete GPUs and isn't exactly a direct competitor to GDDR in that sense; it just happens to also work for that application.
 
So you know that Vega will have that same relationship? Maybe better than what Polaris shows.
Also, do you know if that was a fully enabled GPU?
............................................
............................................
So AMD did a dog and pony show, and really we don't know for certain the real results, nor when Vega will actually be launched.

Check my previous comment, #178. This is my answer to your comments. ;)
( https://hardforum.com/threads/amd-v...ultra-settings.1921718/page-5#post-1042744384 )
 
I will just wait until dinner is done before I sample it – that's the gist of my previous comment. There can be a virtually endless list of ifs, ands, previouslys, etc. Then again, I guess it makes for some dull chit-chatter if everyone just stops speculating :).
 
GDDR5X, as I understood it, was a stop-gap solution between GDDR5 and HBM2. If I remember correctly, both AMD and nVidia had initially planned to have their flagship 14nm/16nm cards out in 2016, both using HBM2. Issues with the foundries and the delay of HBM2 caused them both to revamp their roadmaps.

Also, the people arguing/defending this as GDDR5X+nVidia vs. HBMx+AMD are looking at it the wrong way. AMD and nVidia will both be using GDDR5X and HBM2 in future releases. As far as AMD goes, GDDR5X didn't make much sense for Polaris, but we will absolutely see Vega models using it. HBM also has other incredibly useful applications outside of discrete GPUs and isn't exactly a direct competitor to GDDR in that sense; it just happens to also work for that application.

There isn't any delay with HBM2. It's been shipping in products for over half a year now.

You see it from the wrong perspective; it's all about cost vs. benefit. AMD doesn't have a GDDR5X controller, since they rely completely on 3rd-party memory controller solutions. Hopefully they will be ready when GDDR6 hits.
 
There isn't any delay with HBM2. It's been shipping in products for over half a year now.

You see it from the wrong perspective; it's all about cost vs. benefit. AMD doesn't have a GDDR5X controller, since they rely completely on 3rd-party memory controller solutions. Hopefully they will be ready when GDDR6 hits.

1. Hmmmm, I'll take your word for it I guess, as I don't feel like looking it up atm, but I could have sworn I read a few different headlines/articles last year about HBM (specifically HBM2) manufacturing or supply shortages from SKH and Samsung.

2. As far as GDDR5X goes, they could simply source a compatible controller if necessary. That doesn't make much sense to me, as they source all kinds of parts from many different vendors all the time; if they had wanted to use GDDR5X, they absolutely would have. It didn't fit within the framework of what Polaris was trying to achieve or the portion of the market it was targeting. Raja himself has mentioned GDDR5X in addition to HBM in various interviews last year. In all my years of building PCs and following hardware, I have never heard of any big player NOT being able to source parts as you suggest, unless it's related to a much larger, global issue.

I'm almost positive Polaris' memory controller could use GDDR5X if they so desired.

Vega is going to have both GDDR5X and HBM2 parts.


EDIT: http://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps

"The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data-rates by up to a factor of two, it improves energy efficiency of high-end memory, and it defines new capacities of memory chips to enable denser memory configurations of add-in graphics boards or other devices. What is very important for developers of chips and makers of graphics cards is that the GDDR5X should not require drastic changes to designs of graphics cards, and the general feature-set of GDDR5 remains unchanged (and hence why it is not being called GDDR6)."

Based on that, it seems GDDR5X is largely backwards compatible with existing memory subsystems, which is what I always assumed but never bothered to look into. There's some retooling, but I wouldn't think it would require an entirely new design.
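To put that "up to a factor of two" into concrete numbers, here is a quick sketch for a hypothetical 256-bit card, using 8 Gbps as a typical fast GDDR5 speed and the 10-14 Gbps range defined for GDDR5X (the card configuration is assumed for illustration only):

```python
# Peak bandwidth for a hypothetical 256-bit card at various per-pin data rates.
# 8 Gbps ~ fast GDDR5; 10-14 Gbps is the range defined by the GDDR5X standard.

def bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 256) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(f"GDDR5  @  8 Gbps: {bandwidth_gbs(8):.0f} GB/s")    # 256 GB/s
print(f"GDDR5X @ 10 Gbps: {bandwidth_gbs(10):.0f} GB/s")   # 320 GB/s
print(f"GDDR5X @ 14 Gbps: {bandwidth_gbs(14):.0f} GB/s")   # 448 GB/s
```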
 
I'm almost positive Polaris' memory controller could use GDDR5X if they so desired.

Vega is going to have both GDDR5X and HBM2 parts.

Could you link both of these?

GDDR5X isn't much different from GDDR5 as such, but it still requires a new memory controller. I doubt you'll see it with AMD products before GDDR6.
 
There isn't any delay with HBM2. It's been shipping in products for over half a year now.

You see it from the wrong perspective; it's all about cost vs. benefit. AMD doesn't have a GDDR5X controller, since they rely completely on 3rd-party memory controller solutions. Hopefully they will be ready when GDDR6 hits.

Lol. You really sent me down a rabbit hole here. I've been up since like 6AM yesterday and am now googling for more info on Polaris' memory controller. :)
 
Could you link both of these?

GDDR5X isn't much different from GDDR5 as such, but it still requires a new memory controller. I doubt you'll see it with AMD products before GDDR6.

I will have to spend a little time looking for the right ones but sure.

Based on the Anand article I linked, GDDR5 and GDDR5X aren't pin compatible, but aside from that it sounds like 90% or more of the framework is the same as GDDR5. So there would need to be a stepping change/respin, but nothing at all like going from GDDR3 -> GDDR5, or from GDDR to HBM.

Also, the IMC is built into the die and wouldn't be sourced from another company.

Glad you brought this up though. Been sitting at the Hospital with my Grandma and it's given me something to do while she's asleep.
 
Also, the IMC is built into the die and wouldn't be sourced from another company

AMD licenses 3rd-party memory controllers; they tend to use Synopsys for this. The same applies to their CPUs.
http://news.synopsys.com/AMD-and-Synopsys-Expand-IP-Partnership

Glad you brought this up though. Been sitting at the Hospital with my Grandma and it's given me something to do while she's asleep.

Best of hopes and wishes to your grandma from me.
 
AMD licenses 3rd party memory controllers. They tend to use Synopsys for it. Same applies for its CPUs.
http://news.synopsys.com/AMD-and-Synopsys-Expand-IP-Partnership



Best of hopes and wishes to your grandma from me.

That's interesting, I never knew that.

I always assumed that once we started getting IMCs moved on-die/on-chip, they would all be in-house designs. I know AMD outsources other controllers and components for their chipsets and GPU PCBs, but I never expected their IMC to be one of them. It never occurred to me that it was even a possibility.

I appreciate the kind words, buddy. She's 92 and unfortunately we're moving her into an assisted living facility this week. She's been battling Alzheimer's the past year or so, and it's been getting exponentially worse as of late.
 
Ugh, Synopsys. We just had an insane few weeks at work chasing a tapeout due to Synopsys screwing up an MTP IP being used in a block.
 
If it does, great, but I can't believe it till I see it, because AMD hasn't shown it yet. Even what they have shown isn't impressive; it's just the same performance as a GTX 1080. Everything they have talked about are features exposed through their own extensions, and with 25% of the market share I don't see devs getting overly excited about that, since the main performance and power-usage feature (tiled rasterization) is automated in nV's products. Console developers will not get their hands on a Vega-based APU till the end of this year, so even that won't help push Vega.

These are things AMD is showing and saying; nothing is made up, so there is no speculation on it right now. The only way we have missed anything is if AMD is sandbagging, which is very hard to believe at this juncture.

Added to all this, without primitive shaders I don't see much difference in Vega over Polaris in geometry throughput! What does that mean? I think very little was done to improve Vega's throughput vs Polaris. Now, this part is part speculation and part what we have read in different articles about how the new geometry engines can get up to 11 tris per clock of culling through the use of primitive shaders. They introduced an entirely new pipeline stage, which at this point might just be something they tweaked in their shader array and how it functions (hence the NCU). nV's hidden surface removal and culling, by contrast, is automatic – something AMD couldn't get done within the timeline they had for Vega. And it's painfully obvious the tiled rasterizer on Vega is limited to specific code.

AMD is all about making things easier for developers, right? Doesn't look like it here, lol, but they needed these features to remain competitive with nV, so they got them in. The good thing is they are under developer control, so it may be possible they are more flexible than nV's; we just don't know at this point.

From what they have shown so far and the limitations of the new features, Vega looks to be very similar to Polaris, with the same limitations, when not factoring in primitive shaders and HBM at this point.

I think you have too much speculation going on at this point. On paper, Vega looks nothing like Polaris. Polaris looks like old GCN with a few tweaks, while Vega looks like it's actually totally different. I guess we will find out, but it doesn't look anything like Polaris.
 
I think you have too much speculation going on at this point. On paper, Vega looks nothing like Polaris. Polaris looks like old GCN with a few tweaks, while Vega looks like it's actually totally different. I guess we will find out, but it doesn't look anything like Polaris.


I haven't seen anything on Vega that shows it's anything more than a bigger Polaris with the addition of primitive shaders.

I get this from the performance they have shown so far, from their talks about how their new triangle discard works, and from how their tiled rasterizer works.

Even the TFLOPS figure, which is based on boost clocks of 1500 MHz, fits into that! Going by the way AMD has stated <300 watts, I am going to predict Vega will use around 250 watts, or more. If we were to use Polaris as an example for power consumption, we really should say it's around 300 watts. But I don't think they will do that again.

I'm not speculating on what I'm saying; I'm saying what I'm seeing.

The only thing I would be guessing about is whether AMD is showing us the best they have or not, and going by what they have done in the past, it is the best they can show.
 
We all know the quality of the drivers from ATI. They have a long, repeated history of delivering crap drivers. That's not going to change with Vega. I just hope it beats the 1080 to bring some real competition both in performance and price. Nvidia needs to be taken down a peg and get competitive again.
That's a bunk opinion that has been floated around for at least the last 2 years, if not more. That's a talking point from circa 2005, and you know it's b.s.
 
It would be disappointing if the max die only lands at ~1080 performance.

I guess it comes down to price/perf though. That's the real test.
 
That's a bunk opinion that has been floated around for at least the last 2 years, if not more. That's a talking point from circa 2005, and you know it's b.s.
It's exactly the reason I had to sell my 290X card.
 
It's exactly the reason I had to sell my 290X card.

A little older, but AMD's CrossFire 'solution' circa the HD 69xx series was awful. They've certainly improved, but Nvidia had multi-GPU down to an art at the time, and painless upgrades and performance have kept me there. Also G-Sync, but I'll re-evaluate with FreeSync 2.0, assuming AMD has the silicon to back it up and 4K, >60Hz, >27" panels in monitors supporting the technology exist...
 
A little older, but AMD's CrossFire 'solution' circa the HD 69xx series was awful. They've certainly improved, but Nvidia had multi-GPU down to an art at the time, and painless upgrades and performance have kept me there. Also G-Sync, but I'll re-evaluate with FreeSync 2.0, assuming AMD has the silicon to back it up and 4K, >60Hz, >27" panels in monitors supporting the technology exist...

Yep, my earlier 5770 CrossFire setup was worthless except for one game, Dirt 2.
 
Explain how it was awful. I had a 5970, and numerous people I know have run CrossFire ever since its inception. Nothing but praise. Seems like it never works for any of the NVidia fanboys. :)
 
It either stuttered badly or there was no support.
Only Dirt 2 worked every time and didn't stutter.
 
Stuttering comes from a too-aggressive Row Refresh Cycle Time (tRFC) in your system memory settings. It has nothing to do with CrossFire.
 
It doesn't matter what you have tested; the BS flies right past.
 
I think it's more along the lines of this simple setting preventing you from enjoying CrossFire, only to find out years later that your shit was not configured correctly... lol
 