They should, given what they charge and the churn rate, but my experience back then was the same as yours. I left after a semester because I felt university was almost a scam for many vocations. You can learn most of it yourself, then start gaining experience, and if you're good you don't need that paper. Only one of my friends actually ended up doing what they went to uni for; most were in average-to-shitty jobs while I was doing okay going the experience route. Engineering, medicine, etc., yeah, uni is for you. The rest, especially 'business studies' or whatever they call it... if you need to do a course to be interested in business, you are not cut out for it.

LOL, when I was in college they had us writing C code in Notepad and compiling via gcc in a command-line terminal. This was back in 2003-2004, and college budgets have only gotten worse. Do you honestly think college campuses are readily stocked with computers running $5-10k professional GPUs?
I am really hoping that we see a price decrease soon, or that independent reviews show a reason to buy the Radeon VII over the RTX 2080. In the interview he kept saying that he's glad to see new games taking advantage of all of Radeon VII's "new technology", but there is no new technology here.
No, they didn’t fall for it. That’s the point. He just made himself look like a petulant child to everyone.
So I was looking at RTX 2080s and the ones that I like are all over $800 lol. RTX isn't worth spending money on until it's mainstream, and DLSS I refuse to use; I want real resolution and I will never upscale. One can sugarcoat it all they like, but DLSS is upscaling in a nutshell, and I am not willing to sacrifice any quality whatsoever. It seems like the Radeon VII, with a triple-fan cooler, ends up being $150 or so cheaper than an RTX 2080.
On top of the price, I'm terrified of the "space invader" artifacts that have been showing up on dying RTX cards lol.
Thanks for the interview Kyle.
I like that it has 16GB of VRAM. It's good that they have a card with performance around a 2080 but more memory. If I were in the market for a card in that price or performance range, the 16GB would be very compelling. I'm not yet convinced that HBM2 is actually faster than GDDRx; if it were, I think we would see it in more graphics cards by now.
A few questions about how the memory is accessed in the paragraph below, perhaps [H] can submit this to AMD/nVidia for clarification?
Questions: They give us a bandwidth spec, but that is all of the chips being accessed at once, isn't it? (For both GDDRx and HBM.) The HBM2 memory interface is 4096 bits wide (sounds amazing!), but a texture element in a game is maybe what, 32 bits wide? It would likely be read out of a single HBM2 package, which has (I think) 256Gbps per package, or 32GB/s in bytes. That doesn't sound nearly as fast as 1TBps (which I believe is in bits, not bytes; converting to bytes, /8 = 128GB/s, which is obviously a figure counting multiple packages).

The same question could be put to the GDDR flavors as well. A 1080 Ti has 11GB of GDDR5X rated at 11Gbps. That already sounds slower than HBM2, even a single package (if I understood the Wikipedia specs correctly). But beyond individual chip speed/throughput, how do either of these technologies store individual textures? Is a texture spread out across all of the chips, or is it a complete item in an individual chip? If the latter were the case, then individual chip throughput would be a more important spec than total bandwidth, wouldn't it? If items in memory are spread out across all of the chips, then a total bandwidth measurement seems like it would be the most useful one.

I haven't found those questions asked or answered anywhere. We just listen to the marketing saying "Bajillions of GBs!", or how wide the memory bus is.
Hopefully the above questions, and my reasons for asking them, are articulated well enough to get some kind of answer from someone. Reading the Wikipedia specs on HBM2 and GDDR5X didn't answer them.
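In case it helps frame the question, here's the arithmetic as I understand it from the public specs. The headline numbers appear to be per-pin data rate times total bus width, so the 1080 Ti's "11Gbps" is per pin, not the whole card; treat the figures below as my assumptions, happy to be corrected:

```python
# Peak bandwidth = per-pin data rate * bus width / 8 (bits -> bytes).
# All figures are nominal aggregate peaks from public spec sheets,
# not per-chip throughput.

def peak_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Aggregate peak bandwidth in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

hbm2_total = peak_bandwidth_gbs(2.0, 4 * 1024)  # Radeon VII: 4 stacks x 1024 bits @ ~2Gbps/pin
hbm2_stack = peak_bandwidth_gbs(2.0, 1024)      # one HBM2 stack on its own
gddr5x     = peak_bandwidth_gbs(11.0, 352)      # 1080 Ti: 352-bit bus @ 11Gbps/pin

print(f"Radeon VII HBM2 total: {hbm2_total:.0f} GB/s")  # ~1024 GB/s (~1 TB/s)
print(f"Single HBM2 stack:     {hbm2_stack:.0f} GB/s")  # ~256 GB/s
print(f"1080 Ti GDDR5X total:  {gddr5x:.0f} GB/s")      # ~484 GB/s
```

So a single HBM2 stack (~256GB/s) and the whole 1080 Ti bus (~484GB/s) are in the same ballpark; the 1TB/s headline only applies when all four stacks are hit at once.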
Competition is good; can't wait for the review on this one. [H], please add a 1080 Ti to the review, since it has more VRAM than a 2080 and about the same performance.
I second this. 4K and VR are the target market for a 16GB card.

Might be an edge-case scenario but, if you can swing it, could you do some VR testing as well? Maybe just general-impressions-type testing? (Nothing major if it impacts the normal testing schedule.)
I'm looking to replace a 1080Ti and a Pascal Titan to get all nVidia kit out of my builds and this may be the card that allows me to do that.
The texture is going to be some size, probably several megabytes, and all of that data will be striped across all the memory channels, just like RAID 0. It's a single interface, x bits wide.
That 1TB/sec is in bytes, not bits. Yes, an entire terabyte per second. Sounds insane, because it honestly is.
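To make the striping point concrete, here's a toy sketch of how a controller might map addresses to channels. Real GPU memory controllers interleave at some fixed granularity and usually hash address bits to avoid pathological strides, so the 256-byte granularity and 16-channel count here are purely illustrative assumptions:

```python
# Toy channel interleaving: consecutive 256-byte chunks of the address
# space rotate across channels, so any large texture is spread over
# every channel and can be fetched from all of them in parallel.

INTERLEAVE_BYTES = 256   # assumed granularity, illustrative only
NUM_CHANNELS = 16        # illustrative; HBM2 exposes 8 channels per stack

def channel_of(addr: int) -> int:
    """Which channel serves the chunk containing this byte address."""
    return (addr // INTERLEAVE_BYTES) % NUM_CHANNELS

# A 4MB texture touches every channel, so its effective read bandwidth
# is the sum of all channels, not the throughput of one chip:
base, size = 0x1000000, 4 * 1024 * 1024
channels_hit = {channel_of(base + off)
                for off in range(0, size, INTERLEAVE_BYTES)}
print(f"4MB texture is striped across {len(channels_hit)} channels")
```

That's why the total bandwidth number is the one that matters for big transfers; per-chip throughput only limits you on tiny scattered accesses.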
Faster, better memory and superior Vulkan performance seem better than slow ray tracing and supposed DLSS support. Also, if a title supports Rapid Packed Math, that speeds it up quite a bit on AMD. You also have to consider the reportedly large RTX failure rate; unless these Radeon cards start dying after release, that alone would drive me to buy the Radeon over the 2080 at the same price. Just a matter of what is important to you.
DLSS is looking to be an even bigger marketing scam than RTX.
Just don't bash it if you plan on doing an Nvidia review
I am going to guess the RVII will be an excellent VR card.
I really hope Nvidia is stupid enough to put this practice into writing. Like, really really hope they are dumb enough to send a memo to NDA signees reminding them that it's praise be to RTX or no more samples.

This is already true. Read TechPowerUp articles sometimes; in the last few Radeon VII news pieces they were already throwing jabs at it before even reviewing it. Like lousy insults. I have already seen a few people call them out. They are going out of their way to pander to Nvidia. Must be tough lol! Very few tech journalists with balls. I am so glad [H] didn't sign the NDA.
What I would like to know is: is anyone here going to get one of these? If Kyle gets one, maybe he can try out VR in his review to see how it goes. I have been thinking of getting an HTC Vive, and I know my RX 480 won't cut the mustard. I can afford one of these cards, but it would have to be able to handle VR gaming. From what I have read, Vega 64 also isn't much chop in VR. So I'm interested to see how the Radeon VII goes.
I missed this interview earlier and just caught up.
I hope they are able to get someone to make a DisplayPort 1.4 to HDMI 2.1 adapter; it would go a long way toward boosting these cards. But it mainly matters whether they can get VRR, and perhaps even the low-latency mode, working through the adapter. And what kind of latency is introduced just by using an adapter in the first place? Trivial, or something people might notice?
It was also interesting reading an answer that actually gave the die size of 7nm Vega 2: 331mm², a nice midrange GPU size. For comparison:
2060/2070 - 445mm²
2080 - 545mm²
2080 Ti - 754mm²
These are all really large GPU dies. The 2080 Ti is insanely large to me; I think it might be the single largest consumer GPU die ever. But imagine keeping that same die size and scaling up that much more horsepower on 7nm. The performance would be insane even on the same architecture.
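Playing with rough numbers on that thought (transistor counts and die areas are public specs; the 754mm² 7nm die at the end is pure back-of-the-envelope speculation, not anything announced):

```python
# Back-of-the-envelope density scaling from public die specs.
# (transistors in billions, die area in mm^2)
chips = {
    "Vega 10 (14nm)": (12.5, 495),
    "Vega 20 (7nm)":  (13.2, 331),
    "TU102 (12nm)":   (18.6, 754),
}

for name, (xtors_b, area_mm2) in chips.items():
    print(f"{name}: {xtors_b * 1000 / area_mm2:.1f}M transistors/mm^2")

# Hypothetical: a TU102-sized (754mm^2) die built at Vega 20's 7nm density.
vega20_density = 13.2 / 331  # billions of transistors per mm^2
print(f"754mm^2 at 7nm density: ~{754 * vega20_density:.0f}B transistors")
# ~30B vs TU102's 18.6B, i.e. the "insane horsepower" headroom.
```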
Whatever comes after Turing/Vega/Navi has the potential to cut through 4K content with ease, so I hope AMD also starts expanding into more new effects in the future, like more potent ray tracing.
Even if Nvidia's implementation is not perfect and depends on DLSS working, their efforts have made it clear that ray tracing is something I'd like to see in future cards and added to games for better effects.
I am interested to see how Navi does, but I'm worried it won't target higher-tier performance until much later. What if Navi is only a 200mm² card? Even if the architecture improvements are there, it might be too small to overcome the brute force of larger dies. If Navi were the same die size as Vega 2 or larger (which, again, at only 331mm² is not that big), then it would have the potential to go after the 2080 Ti or higher.
Am I the only one wondering about the 331mm² @ 300W TDP @ 7nm?
For comparison, the 2080 it's being measured against is 545mm² @ 215W TDP @ 12nm.
I get that we are now in a phase where every company calculates TDP differently. Is it the HBM2 with its 4096-bit memory interface?
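Putting rough numbers on that comparison (board TDPs and die areas as published; caveat that board TDP includes memory and VRM power, so the die power density is overstated on both sides):

```python
# Rough power-density comparison from the published numbers.
# Board TDP covers memory + VRM too, so these overstate die power,
# but the relative gap is the interesting part.

cards = {
    "Radeon VII (7nm)": {"tdp_w": 300, "die_mm2": 331},
    "RTX 2080 (12nm)":  {"tdp_w": 215, "die_mm2": 545},
}

for name, c in cards.items():
    print(f"{name}: {c['tdp_w'] / c['die_mm2']:.2f} W/mm^2")

# Radeon VII (7nm): 0.91 W/mm^2
# RTX 2080 (12nm):  0.39 W/mm^2
```

Over twice the power per square millimetre; part of why the small 7nm die still ships with a beefy triple-fan cooler.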
I'm a layman, but remember that same-node generation? I think it was Nvidia's Maxwell that got a big performance boost without a die shrink, significantly because they started using internal hardware to compress the image before processing it. I didn't think AMD ever copied that tech. I think it's something like 75% image compression, and so far no one has complained about it. I'll see if I can find a link.
Edit: added link below, see under color compression. I've just never heard about the AMD iteration.
Edit 2: nvm, I guess they did add delta color compression with GCN 1.2, around the R9 285. With HBM, bandwidth is not so constrained, but I am always a fan of greater efficiency. With 4K gaming and the limits of Moore's law, these kinds of technologies will become very important.
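Edit 3: for anyone curious what delta compression actually buys you, here's a toy sketch of the idea. This is generic delta encoding on a pixel run, not AMD's or Nvidia's actual hardware scheme (real DCC works on fixed-size tiles with per-tile metadata):

```python
# Toy delta color compression: store the first value of a tile verbatim,
# then only the difference to the previous value. Smooth gradients and
# flat regions (very common in rendered frames) produce tiny deltas that
# fit in far fewer bits, cutting the bandwidth needed to move the tile.

def delta_encode(tile: list[int]) -> tuple[int, list[int]]:
    base = tile[0]
    deltas = [b - a for a, b in zip(tile, tile[1:])]
    return base, deltas

def delta_decode(base: int, deltas: list[int]) -> list[int]:
    tile = [base]
    for d in deltas:
        tile.append(tile[-1] + d)
    return tile

# A smooth 16-pixel gradient: every delta is 3, so the deltas need only
# a couple of bits each instead of a full 8 bits per pixel.
tile = list(range(40, 40 + 3 * 16, 3))
base, deltas = delta_encode(tile)
assert delta_decode(base, deltas) == tile
print(f"base={base}, deltas={deltas}")
```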
Hah, oh boy... Gibbo at OCUK has confirmed only 100 parts for ALL of the UK.
That "only 5,000 units" rumor sounds like it may be vastly optimistic.