Discussion in 'HardForum Tech News' started by cageymaru, Aug 13, 2018.
It sounds funny when you enable the audio on all of the stream links posted and quoted all at once.
One of the slides had 48+48, which I'm pretty sure referred to memory, so the Quadro will have up to 48GB, and the "+" is for NVLink combining another card's memory. So pretty close.
Not sure if they can combine more than 2 cards.
The NASDAQ press release has some more info:
I assume the RTX5000 with half the RAM is what is going to be announced as the 2080 next week.
If the rumors are true and a Titan also gets revealed then it's probably going to be a crippled RTX6000.
Pretty close. It's hard to guess how well things will scale over NVLink (it's much faster than PCIe but much slower than HBM), but I think the island dataset will encourage new research in memory-scalable algorithms - my impression was that the research community was pretty unaware of the size of production data before the island scene was released.
As more of the presentation unfolds it seems pretty clear that they are targeting real-time design and visualization, where the data are much smaller and being able to physically simulate lighting is useful. Those cars and buildings can't be more than a few GB in size, so they would even fit on a smaller version of the Quadro (48GB of GDDR6 is not cheap!)
Not sure. I don't think it will have all 4608 cores. That will be reserved for the Ti. Likely a little over 3000 cores for the 2080, and the full core count or a little less for the Ti.
I don't think we are disagreeing since the RTX5000 is listed as 3072 CUDA cores in that press release.
GeForce is going to be whatever cut down stuff is left, after they make the Quadros.
Bet he wishes he had a CPU to talk about
and none of the embedded streams work...
He doesn't need to compete with CPUs. Whatever the server is running (Intel, AMD, whatever), it'll have an NVIDIA GPU plugged in running Tensor and RTX cores. No need for competition, this is NVIDIA, remember?
MY EYES, THE REFLECTION, GAHHH
Doesn't nvidia normally wait until a few months after releasing quadros to release geforce? Might be different this time.
Maybe we get the leftover Voltas now that they have Turing for the professionals.
Can’t wait till these new puppers get out in the wild.
I wonder if he was feeling alright... I didn't hear a single "the more you buy, the more you save"...
Turing looks like it is just an evolution of Volta, still having tensor cores. We will see; apparently Nvidia is teasing gaming GPUs for Gamescom. WCCFTECH has an article about them releasing a teaser picture.
So it's $10K, $6.3K, and $2.3K for the RTX 8000, 6000, and 5000 respectively. Volta cost $8K.
They announced a server with eight of the high end RTX 8000 cards. That'd be 384 GB of memory.
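A quick sanity check on that figure (trivial, but for anyone counting):

```python
# Aggregate VRAM across the announced 8-GPU Quadro RTX 8000 server.
cards = 8
vram_per_card_gb = 48  # RTX 8000 memory per card
total_gb = cards * vram_per_card_gb
print(total_gb)  # 384
```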
I'm not even sure that's a real limit either, as they have NVSwitch (PDF) to gang even more GPUs together. So even if each GPU only has one NVLink bus, NVSwitch would permit up to 16 GPUs.
For reference, GV100 has six NVLink buses, and combined with NVSwitch, a 90-GPU system could be built with every GPU only a single hop through an NVSwitch away from every other. That isn't the true limit either, as each NVSwitch can also communicate with another NVSwitch, but the poor bandwidth and latency between nodes wouldn't make it worth trying.
I thought Quadro GV100 was $8999.
The price is reasonable, assuming it can really bring something to the table with that DirectX Raytracing unit. That seems to be Nvidia's only hope for future consumer growth, now that games already look pretty impressive on top-end Pascal released 1.5 years ago.
I would hope that the gaming versions of these (October) will pack maybe half the tensor/RT performance, along with a more reasonable die size/price. I imagine they may make RT acceleration a top-end-only feature for Turing GT102, and POSSIBLY GT104 consumer GPUs. Everything below that would just be a check-box wasting die space.
But with this it's pretty clear where both Nvidia's professional and their consumer lines are headed, once the process nodes can catch-up.
Waiting for the GeForce versions so I can run the RealHack registry hack and make software think it's a Quadro.
I read that 10 Grays/sec more as a fill rate that won't be achievable in most conditions. Even if the dataset fits in memory, the memory access is likely incoherent and limiting performance. As you said, any bounces will curtail performance significantly. Still a nice feature, but the rest of the hardware doesn't seem built to really push the feature. It's just another relatively dumb tensor core that can brute force highly linear workflows in parallel.
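For context on that 10 Grays/sec number, a back-of-the-envelope sketch (assuming the peak figure held, which, as noted above, incoherent access and bounces will prevent):

```python
# Rays per pixel per frame at 4K 60 fps if the 10 Grays/s peak were sustained.
rays_per_sec = 10e9
pixels = 3840 * 2160      # 4K resolution
fps = 60
rays_per_pixel_per_frame = rays_per_sec / (pixels * fps)
print(round(rays_per_pixel_per_frame, 1))  # ~20.1
```

Roughly 20 rays per pixel per frame at best; real scenes with multiple bounces and incoherent traversal will land well below that.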
It would be curious to see if AMD's HBCC tech would have any effect on this to reduce effective memory footprint. NVLink supports system memory access, but the hardware paging is a bit different and only worked with compatible (Power) architectures last I checked. Vega on a Threadripper/Epyc with all the attached system memory channels might be able to pull it off with the paging. Setting aside ray performance for the moment, which may be insignificant compared to random access. Nvidia might be able to use the same processor, but has x86 issues with the memory controller. NVSwitch and multiple GPUs are the only way to scale memory capacity, possibly without paging as it should be more along the lines of direct access with limited bandwidth.
A quick search didn't turn up any tests of HBCC with the island data set.
They can combine all the cards they want until the switch runs out of aggregate bandwidth. The problem is that as the number of cards increases, they become increasingly limited by effective memory bandwidth as it approaches the NVLink bandwidth of 100 GB/s or thereabouts, far from the 600+ GB/s the cards have individually. Limit the distance rays can travel and paging would be more realistic and performance likely acceptable.
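A minimal sketch of that scaling argument, using the ballpark ~600 GB/s local and ~100 GB/s NVLink figures from the post (assumptions, not measured specs):

```python
# Harmonic-mean model of effective bandwidth when some fraction of
# memory accesses must cross NVLink instead of hitting local VRAM.
LOCAL_BW_GBS = 600.0   # on-card memory bandwidth (ballpark)
NVLINK_BW_GBS = 100.0  # NVLink bandwidth to a peer card (ballpark)

def effective_bw(remote_fraction):
    """Effective bandwidth given the fraction of accesses that are remote."""
    return 1.0 / ((1.0 - remote_fraction) / LOCAL_BW_GBS
                  + remote_fraction / NVLINK_BW_GBS)

for f in (0.0, 0.1, 0.5, 1.0):
    print(f, round(effective_bw(f), 1))  # 600.0, 400.0, 171.4, 100.0
```

Even 10% remote accesses cuts effective bandwidth by a third, which is why limiting how far rays can travel (and hence how much traffic goes remote) matters so much.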
Fuck it. It's a monopoly. Consoles be damned. ATI used to be able to follow right away or lead off back in the day. This is just so damn sad. Watching them since AMD took over, slowly falling down the rabbit hole to fucksville, has been horrible for those of us from back in the heyday of the GPU/CPU wars. My last good card was the Radeon X1900 XTX. Ringbus! TruForm! All kinds of the new innovative stuff that Nvidia introduces today used to come out of ATI at their launches.
I keep reading 2020 is going to be their new architecture, a kick-ass card to really compete. YOU'RE SCREWED, AMD, in the GPU department. Why? Oh, a little thing called INTEL + RAJA + 100 times the R&D investment you don't have, in the same exact time frame. The guy didn't have the R&D he has now. He said hardware and software guys didn't work well together over at AMD. GPU wars again? Hardly. More like an outright ass-kicking from the front and back. Remember Fury was going to kick ass, right the ship? Polaris kept them afloat. Vega competed 15 months after the 1080 launched.
Now we are getting all kinds of slides talking launch dates for Navi 7nm or Vega at 7nm, blah blah blah. And no, they cannot take R&D money from the CPU side to help; they have to keep the pressure on Intel CPU-wise.
Sorry, rant over. I am an ol' ATI fan that has watched this shit for almost 20 years, and it is literally disgusting what has happened to them.
So this is some great glorious epiphany that just came to you? Sorry, but it's going to take a bit longer to clean up Raja's fkn mess. The whole RTG is still being re-organized and it's going to take some time. Put your little rant behind you and try to be a little more positive. AMD doesn't always have to counter with a product on every launch to be successful - Ryzen shows that. With increased Ryzen revenue, maybe AMD can steer some of that to the RTG group. Yes, we are years away, but if anyone can do it, Su can.
Yes, I loved my 9700 PRO too.
So, trying to livestream it didn't work. Then, I realized I was using my AMD gpu computer. I swapped over to my Nvidia rig and, lo and behold, I saw this guy in a leather jacket shilling a new card.
I think they're actively blocking non-Nvidia users. GPP?
Okay, I was totally kidding. My life is too busy to watch a sales pitch. That's why I [H].
Didn't watch the stream, but reading through this thread I learned a little bit more about ray tracing. It's beginning to become clearer why NV has had some unusual VRAM sizes on their cards. I've wondered, because the clocks are not even close enough to keep a playable frame rate for current games when they use it at 4K with max settings. Most games barely tap 4-6 GB in 1440p, and that's exaggerating a little. So ray tracing will consume this VRAM then. So the remaining tricks that we're all guessing about are the new clocks/cores/frequencies. All the VRAM in the world will still mean nothing if you can't process that information in an efficient time span.
BTW, checking some other sites last night, I did see that NV is going to be holding a special GeForce event right before Gamescom.
We're almost halfway there to fitting that island's attributes in a single card's VRAM at 48 GB. And for FX artists with lower fidelity requirements, it's much easier to accomplish. So yes, I see this gaining ground with lower-end FX artists, and eventually movie artists.
Jensen dramatizing: "48-gigabyte framebuffer, it's just gigantic... it's like 4 times the size of a high-end Quadro board... for just... $10,000..." (pause... wait for it...) "IT'S A STEAL!" Crowd laughing.
Where have you been?
Funny his speech is in my hometown. Hopefully something interesting to say.
So RTX = Ray Tracing Xtreme?
I would say Ray Tracing Xcelerator.
Should have gotten a 8800gt
I'm in Vancouver... should I go down to this?
When Nvidia demoed the SW ray tracing last March, some people complained it would take years for that to be rendered on a single card... It took less than 6 months.
Food for thought
Ummmm, the 8800 GT was released over a year later than the X1900 XTX...
So many people here not getting what SIGGRAPH is about (Quadro/Tesla, not GeForce).
Keep posting, you crack me up
And while you are at it... google what the P6000 launched at... might give you a reality check.
That's what I get for forcing myself to periodically not look at things on the 'net. I should know better, Kyle, Cagey, Brent, megalith, and the whole team cover just about everything.