Nvidia SIGGRAPH 2018 Livestream Starts at 4 P.M. PT

It sounds funny when you enable the audio on all of the stream links posted and quoted all at once.
 
I wonder how useful RTX will be in practice - 10 Grays/sec is really fast; it means that an 8K scene with 10 bounces and 128 samples per pixel renders in seconds, not tens of minutes. BUT, and it's a huge BUT, the scene will have to fit entirely in GPU memory for it to be fast - accelerating BVH lookups and intersection calculations only gets you so far if you have to go back to the CPU for lighting, material, and texture calculations. You can't split up the data either - secondary rays can go anywhere, and there would be a huge loss of performance if you had to go retrieve models from main memory every other secondary ray.

Being able to render Cornell boxes and teapots at high speed is cute, and certainly has positive implications for visualization and product design workflows, but the recent island data set from Disney shows that production rendering is done on really, really big datasets, not the <8GB stuff that hobbyists and small firms deal with. (The island requires over 110GB of memory to render, and it will be a long, long time before GPUs with 128GB of VRAM exist.)
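A quick back-of-the-envelope sketch of that "renders in seconds" claim, using the resolution, bounce, and sample counts from the post above as assumptions (nothing here is an official Nvidia benchmark):

```python
# Back-of-envelope: how long does an 8K frame take at 10 Grays/sec?
# Assumed workload (from the post above, not a measured benchmark):
#   8K frame ~ 7680 x 4320 pixels, 128 samples per pixel, 10 bounces per path.
pixels = 7680 * 4320          # ~33.2 million pixels
samples_per_pixel = 128
rays_per_sample = 10          # one ray per bounce, ignoring shadow rays

total_rays = pixels * samples_per_pixel * rays_per_sample
rate = 10e9                   # 10 Grays/sec claimed peak

print(f"total rays: {total_rays / 1e9:.1f} Grays")        # ~42.5 Grays
print(f"time at 10 Grays/s: {total_rays / rate:.1f} s")   # ~4.2 s
```

Of course, that is the best case where every ray is served from on-card memory; the moment the scene spills out of VRAM, the arithmetic above stops being the bottleneck.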


One of the slides had 48+48, which I'm pretty sure referred to memory, so the Quadro will have up to 48GB, and the "+" is NVLink for combining other cards' memory. So pretty close.

Not sure if they can combine more than 2 cards.
 
One of the slides had 48+48, which I'm pretty sure referred to memory, so the Quadro will have up to 48GB, and the "+" is NVLink for combining other cards' memory. So pretty close.

Not sure if they can combine more than 2 cards.

Pretty close. It's hard to guess how well things will scale over NVLink (it's much faster than PCIe but much slower than HBM), but I think the island dataset will encourage new research in memory-scalable algorithms - my impression was that the research community was pretty unaware of the size of production data before the island scene was released.
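For a rough sense of scale on "much faster than PCIe but much slower than HBM", here are approximate per-direction peak bandwidths (ballpark public figures; the local-versus-remote ratio is the point, not the exact numbers):

```python
# Approximate peak bandwidths in GB/s (ballpark figures, one direction).
bandwidth_gbs = {
    "PCIe 3.0 x16": 16,
    "NVLink 2.0, single link": 25,
    "NVLink 2.0, 6 links (GV100)": 150,
    "GDDR6 on Quadro RTX 8000 (local VRAM)": 672,
    "HBM2 on GV100-class cards (local VRAM)": 900,
}

for name, bw in bandwidth_gbs.items():
    print(f"{name:40s} ~{bw:4d} GB/s")
```

So even the best NVLink configuration is several times slower than just reading local VRAM, which is why memory-scalable and out-of-core algorithms still matter.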

As more of the presentation unfolds it seems pretty clear that they are targeting real-time design and visualization, where the data are much smaller and being able to physically simulate lighting is useful. Those cars and buildings can't be more than a few GB in size, so they would even fit on a smaller version of the Quadro (48GB of GDDR6 is not cheap!)
 
The NASDAQ press release has some more info:

https://www.nasdaq.com/press-releas...tx-worlds-first-raytracing-gpu-20180813-00977

I assume the RTX5000 with half the RAM is what is going to be the 2080 announced next week.
If the rumors are true and a Titan also gets revealed then it's probably going to be a crippled RTX6000.

Not sure. I don't think it will have all 4608 cores; that will be reserved for the Ti. Likely a little over 3000 cores for the 2080, and the full core count, or a little less, for the Ti.
 
Not sure. I don't think it will have all 4608 cores; that will be reserved for the Ti. Likely a little over 3000 cores for the 2080, and the full core count, or a little less, for the Ti.
I don't think we are disagreeing since the RTX5000 is listed as 3072 CUDA cores in that press release.
 
Founders' Edition Leather Jackets, only.

they're "faceted"



and none of the embedded streams work...

edited speeling
 
Bet he wishes he had a CPU to talk about
He doesn't need to compete with CPUs. Whatever the server is running (Intel, AMD, whatever), it'll have an NVIDIA GPU plugged in running Tensor and RT cores. No need for competition, this is NVIDIA, remember?
 
Doesn't Nvidia normally wait a few months after releasing Quadros to release GeForce? Might be different this time.
 
Doesn't Nvidia normally wait a few months after releasing Quadros to release GeForce? Might be different this time.

Maybe we get the leftover Voltas now that they have Turing for the professionals.
 
I wonder if he was feeling alright... I didn't hear a single "the more you buy, the more you save"...
 
Maybe we get the leftover Voltas now that they have Turing for the professionals.

Turing looks like it is just an evolution of Volta, still having tensor cores. We will see; apparently Nvidia is teasing gaming GPUs for Gamescom. WCCFTECH has an article about them releasing a teaser picture.
 
So it's $10K, $6.3K, and $2.3K for the RTX 8000, 6000, and 5000 respectively. Volta cost $8K.
 
One of the slides had 48+48, which I'm pretty sure referred to memory, so the Quadro will have up to 48GB, and the "+" is NVLink for combining other cards' memory. So pretty close.

Not sure if they can combine more than 2 cards.

They announced a server with eight of the high end RTX 8000 cards. That'd be 384 GB of memory.

I'm not even sure that that is really a limit either, as they have NVSwitch (PDF) to gang even more GPUs together. So even if each GPU only has one NVLink bus, NVSwitch would permit up to 16 GPUs.

For reference, GV100 has six NVLink buses, and combined with NVSwitch, a 90-GPU system could be built with every GPU only a single hop through an NVSwitch away from any other. That isn't the true limit either, as each NVSwitch can also communicate with another NVSwitch, but the poor bandwidth and latency between nodes wouldn't make it worth trying.
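Quick arithmetic on how the capacity adds up in those configurations, assuming the pooled total is simply the sum of each card's 48GB (which glosses over any scene data that has to be duplicated per GPU):

```python
# Aggregate VRAM if capacity can be pooled over NVLink / NVSwitch.
# Assumes a simple sum of per-card memory (ignores duplicated data).
vram_per_card_gb = 48   # Quadro RTX 8000

for gpus in (2, 8, 16):
    print(f"{gpus:2d} x RTX 8000 -> {gpus * vram_per_card_gb} GB pooled")
# 2 -> 96 GB, 8 -> 384 GB (the announced server), 16 -> 768 GB (the NVSwitch figure above)
```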

So it's $10K, $6.3K, and $2.3K for the RTX 8000, 6000, and 5000 respectively. Volta cost $8K.

I thought Quadro GV100 was $8999.
 
The price is reasonable, assuming it can really bring something to the table with that DirectX Raytracing unit. That seems to be Nvidia's only hope for future consumer growth, now that games already look pretty impressive on top-end Pascal released 1.5 years ago.

I would hope that the gaming versions of these (October) will pack maybe half the tensor/RT performance, along with a more reasonable die size/price. I imagine they may make RT acceleration a top-end-only feature for Turing GT102, and POSSIBLY GT104, consumer GPUs. Everything below that would just be a check-box wasting die space.

But with this it's pretty clear where both Nvidia's professional and consumer lines are headed, once the process nodes can catch up.
 
I wonder how useful RTX will be in practice - 10 Grays/sec is really fast; it means that an 8K scene with 10 bounces and 128 samples per pixel renders in seconds, not tens of minutes. BUT, and it's a huge BUT, the scene will have to fit entirely in GPU memory for it to be fast - accelerating BVH lookups and intersection calculations only gets you so far if you have to go back to the CPU for lighting, material, and texture calculations. You can't split up the data either - secondary rays can go anywhere, and there would be a huge loss of performance if you had to go retrieve models from main memory every other secondary ray.
I read that 10 Grays/sec more as a fill rate that won't be achievable in most conditions. Even if the dataset fits in memory, the memory access is likely incoherent, limiting performance. As you said, any bounces will curtail performance significantly. Still a nice feature, but the rest of the hardware doesn't seem built to really push the feature. It's just another relatively dumb tensor core that can brute-force highly linear workflows in parallel.
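One way to see why the peak figure behaves like a fill rate: if traversal is memory-bound, the sustained ray rate is roughly bandwidth divided by bytes touched per ray. The per-ray footprints below are made-up illustrative numbers, not measurements:

```python
# Rough memory-bound ceiling on sustained ray throughput.
# bytes_per_ray is purely illustrative: BVH nodes plus triangles actually
# fetched from DRAM once incoherent secondary rays stop hitting cache.
bandwidth_bytes_per_s = 672e9   # ~672 GB/s of GDDR6 on a Quadro RTX 8000

for bytes_per_ray in (256, 1024, 4096):
    grays = bandwidth_bytes_per_s / bytes_per_ray / 1e9
    print(f"{bytes_per_ray:5d} B/ray -> ~{grays:4.2f} Grays/s sustained")
```

Once traversal traffic actually goes to DRAM instead of cache, even small per-ray footprints land well below the quoted 10 Grays/s.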

Being able to render Cornell boxes and teapots at high speed is cute, and certainly has positive implications for visualization and product design workflows, but the recent island data set from Disney shows that production rendering is done on really, really big datasets, not the <8GB stuff that hobbyists and small firms deal with. (The island requires over 100GB of memory to render, and it will be a long, long time before GPUs with 128GB of VRAM exist.)
I would be curious to see whether AMD's HBCC tech would have any effect on this by reducing the effective memory footprint. NVLink supports system memory access, but the hardware paging is a bit different and only worked with compatible (Power) architectures last I checked. Vega on a Threadripper/Epyc with all the attached system memory channels might be able to pull it off with the paging, setting aside ray performance for the moment, which may be insignificant compared to the random access. Nvidia might be able to use the same processor, but has x86 issues with the memory controller. NVSwitch and multiple GPUs are the only way to scale memory capacity, possibly without paging, as it should be more along the lines of direct access with limited bandwidth.

A quick search didn't turn up any tests of HBCC with the island data set.

One of the slides had 48+48, which I'm pretty sure referred to memory, so the Quadro will have up to 48GB, and the "+" is NVLink for combining other cards' memory. So pretty close.

Not sure if they can combine more than 2 cards.
They can combine all the cards they want until the switch runs out of aggregate bandwidth. The problem is that as the number of cards increases, they become increasingly limited by effective memory bandwidth as it approaches the NVLink bandwidth of 100GB/s or thereabouts - far from the 600+GB/s the cards have individually. Limit the distance rays can travel and paging would be more realistic and the performance likely acceptable.
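A simple mixing model for that bandwidth cliff: if some fraction of memory traffic has to go to a peer card over NVLink instead of local GDDR6, effective bandwidth collapses toward the link speed very quickly. The fractions and bandwidths below are illustrative assumptions:

```python
# Effective bandwidth when a fraction of traffic is remote over NVLink.
# Harmonic mix of local and remote bandwidth; figures are illustrative.
local_gbs = 672.0    # on-card GDDR6
remote_gbs = 100.0   # NVLink to a peer GPU, per the post above

for remote_fraction in (0.0, 0.1, 0.25, 0.5, 1.0):
    eff = 1.0 / ((1.0 - remote_fraction) / local_gbs
                 + remote_fraction / remote_gbs)
    print(f"{remote_fraction:4.0%} remote -> ~{eff:5.0f} GB/s effective")
```

Even at 25% remote traffic the pool already runs at well under half the local bandwidth, which is why limiting how far rays can travel (and therefore where they touch memory) helps so much.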
 
Fuck it. It's a monopoly. Consoles be damned. ATI used to be able to follow right away or lead off back in the day. This is just so damn sad. Watching them since AMD took over slowly fall down the rabbit hole to fucksville has been horrible for those of us from back in the heyday of the GPU/CPU wars. My last good card was the Radeon X1900 XTX. Ringbus! TruForm! All kinds of new innovative stuff that Nvidia introduces today used to come out of ATI at their launches.

I keep reading that 2020 is going to be their new architecture, a kick-ass card to really compete. YOU'RE SCREWED, AMD, in the GPU department. Why? Oh, a little thing called INTEL + RAJA + 100 times the R&D investment you don't have, in the same exact time frame. The guy didn't have the R&D he has now. He said the hardware and software guys didn't work well together over at AMD. GPU wars again? Hardly. More like an outright ass-kicking from the front and back. Remember, Fury was going to kick ass and right the ship. Polaris kept them afloat. Vega competed 15 months after the 1080 launched.
Now we are getting all kinds of slides talking launch dates for Navi at 7nm or Vega at 7nm, blah blah blah. And no, they cannot take R&D money from the CPU side to help; they have to keep the pressure on Intel CPU-wise.

Sorry, rant over. I am an ol' ATI fan that has watched this shit for almost 20 years, and it is literally disgusting what has happened to them.
 
Fuck it. It's a monopoly. Consoles be damned. ATI used to be able to follow right away or lead off back in the day. This is just so damn sad. Watching them since AMD took over slowly fall down the rabbit hole to fucksville has been horrible for those of us from back in the heyday of the GPU/CPU wars.

So this is some great glorious epiphany that just came to you? Sorry, but it's going to take a bit longer to clean up Raja's fkn mess. The whole RTG is still being re-organized, and it's going to take some time. Put your little rant behind you and try to be a little more positive. AMD doesn't always have to counter with a product on every launch to be successful - Ryzen shows that. With increased Ryzen revenue, maybe AMD can steer some of that to the RTG group. Yes, we are years away, but if anyone can do it, Su can.

Yes, I loved my 9700 PRO too.
 
So, trying to livestream it didn't work. Then, I realized I was using my AMD gpu computer. I swapped over to my Nvidia rig and, lo and behold, I saw this guy in a leather jacket shilling a new card.

I think they're actively blocking non-Nvidia users. GPP?

;)

Okay, I was totally kidding. My life is too busy to watch a sales pitch. That's why I [H].
 
Didn't watch the stream, but reading through this thread I learned a little bit more about ray tracing. It's beginning to become clearer why NV has had some unusual VRAM sizes on their cards. I've wondered, because the clocks are not even close to enough to keep a playable frame rate for current games when they use it at 4K with max settings. Most games barely tap 4-6GB at 1440p, and that's exaggerating a little. So ray tracing will consume this VRAM, then. So the remaining tricks that we're all guessing about are the new clocks/cores/frequencies. All the VRAM in the world will still mean nothing if you can't process that information in an efficient time span.
 
BTW, checking some other sites last night I did see that NV is going to be holding a special GeForce event right before Gamescom.
 
I wonder how useful RTX will be in practice - 10 Grays/sec is really fast; it means that an 8K scene with 10 bounces and 128 samples per pixel renders in seconds, not tens of minutes. BUT, and it's a huge BUT, the scene will have to fit entirely in GPU memory for it to be fast - accelerating BVH lookups and intersection calculations only gets you so far if you have to go back to the CPU for lighting, material, and texture calculations. You can't split up the data either - secondary rays can go anywhere, and there would be a huge loss of performance if you had to go retrieve models from main memory every other secondary ray.

Being able to render Cornell boxes and teapots at high speed is cute, and certainly has positive implications for visualization and product design workflows, but the recent island data set from Disney shows that production rendering is done on really, really big datasets, not the <8GB stuff that hobbyists and small firms deal with. (The island requires over 100GB of memory to render, and it will be a long, long time before GPUs with 128GB of VRAM exist.)

We're almost halfway there to fitting that island's attributes in a single card's VRAM at 48GB. And for FX artists with lower fidelity requirements, it's much easier to accomplish. So yes, I see this gaining ground with lower-end FX artists, and eventually movie artists.
 
When Nvidia demoed the SW ray tracing last March, some people complained it would take years for that to be rendered on a single card... It took less than 6 months.

Food for thought
 
So many people here are not getting what SIGGRAPH is about (Quadro/Tesla, not GeForce).
Keep posting, you crack me up :ROFLMAO::ROFLMAO::ROFLMAO:

And while you are at it...google what the P6000 launched at...might give you a reality check.

"Gamers"...so entitled :ROFLMAO::ROFLMAO::ROFLMAO:
 
That's what I get for forcing myself to periodically not look at things on the 'net. I should know better - Kyle, Cagey, Brent, megalith, and the whole team cover just about everything.

Thanks :)
 