SLI is Dead. Well, Try 1,280 GPUs!!!

Nausicaa

Weaksauce
Joined
Mar 9, 2015
Messages
123
Playing Space Invaders for eternity sounds like hell to me.

On a serious note, you know why this exists: it's for game streaming companies like Shadow, and Nvidia's own service.
Today Google announced they're using AMD with Vulkan and Linux. Glad they chose something more open rather than Nvidia, which doesn't allow anything but Teslas in data centers.
 
D

Deleted member 126051

Guest
Well, that could net me about 80-90 million work units a day on Distributed.net's RC5-72 challenge...

Using the low-end figure.
It'd take me from where I am now (#229 overall) and rocket me up to #60 in one day.
It'd take about two weeks to hit #1.
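For scale, the implied per-GPU rate works out like this (a quick Python back-of-envelope using the low-end 80 million figure above; nothing measured):

# Implied per-GPU throughput on distributed.net's RC5-72,
# using the low-end estimate of 80 million work units/day quoted above.
TOTAL_UNITS_PER_DAY = 80_000_000
GPU_COUNT = 1_280

per_gpu = TOTAL_UNITS_PER_DAY / GPU_COUNT
print(f"{per_gpu:,.0f} work units per GPU per day")  # 62,500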
 

Dodge245

Limp Gawd
Joined
Oct 8, 2018
Messages
186
Today Google announced they're using AMD with Vulkan and Linux. Glad they chose something more open rather than Nvidia, which doesn't allow anything but Teslas in data centers.

That's good to hear, and a bit surprising. Logic would suggest that a datacentre would want the best performance per watt, and I didn't think the Radeon cards did so well in terms of power usage. On the other hand, Google is very "pro" open source, so their choice does make sense, and I would also imagine Google may start to provide input on and expand Vulkan.
 

cybereality

Supreme [H]ardness
Joined
Mar 22, 2008
Messages
6,444
They are using custom AMD chips, so it's possible power usage is better than on the consumer cards.
 

XoR_

[H]ard|Gawd
Joined
Jan 18, 2016
Messages
1,062
Input lag and latency issues, though.
Not if single-frame rendering were used.
Since each ray is calculated independently, they could assign each GPU a small portion of the rays of a single (e.g. 8K) frame, run everything in parallel, and achieve real-time path tracing with many samples per pixel while still getting normal latency.
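Rough sketch of what I mean, in Python (trace_rays here is a hypothetical per-GPU kernel, not any real API): split one 8K frame's pixels into 1,280 even slices, trace each slice on its own GPU, reassemble.

# Partition one 8K frame's pixels evenly across 1,280 GPUs.
WIDTH, HEIGHT = 7680, 4320          # 8K UHD
GPU_COUNT = 1_280

def render_frame(trace_rays):
    """trace_rays(gpu_id, start, count) -> list of pixel colors (hypothetical)."""
    total_pixels = WIDTH * HEIGHT               # 33,177,600
    per_gpu = total_pixels // GPU_COUNT         # 25,920 pixels per GPU
    framebuffer = [None] * total_pixels
    # In a real system these 1,280 calls run concurrently, one per GPU,
    # so latency stays at one frame's trace time instead of growing
    # with the GPU count.
    for gpu_id in range(GPU_COUNT):
        start = gpu_id * per_gpu
        framebuffer[start:start + per_gpu] = trace_rays(gpu_id, start, per_gpu)
    return framebuffer

# Stub usage: every GPU just returns black pixels.
frame = render_frame(lambda gpu_id, start, n: [(0, 0, 0)] * n)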
 

NoOther

Supreme [H]ardness
Joined
May 14, 2008
Messages
6,468
Today Google announced they're using AMD with Vulkan and Linux. Glad they chose something more open rather than Nvidia, which doesn't allow anything but Teslas in data centers.

Not sure what you're trying to suggest here? I know tons of mixed datacenters. Also, Nvidia puts a lot of resources into open GPU development. I think you may be confusing different issues.
 

Nausicaa

Weaksauce
Joined
Mar 9, 2015
Messages
123
That's good to hear, and a bit surprising. Logic would suggest that a datacentre would want the best performance per watt, and I didn't think the Radeon cards did so well in terms of power usage. On the other hand, Google is very "pro" open source, so their choice does make sense, and I would also imagine Google may start to provide input on and expand Vulkan.
Nvidia makes proprietary hardware that solves problems (G-Sync was great but expensive; now they're supporting FreeSync too), while AMD beats it in raw power. The Radeon VII is very powerful; it's made for workstations but sold as a gaming GPU.
 

Zepher

[H]ipster Replacement
Joined
Sep 29, 2001
Messages
17,785
I wonder if you get a discount on the RTX 6000 cards if you buy 1,280 of them; that's about $5 million at list price just for the cards.
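That works out to roughly this per card (just the $5 million divided out, not an official list price):

# Implied average per-card price from the ~$5M total above.
TOTAL_COST = 5_000_000
CARDS = 1_280
print(f"${TOTAL_COST / CARDS:,.0f} per card")  # ~$3,906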
 

MyNameIsAlex

Limp Gawd
Joined
Mar 10, 2019
Messages
313
That's good to hear, and a bit surprising. Logic would suggest that a datacentre would want the best performance per watt, and I didn't think the Radeon cards did so well in terms of power usage. On the other hand, Google is very "pro" open source, so their choice does make sense, and I would also imagine Google may start to provide input on and expand Vulkan.

The most logical conclusion I can draw is that their electricity is so cheap that it matters less than depreciation.

It would make sense to build a datacenter next to a coal plant or something, tbh.
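Quick comparison with made-up but plausible numbers (the wattage, electricity rate, card price, and lifetime are all assumptions, not Google's figures):

# Annual electricity vs. depreciation for one GPU; all inputs assumed.
WATTS = 300                # assumed card draw
RATE = 0.05                # assumed cheap datacenter power, $/kWh
CARD_PRICE = 4_000         # assumed purchase price
LIFE_YEARS = 3             # assumed depreciation schedule

power_cost = WATTS / 1000 * 24 * 365 * RATE    # ~$131/year
depreciation = CARD_PRICE / LIFE_YEARS         # ~$1,333/year
print(f"power ~${power_cost:,.0f}/yr vs depreciation ~${depreciation:,.0f}/yr")

Even at double that electricity rate, depreciation still dominates.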
 

raz-0

Supreme [H]ardness
Joined
Mar 9, 2003
Messages
4,747
Playing Space Invaders for eternity sounds like hell to me.

On a serious note, you know why this exists: it's for game streaming companies like Shadow, and Nvidia's own service.

It's not just that. Doing some quick back-of-the-envelope math (sketched below), it's more modular and cost-competitive with some serious HPC offerings. And with tensor cores, it's probably more broadly useful, especially if you can't afford the full deployment, in which case the competition falls off greatly. The only thing that's hard to determine is the power and HVAC cost per rack.
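Something like this, using the ~$5M cards-only figure from earlier in the thread and an assumed ~16 TFLOPS FP32 per card (both hedges, and it ignores power and HVAC, which is exactly the unknown):

# Cards-only $/TFLOP for the 1,280-GPU system.
CARDS = 1_280
TFLOPS_PER_CARD = 16          # assumed FP32 throughput per card
SYSTEM_COST = 5_000_000       # cards-only figure quoted above

total_tflops = CARDS * TFLOPS_PER_CARD              # 20,480 TFLOPS (~20 PFLOPS)
print(f"~${SYSTEM_COST / total_tflops:,.0f} per TFLOP, cards only")  # ~$244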
 

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,466
GPUs are getting more and more common in our datacenters.
We are looking into 2U servers with four Teslas per server.
(You cannot just slam 21 servers into a 42U rack; power and cooling need to be up to par.) But I am seeing a shift: GPUs are moving in, CPUs are losing importance. No wonder Intel is going into GPUs; do or die.

The coming “battle” is going to be fun.
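A rough rack power budget shows why (the wattages below are assumptions, not measured figures):

# Why 21 x 2U GPU servers won't fit a normal rack power budget.
SERVERS = 21                 # 21 x 2U = 42U
GPUS_PER_SERVER = 4
GPU_WATTS = 250              # assumed per-Tesla draw
BASE_WATTS = 500             # assumed CPUs, fans, drives per server

rack_watts = SERVERS * (GPUS_PER_SERVER * GPU_WATTS + BASE_WATTS)
print(f"~{rack_watts / 1000:.1f} kW per rack")  # ~31.5 kW
# Most racks are provisioned for a fraction of that, so power and
# cooling, not rack units, set the real density limit.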
 