Discussion in 'Video Cards' started by cybereality, Mar 18, 2019.
A kidney, a lung, and a left testicle ought to be enough.
They can take my whole body. I bet 1,280 RTX cards are enough to upload my brain...
Playing space invaders for eternity sounds like hell to me.
On a serious note, you know why this exists: for game streaming companies like Shadow, and Nvidia's own service.
Today Google announced they are using AMD with Vulkan and Linux. Glad they chose something more open, rather than Nvidia, which doesn't allow anything but Teslas in data centers.
Big win for AMD, and for Linux and Vulkan as well.
Funny how well this describes our universe...
It's 40 GPUs per server. So maybe you can play Crysis 3 at 16K.
All you need is one of those new LG 8K OLEDs and Nvidia DSR to 16K.
Well, that could net me about 80-90 million work units a day on Distributed.net's RC5-72 challenge...
Using the low-end figure.
It'd take me from where I am now (#229 overall) and rocket me up to #60 in one day.
It'd take about 2 weeks to hit #1.
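For what it's worth, that figure roughly checks out. A quick sanity-check sketch, assuming one distributed.net RC5-72 stats unit is 2^32 keys and a Turing-class GPU manages roughly 3-4 gigakeys per second (both of those are my assumptions, not numbers from the article):

```python
# Rough sanity check of the "80-90 million work units a day" claim.
# Assumptions (mine, not from the article): one distributed.net RC5-72
# stats unit is 2**32 keys, and a Turing-class GPU does ~3-4 Gkeys/s.

GPUS = 1280
KEYS_PER_UNIT = 2 ** 32
SECONDS_PER_DAY = 86_400

for gkeys_per_gpu in (3.0, 3.5, 4.0):  # assumed per-GPU key rate
    keys_per_day = GPUS * gkeys_per_gpu * 1e9 * SECONDS_PER_DAY
    units_per_day = keys_per_day / KEYS_PER_UNIT
    print(f"{gkeys_per_gpu:.1f} Gkeys/s per GPU -> ~{units_per_day / 1e6:.0f}M units/day")
```

At around 3.5 Gkeys/s per GPU that lands near 90 million units a day, the same ballpark as the 80-90 million quoted above.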
Input lag and latency issues, though.
That's good to hear, and a bit surprising. Logic would suggest that a datacentre would want the best performance per watt, and I didn't think the Radeon cards did that well in terms of power usage. On the other hand, Google is very pro open source, so their choice does make sense, and I would also imagine Google may start to provide input on and expand Vulkan.
They are using custom AMD chips, so it's possible power usage is better than the consumer available cards.
I'll take two... *checks pockets and only finds a quarter* Hmm.
What's cool is, towards the bottom they disclose RTX frame rates in BFV - impressively, they've finally managed to push 4K 60 FPS with only 1,280 GPUs!
Not if single frame rendering was used.
Since each ray is calculated separately, they could assign each GPU a small portion of the rays of a single (e.g. 8K) frame, run everything in parallel, and achieve real-time path tracing with many samples per pixel while still getting normal latency.
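As a purely illustrative sketch of that split (the resolution, sample count, and scanline-band assignment below are my assumptions, not anything Nvidia has described), each GPU would only need to trace a tiny slice of the frame's rays:

```python
# Illustrative only: splitting one 8K frame's rays across a pod of GPUs
# so each GPU path-traces a small slice in parallel. All numbers and the
# scanline-band assignment are assumptions, not Nvidia's actual scheme.

WIDTH, HEIGHT = 7680, 4320     # 8K UHD frame
GPUS = 1280                    # GPUs in the whole pod
SAMPLES_PER_PIXEL = 64         # assumed path-tracing sample count

total_pixels = WIDTH * HEIGHT
pixels_per_gpu = total_pixels // GPUS
rays_per_gpu = pixels_per_gpu * SAMPLES_PER_PIXEL

print(f"{total_pixels:,} pixels -> ~{pixels_per_gpu:,} pixels per GPU")
print(f"~{rays_per_gpu:,} primary rays per GPU per frame at {SAMPLES_PER_PIXEL} spp")

def scanline_band(gpu_index: int) -> range:
    """Rows of the frame assigned to one GPU (a contiguous band of scanlines)."""
    rows_per_gpu = HEIGHT / GPUS
    return range(int(gpu_index * rows_per_gpu), int((gpu_index + 1) * rows_per_gpu))
```

Each GPU renders its band and the results get gathered into the final frame; the remaining question is whether that gather step stays inside a normal frame's latency budget.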
Not sure what you're trying to suggest here? I know tons of mixed datacenters. Also, Nvidia has a lot of resources to put into open GPU development. I think you may be confusing different issues.
SLI is dead... these won't use SLI, so I'm confused about what SLI has to do with this announcement at all. In fact, they don't even seem to have any GeForce cards as part of these servers.
Yeah, I realize this isn't SLI. It was a joke.
Nvidia makes proprietary hardware that solves problems (G-Sync was great but expensive; now they are supporting FreeSync too), while AMD beats it in raw power. The Radeon VII is very powerful; it's made for workstations but it's sold as a gaming GPU.
I wonder if you get a discount on the RTX 6000 cards if you buy 1,280 of them; that's $5 million at list price just for the cards.
The most logical conclusion I can draw is that their electricity is so cheap that it matters less than depreciation.
It would make sense to build a datacenter next to a coal plant or some shit, tbh.
It's not just that. Doing some quick back-of-the-envelope math, it's more modular and cost-competitive with some serious HPC offerings. And with tensor cores, it's probably more broadly useful, especially if you can't afford the full deployment, in which case the competition falls off greatly. The only thing that's hard to determine is the power and HVAC cost per rack.
GPUs are getting more and more common in our datacenters.
We are looking into 2U servers with 4 Teslas per server.
(You cannot just slam 21 servers in a 42U rack... power and cooling need to be up to par; rough numbers sketched below.) But I am seeing a shift... GPUs are moving in, CPUs are losing importance... no wonder Intel is going into GPUs... do or die.
The coming “battle” is going to be fun.
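Rough, hypothetical numbers for the rack-density point above (the wattages and the per-rack power cap are assumptions, not vendor figures):

```python
# Rough illustration of why you can't just stack 21 of these 2U boxes
# in a 42U rack: power and cooling run out before rack units do.
# All wattages and the rack power cap below are assumptions.

TESLA_WATTS = 300          # assumed per-GPU board power
GPUS_PER_SERVER = 4
HOST_WATTS = 500           # assumed CPUs, RAM, fans, storage per server
RACK_POWER_CAP_KW = 12     # assumed per-rack power/cooling budget

server_watts = GPUS_PER_SERVER * TESLA_WATTS + HOST_WATTS
servers_by_space = 42 // 2
servers_by_power = int(RACK_POWER_CAP_KW * 1000 // server_watts)

print(f"~{server_watts} W per server")
print(f"Space fits {servers_by_space} servers, but a {RACK_POWER_CAP_KW} kW "
      f"rack only feeds about {servers_by_power}")
```

With those assumed numbers the rack runs out of power and cooling long before it runs out of rack units, which is exactly the shift described above.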