NVIDIA Launches 12GB Tesla K40 Accelerator

HardOCP News

NVIDIA today unveiled the NVIDIA® Tesla® K40 GPU accelerator, the world's highest performance accelerator ever built, delivering extreme performance to a widening range of scientific, engineering, high performance computing (HPC) and enterprise applications. NVIDIA Tesla K40 GPU Accelerator -- World's Highest Performance Accelerator Providing double the memory and up to 40 percent higher performance than its predecessor, the Tesla K20X GPU accelerator, and 10 times higher performance than today's fastest CPU, the Tesla K40 GPU is the world's first and highest-performance accelerator optimized for big data analytics and large-scale scientific workloads. Featuring intelligent NVIDIA GPU Boost™ technology, which converts power headroom into a user-controlled performance boost, the Tesla K40 GPU accelerator enables users to unlock the untapped performance of a broad range of applications.
 
Why the big deal? The K40 does 1.4 TFLOPS when the R9 290X does 5.6 TFLOPS.
It doesn't even have a monitor output, and the older K20 cost around US$3,000.
 
High performance, lol. How many times can you say it in one paragraph?
 
Why the big deal? The K40 does 1.4 TFLOPS when the R9 290X does 5.6 TFLOPS.
It doesn't even have a monitor output, and the older K20 cost around US$3,000.

Not sure if you do know what you're talking about and are just making a stupid sarcastic remark, or if you don't know and are actually asking. I think it's the former, but:

The K40 does 1.4 TFLOPS at double precision; a 290X does 5 TFLOPS single precision, but only ~0.6 TFLOPS double precision. The 280X is faster than a 290X at DP, at least in theoretical performance; in the real world maybe the 290X is faster. In any case, here is an FP64 benchmark:

[image: FP64 benchmark comparison]


For compute clusters, FP64 performance is what almost everyone cares about.
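The ratios being argued here can be checked with a few lines of arithmetic. The TFLOPS figures below are approximate theoretical numbers for GK110 (FP64 at 1/3 of FP32) and Hawaii (FP64 at 1/8 of FP32), not measured results:

```python
# Approximate theoretical throughput (TFLOPS); illustrative, not measured.
cards = {
    "Tesla K40": {"fp32": 4.29, "fp64": 1.43},  # GK110: FP64 is 1/3 of FP32
    "R9 290X":   {"fp32": 5.60, "fp64": 0.70},  # Hawaii: FP64 is 1/8 of FP32
}

for name, p in cards.items():
    ratio = p["fp32"] / p["fp64"]
    print(f"{name}: {p['fp64']:.2f} TFLOPS FP64 (1/{round(ratio)} of FP32)")
```

By the FP64 column the K40 is roughly twice the 290X, which is the whole point of the card.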
 
Yeah, useful for scientific workloads dealing with large data-sets.

...until we get games that actually have some assets :). Granted, 6GB should be enough, or 8GB if you have a 256-bit/512-bit memory controller like GK104 or Hawaii, but memory is something that most would rather over-shoot than under-shoot.
 
...until we get games that actually have some assets :).
Just because you can store 12GB worth of assets doesn't mean the GPU is fast enough to render a videogame using said 12GB-worth of assets.

For example, see the various GeForce FX 5200 cards with 1GB of RAM on-board. Totally pointless because any game that uses 1GB of video RAM effectively is also WAY too GPU-intensive to run at acceptable speed.

Same situation here, as far as I can tell. BF4 has already shown us that Kepler is barely fast enough for games that you don't even consider to be "next gen"... Just the GPU itself, without even factoring in any RAM limitations, isn't going to cut it soon. More RAM won't change that.
 
Load up Skyrim with mods. Notably, the game can be made to use tremendous amounts of VRAM without killing GPU performance. I don't see 'next-gen' games as being much different now that the consoles which form the basis of future games are so well equipped.
 
Just because you can store 12GB worth of assets doesn't mean the GPU is fast enough to render a videogame using said 12GB-worth of assets.

For example, see the various GeForce FX 5200 cards with 1GB of RAM on-board. Totally pointless because any game that uses 1GB of video RAM effectively is also WAY too GPU-intensive to run at acceptable speed.

Same situation here, as far as I can tell. BF4 has already shown us that Kepler is barely fast enough for games that you don't even consider to be "next gen"... Just the GPU itself, without even factoring in any RAM limitations, isn't going to cut it soon. More RAM won't change that.

This applies to both AMD and Nvidia's current gen.

12GB is used for CUDA oriented applications. You can always program your software to use a large and extremely fast cache for chunking huge amounts of data. Scientific simulations come to mind.
 
...until we get games that actually have some assets :). Granted, 6GB should be enough, or 8GB if you have a 256-bit/512-bit memory controller like GK104 or Hawaii, but memory is something that most would rather over-shoot than under-shoot.

Meh, we already have 25-30 GB installs...don't really need/want 100 GB games with uncompressed textures.

VRAM isn't everything. I still haven't hit the limit on my 2 GB card at 1080p (except at insane AA levels like 16x or something).
 
VRAM isn't everything. I still haven't hit the limit on my 2 GB card at 1080p (except at insane AA levels like 16x or something).

It's not the size of the install; it's the size of the textures in flight, which don't get multiplied by things like AA, among other assets.
 
Meh, we already have 25-30 GB installs...don't really need/want 100 GB games with uncompressed textures.

VRAM isn't everything. I still haven't hit the limit on my 2 GB card at 1080p (except at insane AA levels like 16x or something).

The reason you haven't "hit the limit" on your 2GB card is that DirectX and the game engine know you only have 2GB, and will limit things accordingly to keep the VRAM buffer from overflowing and causing massive performance issues by swapping to disk non-stop.
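That budgeting behavior can be sketched as a least-recently-used cache: once resident textures would exceed the VRAM budget, the engine evicts the stalest ones instead of letting the buffer overflow. This is a toy model (real drivers and engines are far more involved), with all names and sizes invented:

```python
from collections import OrderedDict

# Toy model of an engine keeping resident textures under a VRAM budget
# by evicting the least recently used ones.
class TextureCache:
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.used = 0
        self.resident = OrderedDict()  # name -> size in MB, oldest first

    def bind(self, name, size_mb):
        if name in self.resident:                  # already on the GPU
            self.resident.move_to_end(name)
            return
        while self.used + size_mb > self.budget and self.resident:
            _, freed = self.resident.popitem(last=False)  # swap out oldest
            self.used -= freed
        self.resident[name] = size_mb
        self.used += size_mb

cache = TextureCache(budget_mb=2048)               # a 2GB card
for tex in ["terrain", "sky", "rocks", "hero"]:
    cache.bind(tex, 700)                           # only 2 of these fit
```

Nothing ever "hits the limit" from the outside; the cache just churns harder as the working set grows past the budget.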
 
Load up Skyrim with mods. Notably, the game can be made to use tremendous amounts of VRAM without killing GPU performance. I don't see 'next-gen' games as being much different now that the consoles which form the basis of future games are so well equipped.
Not really sure what you're on about. Loading up Skyrim with texture packs will start to tank GPU performance long before you hit 3GB of video RAM in-use (especially if you're running 4k / Surround). I suppose you could run a tiny screen resolution to give yourself a chance to overflow video RAM before GPU performance becomes unacceptable, but that's an extremely artificial scenario.

Higher resolution textures (this includes the flat texture as well as normal maps, specular maps, bump maps, emissive maps, etc) put additional strain on the GPU by effectively increasing the precision that many effects must be calculated at. For example, take parallax mapping. Increase the normal map resolution for an object from 512x512 to 4096x4096 (as many texture packs do), and suddenly you're calculating parallax mapping on that object using 64 times more data / precision. This is NOT free.
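The 64x figure follows directly from the texel counts. A quick check, assuming 4 bytes per texel for an uncompressed RGBA8 map (the exact format is an assumption; the scaling is the point):

```python
# How much more data a 4096x4096 map carries than a 512x512 one.
def texels(side):
    return side * side

growth = texels(4096) / texels(512)   # (4096/512)^2 = 64x more texels
size_mb = texels(4096) * 4 / 2**20    # one uncompressed RGBA8 4k map, in MB
print(f"{growth:.0f}x the data, {size_mb:.0f} MB per map")
```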

And then there are textures that are part of shaders, as is commonly seen in Unreal Engine 3 games. You'll often see between 2 and 5 textures grouped together with various real-time blending and pixel-shader effects employed. This is then mapped to an object (again, being continuously rendered in real time, not baked-and-applied). Increasing texture resolution multiplies the amount of data the GPU must process for these effects to take place. This is also NOT free.

Even things as simple as anisotropic filtering become more intense as you increase texture resolution. Once again, no free ride...

And all of this is amplified by increased screen resolution, which prevents the use of any rendering tricks that attempt to calculate only for displayable pixels (higher screen resolution necessitates higher precision on more objects). Aspect ratio can also amplify all of the above by bringing many times more objects and surfaces into view at once.

I'm really not understanding how you keep jumping back to the idea that increasing texture resolution has no consequences beyond additional video RAM being used. That scenario is almost never true in real-world gaming. The only place where that line of thinking might apply to any appreciable degree is older games that use nothing but simple flat textures.
 
It's not free, at all; you're definitely right about that. The point is, as you've pointed out, that VRAM scaling isn't linear with resolution increases, and in particular, that increases in resolution are only a small part of the need for more VRAM in GPUs.

We'll definitely need more rendering power, sure, but unlike the lack of VRAM, rendering power can be increased by adding GPUs :D.
 