GeForce GTX Titan - First Pics & True Specs

Why not? It's been built from binned K20X and K20 parts, hasn't it?

It won't function as a Quadro or Tesla without the applicable software and drivers. There is a large difference in the software and support ecosystem that NVIDIA provides to firms that purchase workstation cards; you get support direct from NVIDIA with a Tesla or Quadro purchase.

With GeForce drivers, it won't have the workstation acceleration features that the K20/K20X do. Note that compute and workstation features/acceleration aren't necessarily the same thing - what I mean is acceleration for 3D apps such as AutoCAD, 3ds Max, the Adobe suite, etc. You still cannot do that without a proper Tesla/Quadro card.
 
Interesting read on the Titan at WCCFtech... Looks pretty legit, and if true, maybe there will be some future tweaked Titan versions coming from third parties... Come on, Feb 18th!!


Furthermore, if one card is not enough for your needs (which it indeed will be), then you can always add three more Titans to your rig, since the GeForce GTX Titan supports 4-way SLI. A 6+2 phase VRM powers the chip and memory, which are fed by 6-pin and 8-pin power connectors. The most notable thing to spot on the GeForce GTX Titan board is that there's space for a further 8-pin connector. This could point to only one thing, and that would be custom models by AIB partners. Initially, we were told that the card would be locked down by NVIDIA in the same manner as the GeForce GTX 690, allowing no voltage control or custom models from AIBs, but if NVIDIA does allow it with the GeForce GTX Titan, then enthusiasts could be in for a great treat. A voltage regulation unit is situated on the front side of the PCB, relocated from the backside where it was spotted on the GeForce GTX 680.



LINK- http://wccftech.com/nvidia-geforce-gtx-titan-pictured-gk110-finally-arrives-consumers-blasting-6-gb-memory/
 
Are you freakin' kidding me? Are you playing on the 27" 1440p monitor in your sig? If so, you'd just be wasting your money on more VRAM. I can guarantee you that. I play at 1600p and I'm fine with 2GB of VRAM. I've never exceeded my VRAM, and even if I did, I couldn't tell.
I also have a 5760x1200 Surround setup, and I play a lot of Skyrim where I could use the extra VRAM for running MSAA+high res texture packs.
 
Interesting. I guess the main question is whether 2 of these cards will be faster than 2 690s or 4 680s, given SLI scaling gimpness. I hope the reviews make this comparison. You also get the benefit of more VRAM, less heat/noise, the possibility of 3/4-way SLI, and more room on your motherboard if you only have 2 cards instead of 3/4.
 
Anyone have any thoughts on CPU bottlenecking 2 or more Titans?
 
I also have a 5760x1200 Surround setup, and I play a lot of Skyrim where I could use the extra VRAM for running MSAA+high res texture packs.

Good for you, bro. I'm glad you need more than 2 gigs for ultra-high-AA Surround gaming on ONE DAMN GAME THAT YOU HAVE TO MOD TO USE IT!!! LOL, I'm so sick of hearing people say that. If you think spending more money on more VRAM for one game you play is worth it, then yes, you deserve the [H] reward, but many people are more realistic and just want a FASTER GPU DAMNIT!!! :D sorry /rant
 
I don't think it's very common for CPU bottlenecking to affect playability. Who cares if it bottlenecks your FPS above your refresh rate? I don't.
 
Good for you, bro. I'm glad you need more than 2 gigs for ultra-high-AA Surround gaming on ONE DAMN GAME THAT YOU HAVE TO MOD TO USE IT!!! LOL, I'm so sick of hearing people say that. If you think spending more money on more VRAM for one game you play is worth it, then yes, you deserve the [H] reward, but many people are more realistic and just want a FASTER GPU DAMNIT!!! :D sorry /rant
What exactly are you going on about? I was very clear in my original post that I wanted more RAM for my own personal needs:

I'd like a bit more VRAM but don't want to swap to 4GB 680s

I'm fine for now. I just like upgrading.
I'm aware that 99% of people are not going to exceed 2GB of VRAM at this point in time. I even said I don't want to swap to 4GB 680s, precisely because the cost:benefit isn't worth it, even though I do have the money to make the switch if I wanted. I'd rather replace my i7 930 first. :p
 
It won't function as a Quadro or Tesla without the applicable software and drivers. There is a large difference in the software and support ecosystem that NVIDIA provides to firms that purchase workstation cards; you get support direct from NVIDIA with a Tesla or Quadro purchase.

With GeForce drivers, it won't have the workstation acceleration features that the K20/K20X do. Note that compute and workstation features/acceleration aren't necessarily the same thing - what I mean is acceleration for 3D apps such as AutoCAD, 3ds Max, the Adobe suite, etc. You still cannot do that without a proper Tesla/Quadro card.

I know what you are saying, but people who buy workstation cards know this. The software and such is out there for the K20. I know it was done with Fermi - there was some sort of hack that unlocked the DP cores and let you use the Tesla software. Of course, you do lose out on NVIDIA support :p
 
Still curious as to whether Kyle and company have a card :( I guess they can't say though.
 
Ignore the value of $900. The real question is what you get out of $1,800 for two of these in SLI. A pair of GTX 690s in quad SLI costs $2,000, and I'll bet that two Titans in SLI will crush them for $200 less.
 
Ignore the value of $900. The real question is what you get out of $1,800 for two of these in SLI. A pair of GTX 690s in quad SLI costs $2,000, and I'll bet that two Titans in SLI will crush them for $200 less.

Leaks coming out are saying Titan is only around 50% faster than a GTX 680.

So 2 Titans will beat 1 GTX 690, but not 2 GTX 690s.
 
Leaks coming out are saying Titan is only around 50% faster than a GTX 680.

So 2 Titans will beat 1 GTX 690, but not 2 GTX 690s.

That will depend on A) how well Titan overclocks and B) how well Titan scales in SLI, because you have to remember that 2 GTX 690s don't scale that well in quad SLI. If Titan does turn out to be around 50% faster than a GTX 680, then two would equate to GTX 680 tri-SLI if they scale exactly the same. And what if two Titans scale better than three GTX 680s in SLI? You also get 6GB of VRAM, whereas the GTX 690 has been limited by only having 2GB per GPU.
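To put rough numbers on that (a back-of-the-envelope sketch only: the 1.5x per-card figure comes from the leak above, while the incremental SLI scaling percentages are assumptions for illustration, not benchmarks):

\[
\begin{aligned}
\text{2 Titans, 2nd GPU adds } \sim 90\%{:} &\quad 1.5 \times (1 + 0.9) \approx 2.9\times \text{ a single GTX 680} \\
\text{2 GTX 690s, 4 GPUs adding } +90\%, +60\%, +40\%{:} &\quad 1 + 0.9 + 0.6 + 0.4 = 2.9\times \\
\text{3 GTX 680s, adding } +90\%, +70\%{:} &\quad 1 + 0.9 + 0.7 = 2.6\times
\end{aligned}
\]

Nudge those assumed scaling factors a little either way and the ordering flips, which is exactly why the quad-SLI scaling question decides this.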
 
That will depend on A) how well Titan overclocks and B) how well Titan scales in SLI. If Titan does turn out to be around 50% faster than a GTX 680, then two would equate to GTX 680 tri-SLI if they scale exactly the same. You also get 6GB of VRAM, whereas the GTX 690 has been limited by only having 2GB per GPU.

Very true. I just wish they would release a cheaper Titan with only 3GB of memory. You only need 6GB if you are running triple monitors with 8xAA, or three 1440p/1600p monitors and want some AA.

Either way, I am hoping it's more than just 50% faster than a GTX 680. And overclocking will be limited on the Titan, since NVIDIA is controlling your voltage like they do on the 600-series cards :/
 
I think the hand-wringing over the price is a bit silly. Yes, they are likely to be extremely expensive. Partly that's because large chips cost a lot to make (you get a lot of failures, and you pay per wafer in fabrication regardless of what you put on it), but also because that's how things go. You NEVER get good value for your money at the high end. Companies charge more because some people will pay a high premium to have the highest performance, regardless of the value per dollar.

So it should not at all be surprising that the price target is very high. They aren't planning on selling tons of these; the point is the people that have to have the best, no matter what the cost, not the people who want the most for their money.

Want a good example? Look at Intel's LGA2011 processors. They are pricey, period, but the telling thing is the delta between the models: the i7-3930K gets you 6 cores at 3.2GHz, unlocked for overclocking, and 12MB of L3 cache. That runs you about $570. The i7-3960X gets you 6 cores at 3.3GHz, also unlocked, and 15MB of L3 cache. That runs you near $1100.

So almost twice the price, and you get 100MHz more per core and a little more L3 cache. There really is no situation where that is worth the money. You'll see little to no performance gain in most cases. However, they still sell them. Some people want the best and can afford it. They'll pay more to have a bit more cache, and maybe to get a bit better OC.

Same kind of deal here, most likely (we still don't know for sure if any of this is real or not). An extremely high-end part, with a price to match. You'll get a much better deal for your money with a lower-end part; this is for if you must have the best.
 
I think the hand-wringing over the price is a bit silly. Yes, they are likely to be extremely expensive. Partly that's because large chips cost a lot to make (you get a lot of failures, and you pay per wafer in fabrication regardless of what you put on it), but also because that's how things go. You NEVER get good value for your money at the high end. Companies charge more because some people will pay a high premium to have the highest performance, regardless of the value per dollar.

So it should not at all be surprising that the price target is very high. They aren't planning on selling tons of these; the point is the people that have to have the best, no matter what the cost, not the people who want the most for their money.

Want a good example? Look at Intel's LGA2011 processors. They are pricey, period, but the telling thing is the delta between the models: the i7-3930K gets you 6 cores at 3.2GHz, unlocked for overclocking, and 12MB of L3 cache. That runs you about $570. The i7-3960X gets you 6 cores at 3.3GHz, also unlocked, and 15MB of L3 cache. That runs you near $1100.

So almost twice the price, and you get 100MHz more per core and a little more L3 cache. There really is no situation where that is worth the money. You'll see little to no performance gain in most cases. However, they still sell them. Some people want the best and can afford it. They'll pay more to have a bit more cache, and maybe to get a bit better OC.

Same kind of deal here, most likely (we still don't know for sure if any of this is real or not). An extremely high-end part, with a price to match. You'll get a much better deal for your money with a lower-end part; this is for if you must have the best.



For you and your video game people? No, because games have been held back... a $1000 LGA2011 CPU is not worth the money just so people can play a bunch of console ports. But for people that need 3D rendering performance, need to pump out frames as fast as possible, and cannot go for a full Xeon workstation, the Core i7-3970X Extreme Edition CPU is the second-fastest thing you can buy for that.

This Titan is also cheaper than the Tesla GPU it is based on, and that will let you render even more with CUDA.

They will sell because the Xeon CPUs and NVIDIA Teslas cost more, but these come close enough, so they will sell easily to many artists and AutoCAD hobbyists - nobody wants to sit around and wait any longer than they have to while working on something. Small studios will buy these because they are cheaper than workstation-level stuff and come close enough; time is money.


http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpecial/lagoa_multiphysics



For this, yes - the CPU is taxed to the max and so are the GPUs, so any boost is worth the money. You can also run high-poly physics in real time in the viewport, and it will kick any CPU's and GPU's ass to do it - and you have not even gotten to the V-Ray or mental ray final rendering yet. That is why major studios have render farms full of networked PCs for this.


http://www.youtube.com/watch?v=dwvukCY5xok


http://www.tomshardware.com/reviews/core-i7-3970x-sandy-bridge-e-benchmark,3348-6.html
 
Expert Reviews UK was first to leak the press release. It was taken offline shortly after.

[Attached image: er_photo_184834_52.png]


They are claiming the Titan is quieter than the 680. I find that hard to believe!

[Attached images: QcOyeKz.png, er_photo_184831_52.png]


Nvidia has detailed its latest high end graphics card, the GTX Titan, confirming its existence with some impressive numbers that should cement its position as the world's fastest GPU.

Built around the GK110 Tesla technology first used in the Titan supercomputer - hence the name - the GTX Titan is Nvidia's most powerful graphics card to date, with a whopping 2,688 CUDA cores and 7.1 billion transistors, which produce 4,500 Gigaflops of processing power. Each card comes with 6GB of GDDR5 RAM running over a 384-bit interface, which should be more than sufficient for playing the latest games at above-HD resolution. It's DirectX 11.1 compatible, so should support all the latest graphics tweaks such as tessellation, and Nvidia's own PhysX physics effects.

Designed as a single-GPU replacement for the current top-end GTX 690, which is actually two GTX 680 cores bolted to one PCB, the GTX Titan promises improved performance while using less power and producing less heat. A redesigned cooler with an extended aluminium heat stack dissipates heat faster than Nvidia's current design, while the 90mm fan is tied to both RPM and voltage control to more accurately determine when to kick in. With a TDP of 250W, you'll certainly need it.

SLI is fully supported, so if you have a capable power supply and bottomless pockets you could potentially run multiple Titans for high frame rates even at multi-monitor resolutions. Although Nvidia has yet to share exact benchmark results, some rough figures suggest games like Crysis 3, Far Cry 3 and Max Payne 3 can expect roughly twice the performance over a GTX690 setup.

Perhaps more exciting news is the addition of GPU Boost 2.0, an evolution of the software introduced with Nvidia's 600-series graphics cards. Built into the video driver, GPU Boost 2.0 will let Titan owners overclock and overvolt their cards, with higher limits than with previous cards and optimisations for water-cooling setups.

It will also allow you to "overclock" your display, running it at a faster sync rate than it officially supports to squeeze out some extra frames per second. As an example, a monitor rated for 60Hz refresh only could run at up to 80Hz, meaning twenty extra frames per second are being displayed.

The one sticking point will almost certainly be the price - Nvidia would only confirm RRP pricing with us today, as it will be up to its hardware partners to set their own prices when the cards launch later this week, but you'll easily be paying over £800 per card. We'll have to wait until then to see whether the benchmark scores can back up Nvidia's claims that the Titan is the fastest card around, but the early indications look promising.

We already know the specs as confirmed in my original first post.

Core Count – 2,688 CUDA cores
Memory – 6 GB GDDR5
Interface – 384-bit
Core Clock – 837 MHz
Boost Clock – 876 MHz
Power Interface – 6+8 Pin
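For what it's worth, the 4,500 Gigaflops figure in the leaked press release lines up with those specs, assuming the usual 2 single-precision FLOPs per CUDA core per clock (one fused multiply-add) at the base clock:

\[
2688 \text{ cores} \times 2 \,\tfrac{\text{FLOPs}}{\text{core}\cdot\text{clock}} \times 0.837 \text{ GHz} \approx 4500 \text{ GFLOPS (single precision)}
\]

Run the same math at the 876 MHz boost clock and you get roughly 4,700 GFLOPS.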
 
That's over $1,200. I'm guessing that includes VAT, and we can expect something closer to $900 stateside?

One can only hope. It's a toss-up in my mind whether they would price it lower or higher than the 690. They can justify it because it's one chip, exclusivity, blah blah blah - people will buy it either way. I wouldn't, however, if it were over the $899 price tag that I was previously expecting. At that point I'd much rather just go with a dual-GPU setup.
 
I'm not shocked NVIDIA is not showing any slides comparing a single Titan vs. a 690, as it wouldn't look so impressive.
 
Why do people keep bumping a secondary thread with the same info as a larger identical thread next to it? ;)
 
Why do people keep bumping a secondary thread with the same info as a larger identical thread next to it? ;)

Because the first post in that other thread is still using fictitious info about the Titan (funny, because its title says final specs, which is WRONG), and there are pages and pages of wrong and outdated info. This thread is smaller and more organized and has nothing but up-to-date and current info.
 
That's over $1,200. I'm guessing that includes VAT, and we can expect something closer to $900 stateside?

Yes. And one would hope $899 in North America. Lower of course would be better! :p

Watch out for certain retailers gouging up the ying-yang.
 
I know what you are saying, but people who buy workstation cards know this. The software and such is out there for the K20. I know it was done with Fermi - there was some sort of hack that unlocked the DP cores and let you use the Tesla software. Of course, you do lose out on NVIDIA support :p

There is really no hack. I'm not talking about taking a Titan and flashing it. There are certain features exclusive to the Quadro drivers and cards; however, that is not all.

For Adobe CS6, for example, you will get more performance out of a 480/580 than out of their 6XX and Quadro equivalents (the Quadro 6000 is equivalent to a 470 at a lower clock, but with 6GB of memory - for fun, look up how much a Quadro 6000 costs).

Any 4XX/5XX-series card performs so much better there than NVIDIA's current lineup. For CUDA, the things holding this card back compared to a K20 are the Tesla driver (which frees up the card from the operating system), the ECC RAM, and unlocked double precision. (For Quadro, it's the 10-bit output for some expensive specialty monitors.)

For CUDA programming you will be able to take advantage of all the latest CUDA libraries with this GPU, in addition to having that 6GB of memory. Make no mistake, 580s are still bought today for that reason - people buy up those Fermi cards, especially those wanting to get into compute. The Titan GPU was developed with compute as a priority. There is nothing wrong with buying a GeForce Fermi for CUDA. Yes, the Teslas are designed to run stably in servers for extended periods of time, but that's not the use case for a workstation or someone learning CUDA at home.

This thing will smoke anything from NVIDIA's lineup but its Tesla offerings. That is, unless NVIDIA found another way to cripple the card even more. Double precision is important, but not absolutely essential, depending on the use case.


I will be trying to buy a Titan and getting straight to work on CUDA. It's expensive; however, it's cheaper than a K20 and worth it for my resume.
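For anyone else planning to start the same way, here's the kind of minimal first step I'd run (just a sketch using the standard CUDA runtime device-query API, compiled with nvcc; nothing Titan-specific) to confirm the driver sees the card, its compute capability, and the full 6GB before touching any of the libraries:

```
// devquery.cu - minimal CUDA runtime device query
// Build: nvcc devquery.cu -o devquery
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);  // fill in this device's properties
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Compute capability : %d.%d\n", prop.major, prop.minor);
        std::printf("  Multiprocessors    : %d\n", prop.multiProcessorCount);
        std::printf("  Core clock         : %.0f MHz\n", prop.clockRate / 1000.0);  // clockRate is in kHz
        std::printf("  Global memory      : %.2f GB\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

The CUDA toolkit ships a fuller deviceQuery sample that reports much more; this is just the bare-minimum sanity check.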
 
What's also interesting is that it makes no mention of the limited availability or the rumored 10,000-unit production number.

The performance, to me, says this will be sold for the better part of the year.
Nothing spectacular points to this being a limited card.

35-50% over a 680 places this card in the $600-699 range.
NVIDIA can charge whatever they want, but as the consumer, you have to do the math.
 