I thought that the GTS 250 was the same as the 8800 GTS 512, 9800 GTX, and 9800 GTX+, not the 8800 GTX.
seriously? wow then count me in for 1.
Quote: "Say the GT380 is 100% faster than a 5870... if they are selling it for 600 bucks I don't think a lot of people will care."
They wouldn't, but it won't be 100% faster at gaming.
Quote: "realistically, the 5870 runs on par with the 295... So as a single card solution NV still has the top spot."
Yeah, but the GTX 295 is also a multi-GPU setup, with much more power draw, SLI problems, etc. And how much do you think Nvidia can afford to cut prices? (Not that ATI can't answer that with its 4000 line.)
Not that I am an NV or ATI fanboy, I have both. Just from a marketing perspective, NV has the product line out now to compete with the new ATI cards; all they have to do is shift focus from performance to price (a 5870 for $400, or a 295 for $300?).
What NV is going to have to do, when the 300 line does come out, is smoke the competition and do it at a decent price point. Say the GT380 is 100% faster than a 5870: if they are selling it for 600 bucks, I don't think a lot of people will care.
Realistically, the 5870 runs on par with the 295. I know people argue about one GPU versus two on a card, but it is one card vs. one card (if you scan reviews, you'll see the 295 is actually a little faster). So as a single-card solution, NV still has the top spot. Is it DX11 compatible? No, but for the games out now and over the next few months, that does not really matter. Let's be honest: all NV has to do is drop prices on their 2xx line (260 and above) to battle the shortage of 58xx cards. I would SLI two 285s if they were $200 a piece...
*snorts*
Okay, that was funny. I guess all those scientists, hospitals, and businesses running high-performance tasks on GPUs are just confused.
It's a real shame that nVidia is taking so long this time around. Fermi looks like the long-awaited shift towards a more generic GPU/vector-processor design that Intel has been hyping for years with Larrabee. I have no doubt that Fermi as a cGPU heralds the future of GPUs; it's just annoying that the future has been postponed for a few more months.
Quote: "erm... Juniper is 5750 and 5770... Whatever happened to our NVDA stock argument? And Q1 vs. Q2 Fermi release... Vengeance, you're normally sharper than this..."
Yeah, see the post above yours. I brain farted on that one.
Quote: "I thought that the GTS 250 was the same as the G92-based 8800 GTS 512, 9800 GTX and the 9800 GTX+, not the G80-based 8800 GTX."
Die-shrunk and clocked higher, but pretty much.
Quote: "realistically, the 5870 runs on par with the 295... I would sli two 285's if they were 200 a piece..."
2x 285s would be faster, and at $200 each that wouldn't be a bad deal.
Quote: "Not that I am an NV or ATI fanboy... all they have to do is shift focus from performance to price (a 5870 for 400 or a 295 for 300?)"
$300 would be a loss leader for sure. I'd go snag one at that price. If they gave them away for free, they would take all the focus off ATI's new chips too.
Quote: "What NV is going to have to do, when the 300 line does come out, is smoke the competition and do it at a decent price point..."
You're hyperbolizing, but yeah. If Nvidia comes out with Fermi, clocks it at 650 MHz, and sells it at $400, they will have a shot, depending on what the 5890 refresh cards look like. That would put it, on paper, at 20% faster than a 5870 (2.4x a 280). However, that's high for Nvidia clocks, and it assumes a CUDA core is the same power as a GT200 core. I don't think it's a bad assumption that a GT200 core is at least equal to a CUDA core, clock for clock, on paper. I suspect there will be a LOT of driver optimization still to be done. I don't think there is nearly as much room for the 5870 to mature, given how similar its architecture is to the 4870 in terms of performance gains post launch.
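To make that napkin math explicit, here is a rough sketch of the scaling estimate being argued above. The core counts are the publicly announced figures (512 CUDA cores for Fermi, 240 for GT200); the 650 MHz clock, the "one Fermi core equals one GT200 core, clock for clock" equivalence, and the "5870 is roughly 2x a GTX 280" figure are the thread's assumptions, not measurements.

// Back-of-envelope scaling estimate; plain C++, builds with nvcc or g++.
// Every input here is an assumption from the post above, not a measurement.
#include <cstdio>

int main() {
    const double fermi_cores = 512.0;  // announced Fermi (GF100) core count
    const double gt200_cores = 240.0;  // GTX 280 (GT200) core count
    const double gt200_clock = 602.0;  // GTX 280 core clock, MHz
    const double fermi_clock = 650.0;  // clock speculated in the post

    // If one Fermi core does the same work per clock as one GT200 core...
    double vs_gtx280 = (fermi_cores * fermi_clock) / (gt200_cores * gt200_clock);
    printf("Estimated speedup vs. GTX 280: %.1fx\n", vs_gtx280);  // ~2.3x

    // The thread pegs a 5870 at roughly 2x a GTX 280 in games, so:
    double vs_5870 = vs_gtx280 / 2.0;
    printf("Implied ratio vs. HD 5870: %.2fx\n", vs_5870);        // ~1.15x
    return 0;
}

That is how you land in the "20% faster than a 5870" neighborhood, and also why the estimate collapses if any one of those assumptions is wrong.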
Quote: "yeah also gtx 295 is multi gpu setup with much more power draw sli problems etc..."
It is MUCH more power. So much so that the power draw is a real concern. I've never been a fan of people talking about a 20-watt difference in power draw; this, however, is an order of magnitude different.
Quote: "at $100 a die for 295 it's already $200+ just for the die cost... then add everything else on top of it... no way they can sell a 295 @ $300"
The die cost is probably closer to $50-60. However, I agree they aren't going to sell it at $300.
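For anyone wondering where per-die figures like $50-100 come from, here is the standard dies-per-wafer back-of-envelope with a simple Poisson yield model. The wafer cost, die area, and defect density below are illustrative assumptions (roughly GT200b-class numbers), not TSMC's actual figures, and the result is very sensitive to the defect density, which is exactly why estimates in the thread range so widely. Remember the 295 needs two such dies.

// Rough cost-per-good-die estimate. All inputs are illustrative assumptions.
#include <cstdio>
#include <cmath>

int main() {
    const double kPi            = 3.14159265358979323846;
    const double wafer_diameter = 300.0;   // mm, standard wafer
    const double wafer_cost     = 5000.0;  // USD, assumed
    const double die_area       = 470.0;   // mm^2, roughly GT200b-class
    const double defect_density = 0.002;   // defects per mm^2, assumed

    // Classic dies-per-wafer approximation (second term models edge loss).
    double r = wafer_diameter / 2.0;
    double dies = (kPi * r * r) / die_area
                - (kPi * wafer_diameter) / sqrt(2.0 * die_area);

    // Simple Poisson yield model.
    double yield = exp(-defect_density * die_area);

    double cost = wafer_cost / (dies * yield);
    printf("Dies/wafer: %.0f, yield: %.0f%%, cost per good die: ~$%.0f\n",
           dies, yield * 100.0, cost);  // ~120 dies, ~39%, ~$107
    return 0;
}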
Yeah...and how is that potential market working out for Nvidia so far?
What, are you on crack or something? 20 watts, lol: http://www.anandtech.com/video/showdoc.aspx?i=3643&p=26 - 60 watts at best; I saw an 80-watt difference in some reviews.
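For scale, here is what a 60-80 W gap actually works out to on an electricity bill; the daily usage and the price per kWh are assumed values.

// Yearly cost of a GPU power-draw gap. Usage and rate are assumptions.
#include <cstdio>

int main() {
    const double watt_gap  = 80.0;   // worst-case difference cited above
    const double hours_day = 4.0;    // assumed gaming hours per day
    const double usd_kwh   = 0.12;   // assumed electricity price

    double kwh_year = watt_gap * hours_day * 365.0 / 1000.0;
    printf("%.0f kWh/year, about $%.0f/year\n", kwh_year, kwh_year * usd_kwh);
    return 0;
}

At around $14 a year under those assumptions, the real argument is less about the bill and more about heat, noise, and PSU headroom.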
A bonus for those people waiting for hard benchmark numbers on the GT300: if it doesn't come out until early next year (March/April) or a bit later, prices on the 5870 and possibly the 5870x2 will have come down.
Yes, because Kyle and Charlie are both renowned authorities on the inner workings of Nvidia and know what they are thinking and doing on a day-to-day basis.
Anything concerning Fermi should be considered FUD, unless we see nothing by Christmas. It seems a lot of people have forgotten that Nvidia was very, very quiet about the G80 until about four weeks before launch.
Charlie is a douche; he claims Fermi (G300) won't be more than 10-20% faster than a 5870, and he has claimed performance wins for ATI over Nvidia for competing generations in the past. He is a paid AMD tool and should be ignored.
Nothing "potential" about that market. Professional GPUs like the Quadro have been selling well for years, and now Tesla cards are selling like hotcakes, insofar as that's possible in a high-margin professional market. Hospitals, clinics, research facilities, oil companies, and HPC people are the biggest GPGPU market for nVidia at this point.
Quote: "Since September NVDA has lost 27%."
Actually, it's not cherry-picking bullshit. Graphics makes up what percentage of AMD's sales, and what percentage of NVDA's sales? AMD investors have always been sweating because AMD as a company sucks and bleeds money everywhere. However, if Cypress failed, AMD probably would not go under; if Fermi failed, NVDA could easily kick the bucket and be on sale for a cheap price. Your statement is like comparing Intel with ATI: it's apples and oranges. If anything fruitful were coming in the next few months for either AMD or NVDA, their stock prices would reflect it. Obviously there isn't, despite the success of the 5800s.
Since the middle of October, AMD has lost 27% of its value. It looks like AMD's investors are sweating even more! Oh, wait. It's just cherry-picking bullshit that means nothing.
Quote: "What, are you on crack or something? 20 watts, lol..."
/sigh. Stop the internet road rage for a minute.
If I do not see a product from Nvidia by January, I will definitely be purchasing a 5870 or 5870x2 (if and when they come out). Very disappointed by Nvidia this time around...
It's going to be what, at least a 3-5 month gap between the 58x0 release and Nvidia putting something new out? Forget January; think of it this way: there's going to be a crapload of people who will take their holiday money and buy a 58x0 card, and the majority of those people probably will not make another video card purchase for another 6 months after that. Nvidia definitely loses this round.
That is, if they can find 58xx cards before the holidays... Sigh... Want one more...
With Fermi, we may be looking at the "Playstation 3 of video cards".
Something a lot of people seem to be missing: buying a 58xx right now is like playing whack-a-mole. And judging by what TSMC is saying, it's not going to get any easier to find a 5870 any time soon.
Why should prices drop with no competition?
Latest questions, one with an answer from Jen-Hsun. Remember, if you want to keep fielding questions, use the original post in this thread.
Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?
Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and integration is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.
We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.
The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.
We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond programmable shading to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, computational graphics will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. Computational graphics requires the GPU to have two personalities: one that is highly specialized for graphics, and the other a completely general-purpose parallel processor with massive computational power.
While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.
To enable game developers to create the next generation of amazing games, we've created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We've created a tool platform called Nexus, which integrates into Visual Studio and is the world's first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a co-processing configuration. And we've encapsulated our algorithm expertise into engines, such as the OptiX ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world-class graphics and parallel computing experts in our Content Technology group whose passion is to inspire and collaborate with developers to make their games and applications better.
Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.
NVIDIA's growth strategy is simple and singular: be the absolute best in the world in visual computing and expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget on advancing visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build GeForce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers who need supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build GeForce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are focused on visual computing.
For all of the gamers, there should be no doubt: you can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy, help detect cancer early, or just make your computer interaction lively. It surely gives me great joy to know that what started out as the essential gear of gamers for universal domination is now off to really save the world.
Keep in touch.
Jensen
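Since the letter leans so heavily on CUDA and the GPU's "completely general-purpose parallel processor" personality, here is a minimal sketch of what GPU computing actually looks like in code: a CUDA kernel computing y = a*x + y over a million elements, one thread per element. This is the classic generic SAXPY example, nothing Fermi-specific, and the sizes and launch configuration are just illustrative choices.

// Minimal CUDA example: y = a*x + y. Build with: nvcc saxpy.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* hx = new float[n];
    float* hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy inputs to the GPU.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}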
Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?
Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute, and OpenCL: APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.
For now, the only working solution for GPU-accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the APIs we do that on are not the most important part of the story for developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers, and generally anything that helps complete their development cycles.
Q: How is NVIDIA approaching the tessellation requirements for DX11 as none of the previous and current generation cards have any hardware specific to this technology?
Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We'll share more details when we introduce Fermi's graphics architecture shortly!
Quote: "Can just imagine Jen-Hsun in a ninja suit sabotaging TSMC machinery."
Like the opening in Ultraviolet?
ChrisRay and Sean asked nVidia to help on the "software-only tessellator" rumor. "Announcing soon!" I guess maybe they will try to use some of those CUDA cores to do tessellation? Who knows.
Personally, I think it was really great marketing spin, and great marketing is as valuable as fertilizer... to someone who isn't a farmer... or a gardener... or, well, you get the point. I think someone needs to ask in that thread: "Why have we not seen a single benchmark of Fermi? Does Fermi play games?"
Got Nvidia to weaken a bit in their resolve about Fermi graphics. With our latest questions, we pushed to have the Fermi tessellator rumor put to rest for good.
Yes, Fermi has dedicated hardware for tessellation. So glad I can stop biting my tongue about this one.
Nvidia is keeping the "graphics" hardware discussion of Fermi close to their chest. Nobody knows its pipeline structure, TMUs, etc. And the tessellation remarks came from rumors.
Sean and I convinced Nvidia that the rumor needed to be taken seriously, which is why it appeared in our latest round of questions.
Fermi's shader units are not "emulating" it. There is dedicated circuitry for hardware tessellation. Yes, you can do tessellation via shaders, but that doesn't mean it's the best way to go about it.
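To make the "tessellation via shaders" idea concrete, here is a toy sketch of one subdivision step written as a CUDA kernel: each thread splits one triangle into four by inserting edge midpoints. This is purely illustrative of doing subdivision on general-purpose cores; a real DX11 tessellator (hull/domain shaders plus fixed-function tessellation) is a very different and far more capable pipeline, and the struct and kernel names here are hypothetical.

// Toy "software tessellation": split each triangle into 4 via edge midpoints.
#include <cstdio>
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

__device__ Vec3 midpoint(Vec3 a, Vec3 b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

// in: n triangles (3 vertices each); out: 4n triangles (12n vertices).
__global__ void subdivide(const Vec3* in, Vec3* out, int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per triangle
    if (t >= n) return;
    Vec3 a = in[3*t], b = in[3*t + 1], c = in[3*t + 2];
    Vec3 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    Vec3* o = out + 12 * t;          // 4 sub-triangles * 3 vertices
    o[0] = a;  o[1] = ab; o[2]  = ca;  // three corner triangles
    o[3] = ab; o[4] = b;  o[5]  = bc;
    o[6] = ca; o[7] = bc; o[8]  = c;
    o[9] = ab; o[10] = bc; o[11] = ca; // center triangle
}

int main() {
    Vec3 tri[3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
    Vec3 *din, *dout;
    cudaMalloc(&din, sizeof(tri));
    cudaMalloc(&dout, 4 * sizeof(tri));
    cudaMemcpy(din, tri, sizeof(tri), cudaMemcpyHostToDevice);
    subdivide<<<1, 32>>>(din, dout, 1);
    Vec3 out[12];
    cudaMemcpy(out, dout, sizeof(out), cudaMemcpyDeviceToHost);
    printf("first sub-triangle vertex: (%.2f, %.2f, %.2f)\n",
           out[0].x, out[0].y, out[0].z);
    cudaFree(din); cudaFree(dout);
    return 0;
}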
Read the financials. The professional Quadro business at nVidia is 0.2% of its revenue. The margins are nice, but the volume is tiny.
Unless that market segment starts growing explosively in Q1 2010, and keeps exploding for several quarters in a row, it will be irrelevant to Fermi's success.
Maybe a Fermi successor in 2012-2015 starts to bring in significant revenue. I guess that's what's keeping nVidia investing in those extra transistors.
Since graphics card generations become obsolete every 18-24 months, nVidia would never recover the cost of designing Fermi if it had to come from professional-segment revenue during 2010-2011. The professional segment's starting size is too small, and the life of a graphics generation too short.
It was a good try at distracting investors on presentation day, but every professional investor could see through it the next day, when they were back at their offices running the revenue projections in a spreadsheet.