NVIDIA GPU Conference Live

I think they are headed in the right direction (at least for the company in general), but they're also taking a significant risk; only time will tell if it pays off...

QFT. AMD took a big risk with its line of GPUs when it stopped worrying about getting the biggest baddest chip out and instead focused on getting a scalable chip out, and that calculated risk paid off huge for them.

I don't know if the same will hold true for nVidia, but if this card turns out to be a big success, maybe we'll be reading a behind-the-scenes article on nVidia's change of philosophy up on AnandTech in a year.

Either way, I like to see big companies like this take big risks, because when they succeed we're usually all better for it.
 
Nvidia will not be 3 months late, more like 6 months late.
Considering the 5870 is only just now becoming available: Oct, Nov, Dec; Jan 1 is 3 months, Feb 1 would be 4, March 1 would be 5, and April 1 is 6 months. I haven't heard ANYONE saying it will be Q2 2010.

Which is about the length of 75% of a whole product cycle.
New architecture cycles are 18 months. 6 months late is 33%, whereas 3 months would be 16%. Please stop with the hyperbole.

No one said the world was ending. I'm looking at this situation as a consumer and a shareholder. If you follow Nvidia's stock, it's like it's on crack, which I sometimes think Jen-Hsun Huang is on as well. (Remember, he does tend to blame his best customers for design problems and offend the folks that make his chips.)
If you bought Nvidia's stock when it had a P/E ratio of 20, you are a fool. If you bought immediately following "bumpgate" / the bad bump material fiasco, you are currently sitting on a profit of about 20% ($12 vs. $14). If you bought Nvidia stock in January you would have doubled your money.

What I'm hoping for is for Nvidia to salvage the situation and be realistic.
And exactly what do you want them to do? Even if God himself came down and gave them the perfect chip design, you wouldn't be able to get it to market faster than Fermi.

It can continue to develop the next latest and greatest, but continue to have a somewhat healthy financial situation.
Umm, they have a healthy financial situation. In the last 4 quarters they have turned a profit in 1 of them. Despite seeing one of the worst economic recessions in history, they have lost less than $400M, barely 20% of their reserves (they have no debt, in case you didn't notice). Go have a look at AMD. I'll help you out: they lost ~$1.5B in the last 4 quarters and have over $5B in total debt. They also have NEGATIVE TOTAL EQUITY. Yeah, keep telling me how Nvidia needs to get itself into a "somewhat healthy financial situation".

No one realizes that they're still hurting from the whole G80 GPU laptop substrate bullshit.
??? That's why their stock is up from when it happened, even though we've been in a horrible recession?

AMD went with an elegant chip design that was focused on the next MS API. It doesn't run CUDA or PhysX, but it will work very well with a piece of software that almost every modern computer will have installed on it. That's realistic.
They went with a brute force solution, not an elegant one. Nvidia, on the other hand, has made a GPU that is capable of being used in supercomputers. The supercomputer industry is easily as large as the gaming industry. Keep telling me how silly they are.
 
Oak Ridge National Laboratory Looks to NVIDIA “Fermi” Architecture For New Supercomputer
SANTA CLARA, Calif. —Sep. 30, 2009—Oak Ridge National Laboratory (ORNL) announced plans today for a new supercomputer that will use NVIDIA®’s next generation CUDA™ GPU architecture, codenamed “Fermi”. Used to pursue research in areas such as energy and climate change, ORNL’s supercomputer is expected to be 10-times more powerful than today’s fastest supercomputer.

Jeff Nichols, ORNL associate lab director for Computing and Computational Sciences, joined NVIDIA co-founder and CEO Jen-Hsun Huang on stage during his keynote at NVIDIA’s GPU Technology Conference. He told the audience of 1,400 researchers and developers that “Fermi” would enable substantial scientific breakthroughs that would be impossible without the new technology.

“This would be the first co-processing architecture that Oak Ridge has deployed for open science, and we are extremely excited about the opportunities it creates to solve huge scientific challenges,” Nichols said. “With the help of NVIDIA technology, Oak Ridge proposes to create a computing platform that will deliver exascale computing within ten years.”

ORNL also announced it will be creating the Hybrid Multicore Consortium. The goals of this consortium are to work with the developers of major scientific codes to prepare those applications to run on the next generation of supercomputers built using GPUs.

“The first two generations of the CUDA GPU architecture enabled NVIDIA to make real in-roads into the scientific computing space, delivering dramatic performance increases across a broad spectrum of applications,” said Bill Dally, chief scientist at NVIDIA. “The ‘Fermi’ architecture is a true engine of science and with the support of national research facilities such as ORNL, the possibilities are endless.”
 
Judging by the specs this could be a beast in most of today's games, but I'm fairly sure it will come at a price too
 
OMG Vengeance is yours Vengeance! Lolz! I shouldn't have F@cked with the Vengeance in my retort to your world is ending comment... Easy Vengeance... Easy... hehehe...

Look. I invested in Nvidia during the TNT days, and I usually hold on to stock for like 6 or 7 years. Don't hate, brother... As a fool who bought and held Nvidia stock, I was able to buy myself a Land Rover Defender in the late '90s, which I still own and love. Yes, seeing that automobile parked in my garage every day reminds me of how good things were back then.

The errors that a company makes don't really show until 2-3 quarters later; all other fluctuations are typically speculative. My initial posts were about things I thought Nvidia should consider doing to stay competitive in the GPU market, more particularly the enthusiast/gamer market.

Look, I'm allowed my opinion on this, just as you are with yours... Though you are right regarding the supercomputer industry; that's simply out of our realm as hardware enthusiasts, no?

From my perspective, an elegant design is one that is simple; it's not about high transistor counts. It gets the job done efficiently, and while I do respect your knowledge on things, you can't honestly tell me that, in this case, brute force (toward accelerating DX9, 10, and 11) is extravagant compared with introducing a whole bunch of new tech (extravagant being the opposite of elegant).

Speculation here at [H] and almost everywhere else, it seems, points to real hardware coming out Q2 next year. I'm just making comments about what they should do between now and then.

A good CEO keeps his company in the black quarter after quarter. Though most folks hate on Apple hardware, you can't say that Nvidia's stock has had the same kind of trend as Apple's.

Discussion dude... Not Debate!!!
 
OMG Vengeance is yours Vengeance! Lolz! I shouldn't have F@cked with the Vengeance in my retort to your world is ending comment... Easy Vengeance... Easy... hehehe...
Are you mentally stable?

usually hold on to stock for like 6 or 7 years.
Yeah...... /sigh.

The errors that a company makes don't really show until 2-3 quarters later; all other fluctuations are typically speculative.
/sigh, so you think buying a stock and holding it for a long time isn't speculative? :confused:

My initial posts were about things I thought Nvidia should consider doing to stay competitive in the GPU market, more particularly the enthusiast/gamer market.
The only thing you've said is that Nvidia should magically cut their prices. Well, I think GM should start selling Corvettes for $20K; they'll sell a lot more of them! Hell, I'll go buy 5!

Look, I'm allowed my opinion on this, just as you are with yours...
Yep, and I'm allowed to have an opinion about your opinion. Does it hurt your feelings because some random guy on the internet doesn't agree with you?

From my perspective, an elegant design is one that is simple; it's not about high transistor counts.
Why is a low transistor count good? If someone else can build something for the same price and the same power, but it also does 10 other functions and has more transistors, why is the lower transistor count more elegant?

Speculation here at [H] and almost everywhere else, it seems, points to real hardware coming out Q2 next year. I'm just making comments about what they should do between now and then.

HardOCP's 5850 review said:
We feel as though it will be mid-to-late Q1’10 before we see anything pop out of NVIDIA’s sleeve besides its arm. We are seeing rumors of a Q4’09 soft launch of next-gen parts, but no hardware till next year and NVIDIA has given us no reason to believe otherwise.
Are we reading the same article from hard?

A good CEO keeps his company in the black quarter after quarter. Though most folks hate on Apple hardware, you can't say that Nvidia's stock has had the same kind of trend as Apple's.
.... :facepalm:

Discussion dude... Not Debate!!!
I'm not allowed to point out where you are wrong or something?
 
Wild thought

Maybe NV should make a cGPU for general-purpose computing and a GPU for gaming, perhaps much like their current Quadro/GeForce separation, or they could take the separation even further down at the hardware level.
 
@Vengeance.

Dude, nobody's feelings are hurt, so don't instigate... I don't want to get personal with you, but you're a little too "head on" today, no?

Look, you're actually dissecting my posts to attack me... Lolz... I don't want to get "micro" with you, but you're using nitpicks and technicalities.

In regard to your replies to my posts: I'm asking you to chill out... We're having a discussion here, not a flamewar.

Don't kill the thread by turning it into an argument!
 
@Vengeance.

Dude, nobody's feelings are hurt, so don't instigate... I don't want to get personal with you, but you're a little too "head on" today, no?

Look, you're actually dissecting my posts to attack me... Lolz... I don't want to get "micro" with you, but you're using nitpicks and technicalities.

In regard to your replies to my posts: I'm asking you to chill out... We're having a discussion here, not a flamewar.

Don't kill the thread by turning it into an argument!

So I shouldn't respond to what someone says? That's how you have a discussion? Wow...
 
Wild thought

Maybe NV should make a cGPU for general-purpose computing and a GPU for gaming, perhaps much like their current Quadro/GeForce separation, or they could take the separation even further down at the hardware level.

Thank God... Back on topic.

I'm really hoping that Nvidia harnesses whatever advances it makes with Fermi and puts them into a focused chip for the gaming segment. The cGPU tech is amazing if it comes to fruition; Jen-Hsun Huang can finally put his money where his mouth is after declaring the CPU dead a couple of quarters ago.

As I see it, the Fermi release is either a bad coincidence with the Radeon 5000 series launch or simply a ploy to steal thunder from ATI and keep loyal "green teamers" waiting until next year.

The problem with this is that everyone is going to buy what they need when they need it, and Nvidia doesn't have anything enticing for this holiday season. The pricing structure I suggested would allow Nvidia to stay competitive with regards to the 5000 series.
 
Wild thought

Maybe NV should make a cGPU for general-purpose computing and a GPU for gaming, perhaps much like their current Quadro/GeForce separation, or they could take the separation even further down at the hardware level.

It depends, really. cGPU is an interesting concept, but Nvidia hasn't been very clear yet about what us regular consumers can use it for. I would guess, from what I've read about this (like here), that most major GPGPU programs will target more hardware-agnostic standards like OpenCL and DX11, where the audience is larger. Perhaps professionals would benefit more if plugins were developed for their tools and such, but I've seen little yet of what it can give us consumers. I wish Nvidia would explain more about why we should care about this.

I already have a CPU and a GPU. Nvidia has brought them together, something we've mostly seen talked about in connection with Larrabee and Bulldozer. To avoid any talk about other companies: what makes this better than, let's say, having an Intel i7 and a GTX 285 as a standalone CPU and GPU?

Edit: Quote from the above link to make life easier for readers:
Now that standards-based alternatives such as OpenCL exist, CUDA is likely to see slower uptake. Many customers have learned to avoid solutions from a single source, IBM for example with Intel’s x86 chips in the original PC. But CUDA will retain strategic importance to Nvidia as a way to set the pace for OpenCL and DirectCompute.
 
Ray-tracing is a simple technique, not an API or anything hardware-specific. It doesn't even require GPU computing -- it can be done on earlier hardware. However, GPUs in the past have been terribly inefficient at ray-tracing.
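
As a rough illustration of that point (my own sketch, not something from this discussion; the names are made up), the core of a ray tracer is just arithmetic like the ray-sphere test below, which compiles for the CPU or, with CUDA, for the GPU:

#include <math.h>

struct Vec3 { float x, y, z; };

__host__ __device__ inline float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Distance along the ray (origin o, direction d) to the first hit on a sphere
// (center c, radius r), or -1 if the ray misses: solve |o + t*d - c|^2 = r^2.
__host__ __device__ float hit_sphere(Vec3 o, Vec3 d, Vec3 c, float r)
{
    Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    float A = dot(d, d);
    float B = 2.0f * dot(oc, d);
    float C = dot(oc, oc) - r * r;
    float disc = B * B - 4.0f * A * C;
    if (disc < 0.0f) return -1.0f;            // ray misses the sphere
    return (-B - sqrtf(disc)) / (2.0f * A);   // nearest intersection
}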

Again, I have to point out that my system has 4 cores (8 virtual cores) ... Only perhaps one of them at a time hits 100%, and most sit at 5%.

We are talking about adding yet another "processor" to the mix when the ones I have aren't even being challenged as it is.

People keep saying, "We need to off load X to the GPU so the CPU can do other things" ... well ... what are those other things?

I know frequency isn't a good gauge of things, but with CPUs pumping out 3 - 5x the frequency of GPUs ... they can afford to be less efficient and still provide as good, if not better, performance than even current-day GPUs.

That was why I was so excited about the purchase of ATI by AMD and the hints of a Fusion processor. I was expecting them to have multiple cores that could be used either for graphics or as a CPU with the flip of a bit, running at the full frequency of the system and without the limitation of a PCI-E bus.

Ah well ... It just seems to me that people are too focused on the GPU for things and completely forget that the CPU is just sitting there idle waiting for something to do most of the time.
 
NVIDIA Fermi: Architecture discussion and pre-launch GF100 speculation
The industry is littered with failures trying to get into the CPU space. There was once a thriving industry in the workstation/server business prior to the rise of commodity Linux. You had SPARC, you had PA-RISC, you had MIPS, you had PPC, you had Alpha, you had 68k. These were once protected islands until free BSD variants and Linux arrived. Now most are in the dead pool, including Sun. Only IBM has survived, and just barely.

There are only two areas where recent success has occurred -- consoles and mobile/low power markets, but even there you see Intel is making an assault.

Competitors could have commodified x86 if not for the simple fact that the fabs are incredibly expensive, and therefore the barrier to entry is now so high, you're unlikely to see new challengers. We're lucky AMD is still hanging on. Intel's biggest threat probably comes from a mainland Chinese competitor in the next decade.

If Nvidia plans to go up against Intel, not having x86 compatibility is a non-starter. And even if they did plan to go that route, they'd still face the fact that ultimately, they can't bet their entire future on TSMC while Intel and AMD control their own process.

So IMHO, a long-term strategy of trying to beat Intel at their own game is a losing strategy. The real strategy, IMHO, is to de-emphasize traditional processing, which is already hitting limits. Your web browser or Microsoft Office won't run much faster on super-duper x86 chips. Rather, the kinds of workloads that will stress desktop systems are inherently media/parallel tasks anyway.

We're looking at hitting the limits of process scaling in the coming decade anyway, and the only way to keep scaling is parallelism. So long term, NVidia's strategy should be to transition developers to a new model rather than adapt their hardware to the old one. And I think you're seeing them do that, especially with the new developer tools they have coming out.

They just can't do it too quickly, but graphics still has to fund this transition.

Honestly, I have a hard time seeing anyone challenging Intel in the future, except perhaps state-subsidized players in Asia or maybe large Japanese oligarchies. As we get closer to fundamental limits, costs are going up so high, that very few entities can raise the kind of capital, and wait a long time for ROI, to meet future needs.

As great as the R8xx/GF1xx are, the reality is that Intel is a very large company with lots of smart people, lots of money, and other market-position advantages that make it very hard to unseat them, and if need be, they can put enough resources behind countering any threatening GPGPU. Even with a large clusterfuck like Prescott, AMD was only able to bite off a small niche.
 
Again, I have to point out that my system has 4 cores (8 virtual cores) ... Only perhaps one of them at a time hits 100%, and most sit at 5%.

We are talking about adding yet another "processor" to the mix when the ones I have aren't even being challenged as it is.

People keep saying, "We need to off load X to the GPU so the CPU can do other things" ... well ... what are those other things?

I know frequency isn't a good gauge of things, but with CPUs pumping out 3 - 5x the frequency of GPUs ... they can afford to be less efficient and still provide as good, if not better, performance than even current-day GPUs.

That was why I was so excited about the purchase of ATI by AMD and the hints of a Fusion processor. I was expecting them to have multiple cores that could be used either for graphics or as a CPU with the flip of a bit, running at the full frequency of the system and without the limitation of a PCI-E bus.

Ah well ... It just seems to me that people are too focused on the GPU for things and completely forget that the CPU is just sitting there idle waiting for something to do most of the time.
Your CPU can do ~60 to 70 GFLOPS. A GPU like the 5870 can do ~2 TFLOPS. Yes, you aren't reading that wrong; that is roughly 30 times as fast. The CPU is the undisputed king of doing single-threaded operations incredibly fast. For example, if you have code like this:

for i = 1 to 100,000
    p(i) = m(i) * n(i)
next
Your CPU will take roughly 100,000 iterations to do it, one after another. This is how most programs are written.

But you don't need to know p(i-1) to calculate p(i), right? If instead we tell the first core to do i=1, the second core to do i=2, ... core 8 => i=8, and then core 1 starts over with i=9, we can do the work in 1/8th the time! GPUs take this further because you can calculate a whole frame at once, and they do it with hundreds of cores.
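
To make that concrete, here is a minimal CUDA sketch of that same elementwise multiply (my own illustration, not anything from Nvidia; the kernel and buffer names are made up). Each thread grabs its own i, and the hardware runs as many threads at once as it has cores for:

#include <cuda_runtime.h>

// Each thread computes one element: p(i) = m(i) * n(i).
__global__ void multiply_elements(const float *m, const float *n, float *p, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's own i
    if (i < count)                                  // guard the tail of the array
        p[i] = m[i] * n[i];
}

// Launch enough threads to cover all 100,000 elements in one go, e.g.:
//   multiply_elements<<<(100000 + 255) / 256, 256>>>(d_m, d_n, d_p, 100000);
// where d_m, d_n and d_p are device buffers allocated with cudaMalloc.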

But your question comes back down to exactly what we need this extra power for! Well, if you are opening web pages and writing Word documents, you don't. If you are generating graphics for a game, you are already using it. When was the last time you decoded a DVD or encoded a video? Your i7 isn't doing those instantly, is it? And that is just on an enthusiast level.

When you get into the commercial side, you have all kinds of possibilities. For example, FEA and CFD (finite element analysis and computational fluid dynamics), i.e. those pretty colored pictures that show you streamlines of air flowing over a car, are incredibly computationally intensive. I spend a lot of time working really hard to simplify my models so I can get something that will run in an acceptable amount of time (like less than a day in some cases). Being able to run simulations at that detail level in a matter of minutes is a wet dream.

Move on to fields like science and medicine and you get the Folding@home project. That project has millions of computer-years invested and there is still lots of work to be done!
 

Yeah, others have mentioned the glaring problem of Nvidia being the only one of the big 3 without an x86 license. Even if they wanted to get one, the only way to do that would be to sue Intel for one (that's how AMD and VIA got theirs), and who knows how long that would take to pan out. Besides the fact that this prevents Nvidia from having a complete platform as well as a CPU+GPU hybrid (which Intel and AMD both already have), this will also be a really big advantage Larrabee will have against Nvidia's new architecture, regardless of how powerful it is by comparison. Nvidia cozying up to (or becoming a major shareholder of) VIA would help, but I don't think they take them seriously enough to bother (beyond being a bargaining chip for something else, anyway).
 
Your CPU can do ~60 to 70 GFLOPS. A GPU like the 5870 can do ~2 TFLOPS. Yes, you aren't reading that wrong; that is roughly 30 times as fast. The CPU is the undisputed king of doing single-threaded operations incredibly fast. For example, if you have code like this:

for i = 1 to 100,000
    p(i) = m(i) * n(i)
next
Your CPU will take roughly 100,000 iterations to do it, one after another. This is how most programs are written.

But you don't need to know p(i-1) to calculate p(i), right? If instead we tell the first core to do i=1, the second core to do i=2, ... core 8 => i=8, and then core 1 starts over with i=9, we can do the work in 1/8th the time! GPUs take this further because you can calculate a whole frame at once, and they do it with hundreds of cores.

But your question comes back down to exactly what we need this extra power for! Well, if you are opening web pages and writing Word documents, you don't. If you are generating graphics for a game, you are already using it. When was the last time you decoded a DVD or encoded a video? Your i7 isn't doing those instantly, is it? And that is just on an enthusiast level.

When you get into the commercial side, you have all kinds of possibilities. For example, FEA and CFD (finite element analysis and computational fluid dynamics), i.e. those pretty colored pictures that show you streamlines of air flowing over a car, are incredibly computationally intensive. I spend a lot of time working really hard to simplify my models so I can get something that will run in an acceptable amount of time (like less than a day in some cases). Being able to run simulations at that detail level in a matter of minutes is a wet dream.

Move on to fields like science and medicine and you get the Folding@home project. That project has millions of computer-years invested and there is still lots of work to be done!

Yeah, I work with effects and fluid simulations, and just thinking about these new cards from Nvidia makes me wish they would come out sooner.
 
Your CPU can do ~60 to 70 GFLOPS. A GPU like the 5870 can do ~2 TFLOPS. Yes, you aren't reading that wrong; that is roughly 30 times as fast. The CPU is the undisputed king of doing single-threaded operations incredibly fast.

But what makes a combined CPU/GPU better than, let's say, a single i7 with a GTX 285? In a GPGPU environment like OpenCL, for example? Parallel processes would in both cases be run on the GPU, and the CPU (i7) is currently the fastest in the world for its tasks.
 
Someone said recently that "we're in the midst of a revolution in the graphics industry." Well, we are! I feel like my girlfriend "Nvidia" has broken up with me and I don't know what to do, although it does feel good because I feel there is something better on the horizon.

The other thought right now is that Nvidia is moving beyond games toward a broad market platform that is extremely powerful not only for applications like CAD and Folding but also for competing against Intel, and they are coming on hard, people.
 
But what makes a combined CPU/GPU better than, let's say, a single i7 with a GTX 285? In a GPGPU environment like OpenCL, for example? Parallel processes would in both cases be run on the GPU, and the CPU (i7) is currently the fastest in the world for its tasks.

Frankly, I think they should stay separate. CPUs should continue to run single-threaded problems (which does NOT mean they are simple) and GPUs should pick up all the tasks that are multithread-friendly. I see no real benefit to combining them.
 
Someone said recently that "we're in the midst of a revolution in the graphics industry." Well, we are! I feel like my girlfriend "Nvidia" has broken up with me and I don't know what to do, although it does feel good because I feel there is something better on the horizon.

The other thought right now is that Nvidia is moving beyond games toward a broad market platform that is extremely powerful not only for applications like CAD and Folding but also for competing against Intel, and they are coming on hard, people.

No offence, but this sounds like some marketing slide. A broad market platform like this is OpenCL and DX11 (which have full support from all hardware, including the biggest GPU maker in the world, Intel). Nvidia is coming with a cGPU, but what extra does it bring? CUDA is just software, and they could just as well have implemented the same thing on a standalone CPU and GPU (pooling resources is something OpenCL does as well).

This is what I am missing in all this. What extra does the cGPU bring us? We have new hardware; give us the juicy details of the new functions the hardware brings that couldn't be done on a standalone CPU and GPU. :)

And where we can "boldly go where no man has gone before" :D
 
Frankly, I think they should stay separate. CPUs should continue to run single-threaded problems (which does NOT mean they are simple) and GPUs should pick up all the tasks that are multithread-friendly. I see no real benefit to combining them.

Thanks! I have problems understanding this myself, though I have found this new path very interesting. I really want to know if it can offer something more. :) Perhaps Nvidia has something more in store that they haven't revealed yet? Could the CPU in the cGPU act as some sort of coprocessor? That would be interesting.
 
Thanks! I have problems understanding this myself, though I have found this new path very interesting. I really want to know if it can offer something more. :) Perhaps Nvidia has something more in store that they haven't revealed yet? Could the CPU in the cGPU act as some sort of coprocessor? That would be interesting.

The biggest part is this so-called "PTX 2.0", which incorporates a unified address space and 64-bit double precision. The result is they can now run all kinds of code (C, C++, Fortran, etc.) with ease. It is going to make developing on your new GTX 380 as easy as developing code for your i7. There are some more architectural changes, such as going from multiply-add to "fused multiply-add", which should speed up some operations as well. We'll have to wait for some real-world numbers to know exactly how much.
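
For what it's worth, here is a tiny sketch of what that fused multiply-add looks like in CUDA device code (my own example, assuming the standard CUDA math function fmaf; the kernel and buffer names are made up). The idea is that a*x + y is done as one operation with a single rounding step instead of a separate multiply and add:

#include <cuda_runtime.h>

// out(i) = a(i) * x(i) + y(i), using a fused multiply-add.
__global__ void fma_axpy(const float *a, const float *x, const float *y,
                         float *out, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        out[i] = fmaf(a[i], x[i], y[i]);  // fused: the a*x product is not rounded before the add
}

The gain is partly accuracy (no intermediate rounding of the product) and partly doing two operations in a single instruction.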

Given the numbers they have been throwing around, this thing should be equal to a 5870 without much trouble. I don't know enough about the architecture and graphics rendering to know how much of this will directly translate over to improvements in gaming graphics. I do know that gaming graphics isn't a simple task and there is a chance for a real monster to be born here. We'll just have to wait and see.:(
 
The biggest part is this so-called "PTX 2.0", which incorporates a unified address space and 64-bit double precision. The result is they can now run all kinds of code (C, C++, Fortran, etc.) with ease. It is going to make developing on your new GTX 380 as easy as developing code for your i7. There are some more architectural changes, such as going from multiply-add to "fused multiply-add", which should speed up some operations as well. We'll have to wait for some real-world numbers to know exactly how much.

Real-world numbers we need anyway, but this was informative. :)

Will it be possible to run code that's programmed in C++ with the intent of using it on a CPU on the GF100, or does the code need to have specific CUDA instructions in it? I'm thinking about existing programs.

And what about older generations? Will they also get this, or is it specifically for CUDA on the GTX 380 and not CUDA on older gens?
 
Real-world numbers we need anyway, but this was informative. :)

Will it be possible to run code that's programmed in C++ with the intent of using it on a CPU on the GF100, or does the code need to have specific CUDA instructions in it? I'm thinking about existing programs.

And what about older generations? Will they also get this, or is it specifically for CUDA on the GTX 380 and not CUDA on older gens?

It will still have to be called out to the GPU, but the "calling" will be much simpler and A LOT more efficient.

I think a lot of people think that CUDA is a language in itself, when it really isn't; it's more of a wrapper of sorts around a lot of normal programming languages. This is just going to make it that much more transparent.
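
To show what I mean, here's a rough sketch of my own (not taken from the AnandTech article; the function names are made up): most of a CUDA program is ordinary C/C++, and only the thin launch syntax is CUDA-specific.

#include <cstdio>
#include <cuda_runtime.h>

// Plain C++ function, compiled for both the CPU and the GPU.
__host__ __device__ float damped(float x) { return x / (1.0f + x * x); }

// GPU kernel that simply calls the same function per element.
__global__ void apply_damped(const float *in, float *out, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) out[i] = damped(in[i]);
}

int main()
{
    printf("on the CPU: %f\n", damped(2.0f));  // normal call on the host
    // On the GPU it is the same code; only the "calling" differs, e.g.:
    //   apply_damped<<<blocks, threads>>>(d_in, d_out, count);
    return 0;
}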

Anandtech has a nice writeup on it.
 
NVIDIA has been pushing very efficient designs, in which the architecture can maintain real-world performance very close to its theoretical FLOPS capacity. Fermi takes this to a new level, it seems. Some even speculate that GF100 doesn't have ROPs or TMUs and that the ALUs will rely on the L1 and L2 caches to do the ROP operations. This is truly a new vision for what a GPU can be. Its computing capabilities seem to be off the chart. Now let's wait to see what the benefits of this architecture are for graphics.

Well put...

If you ask me, I think people find themselves wishing their Monster GPU was good at more things than just games, and Folding...

Some want more of a do-all GPU that might even save you from upgrading your mobo and CPU so often. Just snap in a new GPU with up to 6GB, and you're good to go. :)
 
I still think it's a waste to have what is basically "another" CPU core on a PCI-E bus. In our labs we have tried this technology, and the 250 MB/s at which we can pull data off the cards just is not useful. Sure, we could push to the card at incredible speeds, and sure, it can do matrices at incredible speed ... but offloading that data is painfully slow to anything other than the video output.
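
For context, this is roughly the kind of device-to-host readback test that produces a number like that (a minimal sketch of my own, not our actual lab code; the buffer size and names are arbitrary):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 256u << 20;          // 256 MB test buffer
    float *host = (float *)malloc(bytes);     // pageable host memory
    float *dev = NULL;
    cudaMalloc((void **)&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // pull results off the card
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("readback: %.0f MB/s\n", (bytes / (1024.0 * 1024.0)) / (ms / 1000.0));

    cudaFree(dev);
    free(host);
    return 0;
}

Whether that prints something like 250 MB/s or a few GB/s depends heavily on pageable vs. pinned host memory (cudaMallocHost) and the PCI-E link itself, which is exactly the bottleneck being complained about here.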

It would be far better to have both single-threaded and parallel processing cores in a single CPU chip that could run at full system frequency. It reminds me of the add-on FPU chips of the '80s and '90s ... once they were integrated within the CPU, they completely disappeared. This is where I think Intel and AMD have the advantage over nVidia, as I think this is the direction both CPU manufacturers are heading.
 
It would be far better to have both single-threaded and parallel processing cores in a single CPU chip that could run at full system frequency. It reminds me of the add-on FPU chips of the '80s and '90s ... once they were integrated within the CPU, they completely disappeared. This is where I think Intel and AMD have the advantage over nVidia, as I think this is the direction both CPU manufacturers are heading.

I agree here! That is why Nvidia is going hard after Intel! I don't know when AMD will mix its cake and cheese, but they are always behind Intel.

If Nvidia really felt it had some ground to make up versus AMD, then they should have run Crysis at the show on some insane settings to get people ogling! We as end users are limited by the companies and how they program their software. I know I keep saying it, but I will keep banging this drum.
 
I still think it's a waste to have what is basically "another" CPU core on a PCI-E bus. In our labs we have tried this technology, and the 250 MB/s at which we can pull data off the cards just is not useful. Sure, we could push to the card at incredible speeds, and sure, it can do matrices at incredible speed ... but offloading that data is painfully slow to anything other than the video output.

It would be far better to have both single-threaded and parallel processing cores in a single CPU chip that could run at full system frequency. It reminds me of the add-on FPU chips of the '80s and '90s ... once they were integrated within the CPU, they completely disappeared. This is where I think Intel and AMD have the advantage over nVidia, as I think this is the direction both CPU manufacturers are heading.

NVidia don't make socketed CPUs, so Intel aren't out of work... yet :)
This could go round in circles as a graphics card could become a complete PC or even a console.
Video cards even have sound these days! (Hopefully GT380 will have Hi Def audio with a Protected Audio Path - PAP)
The cards PCB would become a motherboard and then we could get socketed GPUs etc...
lol.

It makes absolute sense to add CPU functions to a streaming GPU, for the same reason Intel are making Larrabee.
This results in reduced latency for PhysX graphics processing on GPU, real time Physics simulations on GPU and allows fast + accurate collision detection on GPU as all location/geometry data is present in local memory - and the CPU + PCI-E bus are no longer required while processing.
This should resolve the main complaints I hear about physics in games, namely that PhysX is too slow and that PhysX is not being used for physics simulation.
A great move!
 
Originally Posted by Nenu
NVidia don't make socketed CPUs, so Intel aren't out of work... yet
This could go round in circles as a graphics card could become a complete PC or even a console.
Video cards even have sound these days! (Hopefully GT380 will have Hi Def audio with a Protected Audio Path - PAP)
The cards PCB would become a motherboard and then we could get socketed GPUs etc...
lol.

It makes absolute sense to add CPU functions to a streaming GPU, for the same reason Intel are making Larrabee.
This results in reduced latency for PhysX graphics processing on GPU, real time Physics simulations on GPU and allows fast + accurate collision detection on GPU as all location/geometry data is present in local memory - and the CPU + PCI-E bus are no longer required while processing.
This should resolve the main complaints I hear about physics in games, namely that PhysX is too slow and that PhysX is not being used for physics simulation.
A great move!

An integrated platform! I do like what Nenu is saying and believe it is correct. It will be interesting to see if it all comes to light. Does Intel, though, have anything on Nvidia? Or does Intel have to "play catch-up"?
 
An integrated platform! I do like what Nenu is saying and believe it is correct. It will be interesting to see if it all comes to light. Does Intel, though, have anything on Nvidia? Or does Intel have to "play catch-up"?

Actually, Intel does have to play catch-up on the graphics side of the equation ... but AMD is in the best position, if they could ever get their graphics team and CPU team together to churn something out. Seriously, I can't believe they haven't gotten anywhere on their Fusion CPU yet.
 