Life-long nVidia user debating going AMD

Rehevkor
[H]ard|Gawd
Joined: Jan 2, 2003
Messages: 1,155
I've always used nVidia cards, not out of any fanboyish loyalty, but because my upgrades always seem to come at times when nVidia's cards were getting the best performance. Until now, anyway.

nVidia hasn't put out a new chip in a while. AMD's DX11 hardware is just around the corner, whereas nVidia is doing their renaming thing again and new hardware could be a long time coming.

I have an 8800GTX right now (as shown in my sig below). The Batman demo and Resident Evil 5 benchmark are showing me that I can still crank the settings at 1080p, but the framerate is starting to take a more significant hit with these new games. I want to upgrade soon, but for the first time I'm not sure what to get.

The HD5870 looks promising. I haven't seen any detailed specs, but new DX11 hardware at $299 is certainly tempting. I probably wouldn't hesitate to go with AMD, except for the loss of PhysX hardware support. I liked the effects in Mirror's Edge, and the stuff in Batman (volumetric, physics-reactive fog, tiles that break realistically, cool large-scale cloth simulation, etc.) is pretty awesome. I imagine all of this runs like a crippled dog without the GPU acceleration, though.

So here are the options I'm considering:

  1. Ignore the DX11 features and upgrade to a GTX 285 now
  2. Ignore PhysX features and upgrade to a HD5870 when available
  3. Ignore performance and DX11 for now and wait for nVidia's DX11 hardware
What do my fellow [H]'ers think? What's the smart move for someone looking at a video card upgrade right now?
 
I would say that your CPU is probably going to be your bottleneck if you get a new video card.

Overclock that sucker and let it do what it really wants to do.

As for video cards, you can always use the 8800GTX for PhysX and use a new ATI card for rendering.

Even though Nvidia has broken PhysX with the newer drivers on systems that have an ATI card in them, you can still use the 186.18 driver set with the 185.68 nvapi.dll files to let you use PhysX on the Nvidia card alongside the ATI card.
 
Unless you need the card right now, wait and see. Time is always your friend when it comes to technology.
 
I'm slowly clawing my way back to sorta being on the edges of the loop. Can you not run an ATI card with a secondary nVidia card for PhysX?
 
I'm slowly clawing my way back to sorta being on the edges of the loop. Can you not run an ATI card with a secondary nVidia card for PhysX?

Not with the newer nVidia drivers. Besides, my motherboard isn't SLI, and I don't have any plans right now to rebuild the whole thing. Just looking at the video card.

I would say that your CPU is probably going to be your bottleneck if you get a new video card.

Overclock that sucker and let it do what it really wants to do.

It's been a looooong time since I overclocked a CPU. Would I be able to push it far enough on air cooling to actually make a difference in gameplay performance? I'm not interested in OCing for 3dmark scores and bragging rights, so I would only consider it if I would see a respectable bump in framerate during gameplay.
 
Why the rush to get a card with DX11? It's going to be a while before there is even a single game that will use it. How many games out there right now even use DX10?

I would say wait and see what nvidia has in store.
 
Why the rush to get a card with DX11? It's going to be a while before there is even a single game that will use it. How many games out there right now even use DX10?

I would say wait and see what nvidia has in store.

I guess it isn't just for DX11, but also getting a newer, faster card in general. Plus I don't like having graphics options that I can't turn on. :) I know that DiRT 2 at least will have DX11 support, and I am looking forward to the game itself.
 
Well, I basically have the same PC as you for another few days. I did have a GTX260 for a little while, but I just sold it to fund my i7.

I also have a smaller monitor (1680x1050), and my spare card is an 8800GT, not a GTX. Anyway, I don't play either game you mentioned, but I'm still a bit surprised you find the 8800GTX fast enough for physics and graphics in these newer games. I did get to experience PhysX while I had my GTX260, though, and to me the effects really don't seem that great; then again, I'm not playing the same titles.

So I sold my GTX260 to fund the i7 build and I am going to wait on the HD5870 before I decide what new card to get. I expect it will be a month or so before we see the 5870 for sale in retail outlets but we should have some performance evaluations to aid our decision before then. I also expect the initial performance numbers to have some room to grow as drivers improve.

Anyway I guess my point is everyone will just be guessing until the reviews of the HD5870 are out.

So I say go with option 4. Play your games and enjoy them now. Then, when we have some actual numbers on the 5870, start evaluating options 1 through 3.
 
Not with the newer nVidia drivers. Besides, my motherboard isn't SLI, and I don't have any plans right now to rebuild the whole thing. Just looking at the video card.
You can still use the older drivers just fine.
It's been a looooong time since I overclocked a CPU. Would I be able to push it far enough on air cooling to actually make a difference in gameplay performance? I'm not interested in OCing for 3dmark scores and bragging rights, so I would only consider it if I would see a respectable bump in framerate during gameplay.
You can overclock a Q6600 to 3.0GHz easily, and that would definitely give a boost in games.

I think getting a GTX 285 is a poor choice, especially now that the 5870 is coming out, offering much more performance for a lower MSRP. PhysX is a gimmick that NVIDIA only has so much cash to pay companies to use. For example, all the effects in Batman are easily reproducible through software and have been for years; the game was simply cannibalized for non-PhysX hardware under pressure from NVIDIA (hence, I refuse to buy the game and support this crap). I'm looking at getting a 5870 as well; it looks to be a promising piece of hardware, but reviews will tell. If I were you, I would wait a few more weeks to see what the performance of the 5870 is and how it will affect market prices.
 
Based on these replies it sounds like it isn't just me; there really isn't an obvious choice this time, so I guess the smart move is to wait until there is. Maybe I'll scale my resolution in the newer games back to 720p for now; that way I can bump it up when I finally do upgrade and replay them.

@Mr. K6: I'll take a stab at hitting 3.0GHz then. I already have prime95 installed, so I just need to figure out how overclocking works with this board. Any tips for what I should try bumping up first? FSB, voltage, core multiplier?
 
I had my Q6600 (G0) running on stock voltages @ 3.6 on that board. Going to 3.4 was a simple task; I kept it there 24x7 at 1.3 volts.

First off, you have aftermarket cooling, right? If not you can get an inexpensive OCZ-Vendetta 2 or Xigmatek HDT-S1283 for $25 if you look around.

Your multiplier is locked, so you are stuck there; make sure it is maxed out. Set your CPU voltage to 1.35, set your DRAM voltage to whatever your sticks call for (prolly 2.1), and drop your RAM multiplier down as low as it will go for now.

Then start upping the FSB. You can jump right up to 3.0 without even thinking about it; from there, make smaller jumps, testing as you go. I have not heard of anyone getting less than 3.2 on stock voltage. Once you find your max comfortable speed, you can start bumping up your RAM speed.

EDIT: for your video card issue, you do not need SLI to run PhysX. You can leave your GTX card in one PCI-E slot and the ATI card in the other, and load the older PhysX drivers along with the new ATI drivers. I have not personally tried it, but it is reported to be working.
 
I had my Q6600 (G0) running on stock voltages @ 3.6 on that board. Going to 3.4 was a simple task; I kept it there 24x7 at 1.3 volts.

First off, you have aftermarket cooling, right? If not you can get an inexpensive OCZ-Vendetta 2 or Xigmatek HDT-S1283 for $25 if you look around.

I have a Dynatron P985, slightly better than stock I guess. I hadn't planned on overclocking when I bought it, heh.

Your multiplier is locked, so you are stuck there; make sure it is maxed out. Set your CPU voltage to 1.35, set your DRAM voltage to whatever your sticks call for (prolly 2.1), and drop your RAM multiplier down as low as it will go for now.

Then start upping the FSB. You can jump right up to 3.0 without even thinking about it; from there, make smaller jumps, testing as you go. I have not heard of anyone getting less than 3.2 on stock voltage. Once you find your max comfortable speed, you can start bumping up your RAM speed.

EDIT: for your video card issue, you do not need SLI to run PhysX. You can leave your GTX card in one PCI-E slot and the ATI card in the other, and load the older PhysX drivers along with the new ATI drivers. I have not personally tried it, but it is reported to be working.

Ah, I hadn't realized you could use any PCI-E slot. I assumed that it had to be an SLI-capable board. I'll keep that in mind.

Thanks for the info!
 
It looks as if the HD5870 isn't going retail until Oct, so you're stuck waiting anyway.
Once actual reviews are out, you'll have a better idea on what to upgrade with.

I waited for the R600 instead of getting an 8800GTX at release. I went with SLI'd 8800GTXs once I saw what the R600 offered.
 
I have a Dynatron P985, slightly better than stock I guess. I hadn't planned on overclocking when I bought it, heh.

I think that cooler is bolted through the board; if that's the case, it's a simple task to swap yours out. Again, you don't need a super-high-end cooler; the two I mentioned give excellent bang/buck. You might even be able to re-use your screws/springs.
 
If you want to get the most out of this generation's cards, much less next gen's, you're going to need to overclock your processor. I've read a few articles over the last year or so, and it seems that quad-cores, even the older Intel ones, need to be running at around 3.0-3.2GHz. I personally saw my GPU utilization go up 7-10% when I took my CPU from stock to 3.4. After that, my video card was the biggest bottleneck, along with my mechanical OS HDD.
 
BTW, I also agree with the others who think you're going to need more CPU power to really take advantage of a high-end card.

However, I want to point out that I too have a Q6600, and without making changes to my configuration that would cost money, I can't get it to overclock worth a damn. I have an early Q6600 from when they first came out; I'm not sure if I got a dud or if the ones released later had better overclocking potential. I just know my setup won't even stabilize at 3GHz.
 
Do you guys really think we've come to the point that a Q6600 isn't enough for a high-end card anymore?

I guess that makes me feel better about my recent i7 920 purchase, but I would think a Q6600, even at stock, is a great CPU that should easily leave the GPU as the bottleneck in any graphics-intensive game.

My personal recommendation would be to overclock that Q6600 a little to give you some more headroom if a Q6600 at stock is indeed not enough, then wait for AMD's new offerings and make a decision from there.
 
I think that cooler is bolted through the board; if that's the case, it's a simple task to swap yours out. Again, you don't need a super-high-end cooler; the two I mentioned give excellent bang/buck. You might even be able to re-use your screws/springs.

Yes, my existing cooler does have a retention bracket installed. So I should be able to use it with a new cooler like the Xigmatek you mentioned, right? I really don't want to take everything out of the case just to mount a retention bracket for a new cooler.
 
Yes, my existing cooler does have a retention bracket installed. So I should be able to use it with a new cooler like the Xigmatek you mentioned, right? I really don't want to take everything out of the case just to mount a retention bracket for a new cooler.

Right on, the bolt pattern will be the same and the back bracket will work fine. I dunno if the screws (bolts) from your old cooler will work with the new one (they may be too long or too short), but odds are they will be just fine (I have never had a problem swapping them).
 
Do you guys really think we've come to the point that a Q6600 isn't enough for a high-end card anymore?

I guess that makes me feel better about my recent i7 920 purchase, but I would think a Q6600, even at stock, is a great CPU that should easily leave the GPU as the bottleneck in any graphics-intensive game.

My personal recommendation would be to overclock that Q6600 a little to give you some more headroom if a Q6600 at stock is indeed not enough, then wait for AMD's new offerings and make a decision from there.

A Q6600 at 3.88GHz isn't even enough for WoW in some places with a "lowly" HD4850 running at 740/1150.

A lot of games don't need that kind of CPU power, but they are out there.
 
A Q6600 at 3.88GHz isn't even enough for WoW in some places with a "lowly" HD4850 running at 740/1150.

A lot of games don't need that kind of CPU power, but they are out there.

WoW is a bad example. It's an old engine that Blizzard keeps cramming higher and higher poly-counts into, which it can't handle efficiently. The core rendering engine there is probably almost the same as when the game launched. There are better looking games that run better than WoW on older hardware.
 
I can understand a Q6600 choking on games like Supreme Commander, but WoW seems to be held back by inefficiency rather than by actually saturating all the processor's cores, IMO.
 
I'm thinking the same thing, Rehevkor. However, I'm using a GTX 260 now and play Age of Conan, which is a demanding game for the video card. If the 5870 (or 5890?) is a good deal faster than the 4xxx series and less noisy, I'm thinking of buying the new AMD video card as well.

It would be less demanding on the power supply as well; not that that is a primary issue, but it certainly makes a difference when you are gaming for 3 to 4 hours.

I'm still unsure about the whole DX11 thing for gaming, but for other uses it looks cool. Being able to lend GPU power to operations without needing extra software sounds like they finally did something right for a change...
 
Yeah, I'll be switching to the ATI bandwagon too. My pair of 280s just takes too much juice from the wall, so I'm going to switch them for a pair of 5870s when they show up. PhysX is not a big deal, as there is only one game using it which does not suck - Batman :). And I can live without seeing paper moving on the floor or extra fog :)

To tell the truth, if the performance of the 5870 is not as impressive as heralded, I'll just buy myself a pair of 4890s. DX11 is no big deal for me, as no game released in the upcoming 12 months that I want to play has DX11 implemented. I don't like driving games, so DiRT 2 is a no-buy for me... and all the ones I do want - like Mass Effect 2, Dragon Age, or Blizzard titles - won't have DX11. I'll probably just end up with a pair of Vapor-X 4890s, and when DX11 games appear in a year or two, I'll switch to a 6870 :)
 
You can still use the older drivers just fine.

You can overclock a Q6600 to 3.0GHz easily, and that would definitely give a boost in games.

I think getting a GTX 285 is a poor choice, especially now that the 5870 is coming out, offering much more performance for a lower MSRP. PhysX is a gimmick that NVIDIA only has so much cash to pay companies to use. For example, all the effects in Batman are easily reproducible through software and have been for years; the game was simply cannibalized for non-PhysX hardware under pressure from NVIDIA (hence, I refuse to buy the game and support this crap). I'm looking at getting a 5870 as well; it looks to be a promising piece of hardware, but reviews will tell. If I were you, I would wait a few more weeks to see what the performance of the 5870 is and how it will affect market prices.

Sorry guy, calling you out on your lack of knowledge here. Algorithmically, hundreds of thousands of particles, cloth, and fluids can't be done efficiently on the CPU. That's why trying to run the same effects on the CPU results in extremely low framerates.

It's also not something you can just break up into multiple independent CPU threads while avoiding synchronization. You see a lot of talk of rigid-body physics on the CPU, occupying 8 threads, but that's because the algorithm maps well to the CPU. You can't just throw all the stuff that PhysX is doing today at a CPU and expect more than a few FPS, especially not in the quantities we're seeing. The CPU is not good at hundreds of thousands of particles. It's not good at simulating fluids. It's not good at calculating cloth. Just accept the facts and stop with the conspiracy theory.
 
Yeah, I'll be switching to the ATI bandwagon too. My pair of 280s just takes too much juice from the wall, so I'm going to switch them for a pair of 5870s when they show up. PhysX is not a big deal, as there is only one game using it which does not suck - Batman :). And I can live without seeing paper moving on the floor or extra fog :)

To tell the truth, if the performance of the 5870 is not as impressive as heralded, I'll just buy myself a pair of 4890s. DX11 is no big deal for me, as no game released in the upcoming 12 months that I want to play has DX11 implemented. I don't like driving games, so DiRT 2 is a no-buy for me... and all the ones I do want - like Mass Effect 2, Dragon Age, or Blizzard titles - won't have DX11. I'll probably just end up with a pair of Vapor-X 4890s, and when DX11 games appear in a year or two, I'll switch to a 6870 :)

The GTX2xx series draws very little power (less than AMD) when idle, which is probably how your PC spends most of its time...
 
The GTX2xx series draws very little power (less than AMD) when idle, which is probably how your PC spends most of its time...

Actually, my PC is mostly used for games and video... about 3-4 hours a day. When I'm not playing, browsing, or watching movies, I just switch it off; there's no need to have the computer powered up when I'm out of the house.
 
Sorry guy, calling you out on your lack of knowledge here. Algorithmically, hundreds of thousands of particles, cloth, and fluids can't be done efficiently on the CPU. That's why trying to run the same effects on the CPU results in extremely low framerates.

It's also not something you can just break up into multiple independent CPU threads while avoiding synchronization. You see a lot of talk of rigid-body physics on the CPU, occupying 8 threads, but that's because the algorithm maps well to the CPU. You can't just throw all the stuff that PhysX is doing today at a CPU and expect more than a few FPS, especially not in the quantities we're seeing. The CPU is not good at hundreds of thousands of particles. It's not good at simulating fluids. It's not good at calculating cloth. Just accept the facts and stop with the conspiracy theory.

I'm sorry to say, but you're wrong. The VELOCITY® Physics engine can process the following:

* Advanced collision system maintains thousands of simulating objects on next-gen hardware
* Advanced dynamic destruction for scenery & environmental objects
* Accurate vehicle driving dynamics
* Ultra-real human body physics with anatomical joint constraints and simulated muscles/tendons
* Advanced hair and cloth simulation for actors

Cloth and particles can be done efficiently on a CPU (and are not done efficiently on a GPU). Bump up your resolution and eye-candy features with PhysX and you get a slideshow (even with a dedicated card). Do the same with the Velocity engine and your graphics card is still the limiting factor. The CPU can easily handle the physical interactions.

The future of physics will be both the CPU and the GPU, a merger of the two (OpenCL). But with CPUs gaining more and more cores and graphics becoming more and more complex... I see the CPU as the place for physics in the near future.
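To make concrete what both sides of this argument are pointing at, here is a minimal, hypothetical C++ sketch (it is not code from PhysX, Velocity, or any shipping engine, and every name in it is made up for illustration). The per-particle integration step below is independent work, so it splits across CPU threads with no locks at all; the contested part is the interaction stage (collisions, cloth constraints, fluid pressure), which does need synchronization and is where a GPU's much wider parallelism is claimed to pay off.

Code:
// Hypothetical sketch: multithreaded CPU integration of a large particle set.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Integrate a contiguous slice of particles; each particle touches only
// its own data, so no synchronization is needed for this stage.
void integrate_slice(std::vector<Particle>& p, std::size_t begin,
                     std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;       // gravity
        p[i].px += p[i].vx * dt;
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
    }
}

int main() {
    const std::size_t n = 200000;    // "hundreds of thousands" of particles
    std::vector<Particle> particles(n, Particle{});

    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (n + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(n, begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(integrate_slice, std::ref(particles), begin, end,
                          1.0f / 60.0f);   // one 60 Hz physics step
    }
    for (auto& th : pool) th.join();

    std::printf("integrated %zu particles on %u threads\n", n, workers);
    return 0;
}

Splitting the array into contiguous slices keeps each worker on its own data; the same loop body is what a GPU would run with one thread per particle, which is roughly the distinction the posters above are arguing over.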
 
Just a side note: the latest rumors (BSD) put the 5870 at $399, not $299. Nvidia's hardware is rumored to be out in early December, which puts it less than 3 months from ATI's hardware. If it were me, I'd bite the bullet and wait to see what Nvidia shows up with. If you really want something faster now, why not pick up a used 260 for ~$100-125 to hold you over?
 
As for video cards, you can always use the 8800GTX for PhysX and use a new ATI card for rendering.

Even though Nvidia has broken PhysX with the newer drivers on systems that have an ATI card in them, you can still use the 186.18 driver set with the 185.68 nvapi.dll files to let you use PhysX on the Nvidia card alongside the ATI card.

I believe that the new Nvidia drivers won't allow this, so you need to go entirely Nvidia to keep PhysX. If I were the OP and my framerates weren't suffering in the games I currently play, I'd wait until both ATI's and Nvidia's new products are out before I pick. Besides, competition tends to lower prices.
 
I believe that the new Nvidia drivers won't allow this, so you need to go entirely Nvidia to keep PhysX. If I were the OP and my framerates weren't suffering in the games I currently play, I'd wait until both ATI's and Nvidia's new products are out before I pick. Besides, competition tends to lower prices.

True, you have to run the pre-190 drivers - it works on XP and W7.


EDIT

Just a side note: the latest rumors (BSD) put the 5870 at $399, not $299. Nvidia's hardware is rumored to be out in early December, which puts it less than 3 months from ATI's hardware. If it were me, I'd bite the bullet and wait to see what Nvidia shows up with. If you really want something faster now, why not pick up a used 260 for ~$100-125 to hold you over?

Yes, but there are also rumors floating around that Nvidia's new hardware is not so great... Ugly and Devoid of hope


EDIT, again

I am, by no means, a FAN BOY of any particular hardware; I buy/use what works. I am just passing on info.
 
Yes, but there are also rumors floating around that Nvidia's new hardware is not so great... Ugly and Devoid of hope


EDIT, again

I am, by no means, a FAN BOY of any particular hardware; I buy/use what works. I am just passing on info.


From Semi-Accurate.com, a la "Charlie" Demerjian, I would expect ugly rumors, especially considering his "road maps" are something I could gen up in Excel in about 15 seconds. Not only that, but his premise is that a respin will be required, which is something even Nvidia may not know yet. Honestly, that whole article is nothing but speculation, just like most of his other ones, 90% of which turn out to be wrong.
 
Sorry guy, calling you out on your lack of knowledge here. Algorithmically, hundreds of thousands of particles, cloth, and fluids can't be done efficiently on the CPU. That's why trying to run the same effects on the CPU results in extremely low framerates.

It's also not something you can just break up into multiple independent CPU threads while avoiding synchronization. You see a lot of talk of rigid-body physics on the CPU, occupying 8 threads, but that's because the algorithm maps well to the CPU. You can't just throw all the stuff that PhysX is doing today at a CPU and expect more than a few FPS, especially not in the quantities we're seeing. The CPU is not good at hundreds of thousands of particles. It's not good at simulating fluids. It's not good at calculating cloth. Just accept the facts and stop with the conspiracy theory.

Full of trolls :p

Or a PhysX lover, maybe?

Anyway, there are lots of engines out there that can do the exact same thing as PhysX, WITH CPU PROCESSING...

A good example would be the Velocity engine, like someone else mentioned above.

PhysX is just a pure advertising thing; it will never truly take over the market...
 
Sorry guy, calling you out on your lack of knowledge here. Algorithmically, hundreds of thousands of particles, cloth, and fluids can't be done efficiently on the CPU. That's why trying to run the same effects on the CPU results in extremely low framerates.

It's also not something you can just break up into multiple independent CPU threads while avoiding synchronization. You see a lot of talk of rigid-body physics on the CPU, occupying 8 threads, but that's because the algorithm maps well to the CPU. You can't just throw all the stuff that PhysX is doing today at a CPU and expect more than a few FPS, especially not in the quantities we're seeing. The CPU is not good at hundreds of thousands of particles. It's not good at simulating fluids. It's not good at calculating cloth. Just accept the facts and stop with the conspiracy theory.
Full of trolls :p

Or a PhysX lover, maybe?

Anyway, there are lots of engines out there that can do the exact same thing as PhysX, WITH CPU PROCESSING...

A good example would be the Velocity engine, like someone else mentioned above.

PhysX is just a pure advertising thing; it will never truly take over the market...
Well thanks for saving me some typing, shansoft :cool:.

Anyway, no crap some GPU physics processing is better than CPU physics processing; the parallel design of a GPU lends itself to it. That wasn't the point I was making. The point I was making is that the physics effects exclusive to PhysX in Batman (dynamic papers, cloth, breaking tiles, and sparks - I can't believe this last one) are all effects that are incredibly easy to produce without GPU hardware physics. They've been in games for YEARS, but all of a sudden they're amazing now? Come on.

Next time read the post for what it says before you get your PhysX panties in a knot.

I'm sorry to say, but you're wrong. The VELOCITY® Physics engine can process the following:

* Advanced collision system maintains thousands of simulating objects on next-gen hardware
* Advanced dynamic destruction for scenery & environmental objects
* Accurate vehicle driving dynamics
* Ultra-real human body physics with anatomical joint constraints and simulated muscles/tendons
* Advanced hair and cloth simulation for actors

Cloth and particles can be done efficiently on a CPU (and are not done efficiently on a GPU). Bump up your resolution and eye-candy features with PhysX and you get a slideshow (even with a dedicated card). Do the same with the Velocity engine and your graphics card is still the limiting factor. The CPU can easily handle the physical interactions.

The future of physics will be both the CPU and the GPU, a merger of the two (OpenCL). But with CPUs gaining more and more cores and graphics becoming more and more complex... I see the CPU as the place for physics in the near future.
Very interesting, thanks! It makes sense that the different architectures have their different strengths in physics computing. And the Velocity engine is magnificent - definitely the future of quality gaming.
 
Wait for both companies to release their DX11 stuff. There's no way I'd buy until I could read about both in action especially if I had something as good as an 8800 GTX to tide me over until the reviews are in.
 
I must be bored today, since I feel a need to comment again now that the thread has taken a turn for the worse.

I have no chipmaker loyalty when it comes to video cards. Neither company has ever done me wrong, since I research my buys. Though I do like to cheer for the underdog.

I have seen nothing to date from PhysX to make it a factor in my buying decisions. And even if it ever does show some real signs of life, I will still have a hard time factoring it into my buying decisions because of Nvidia's past.

In my opinion, Nvidia couldn't care less about IQ, and so far IQ is all I have seen PhysX really used for, often at a sacrifice to in-game IQ for anyone unable to run PhysX. That gives me every reason not to support them.

Anyway, the reason I say Nvidia as a company couldn't care less about IQ has to do with the way they have always seemed to do things. I think the TNT series were the worst offenders, but [H] reviews from a few years ago showed that Nvidia was still sacrificing IQ for frame rates.

Now, how long before they create another flop like the FX line? And when they do, where will that leave all of us PC gamers? I think it will leave us SOL. Just imagine for a moment that the GT300 line is a huge flop like the FX line was. We have games in development that have put all their eggs into the PhysX basket. If the GT300 flops like the FX did, it won't even be able to handle these games with PhysX disabled. And with the inability to run an ATI card alongside PhysX through an Nvidia card, we end up with games that moved all the cool effects we have come to expect over to PhysX, and no hardware able to display them until the next generation. Wow, won't that be great...

I am sorry that you guys who love Nvidia and PhysX can't see that developers putting all the eye candy we have come to expect in a game into a PhysX can that can only be opened one way is a bad thing for all of us. Those of us with alternate hardware get burned, but Nvidia being Nvidia, everyone is bound to get burned sooner rather than later if things continue like they have been.

The console market has already done enough harm to the PC gaming community; we do not need to start hanging ourselves.
 
I must be bored today, since I feel a need to comment again now that the thread has taken a turn for the worse.

I have no chipmaker loyalty when it comes to video cards. Neither company has ever done me wrong, since I research my buys. Though I do like to cheer for the underdog.

I have seen nothing to date from PhysX to make it a factor in my buying decisions. And even if it ever does show some real signs of life, I will still have a hard time factoring it into my buying decisions because of Nvidia's past.

In my opinion, Nvidia couldn't care less about IQ, and so far IQ is all I have seen PhysX really used for, often at a sacrifice to in-game IQ for anyone unable to run PhysX. That gives me every reason not to support them.

Anyway, the reason I say Nvidia as a company couldn't care less about IQ has to do with the way they have always seemed to do things. I think the TNT series were the worst offenders, but [H] reviews from a few years ago showed that Nvidia was still sacrificing IQ for frame rates.

Now, how long before they create another flop like the FX line? And when they do, where will that leave all of us PC gamers? I think it will leave us SOL. Just imagine for a moment that the GT300 line is a huge flop like the FX line was. We have games in development that have put all their eggs into the PhysX basket. If the GT300 flops like the FX did, it won't even be able to handle these games with PhysX disabled. And with the inability to run an ATI card alongside PhysX through an Nvidia card, we end up with games that moved all the cool effects we have come to expect over to PhysX, and no hardware able to display them until the next generation. Wow, won't that be great...

I am sorry that you guys who love Nvidia and PhysX can't see that developers putting all the eye candy we have come to expect in a game into a PhysX can that can only be opened one way is a bad thing for all of us. Those of us with alternate hardware get burned, but Nvidia being Nvidia, everyone is bound to get burned sooner rather than later if things continue like they have been.

The console market has already done enough harm to the PC gaming community; we do not need to start hanging ourselves.
To be honest, I wouldn't worry. As sad as it is, game developers make games to make money; it's a business, plain and simple. Persuading developers to spend all the extra time coding in PhysX features just to pigeonhole their product to NVIDIA cards costs NVIDIA a good chunk of change. NVIDIA can only do this so much, especially in this economy. Therefore, they take a gamble on a small pool of developers who are A) willing to take on this venture and B) likely to produce a quality product, in the hope that people will run out and buy new cards for this game (especially if it's bundled). However, this isn't a practical or intelligent business model, IMO, especially considering the current state of NVIDIA and the economy. They evidently are still dreaming of their "killer app," much like the original Ageia was years ago (and we all saw how that turned out).

The moral of the story is that the secret to technological progress is accessibility. When GPU hardware physics becomes available to all types of GPUs, only then will most developers spend their time on it.
 
To be honest, I wouldn't worry. As sad as it is, game developers make games to make money; it's a business, plain and simple. Persuading developers to spend all the extra time coding in PhysX features just to pigeonhole their product to NVIDIA cards costs NVIDIA a good chunk of change. NVIDIA can only do this so much, especially in this economy. Therefore, they take a gamble on a small pool of developers who are A) willing to take on this venture and B) likely to produce a quality product, in the hope that people will run out and buy new cards for this game (especially if it's bundled). However, this isn't a practical or intelligent business model, IMO, especially considering the current state of NVIDIA and the economy. They evidently are still dreaming of their "killer app," much like the original Ageia was years ago (and we all saw how that turned out).

The moral of the story is that the secret to technological progress is accessibility. When GPU hardware physics becomes available to all types of GPUs, only then will most developers spend their time on it.

Oh, I don't worry about it at all; as I said, I don't even make it a factor in my hardware buying decisions.

I do occasionally get annoyed enough with the whole thing not to buy a game, such as the new Batman, even though I have an Nvidia card right now. But no more so than other crap the gaming industry likes to pull that keeps me from buying other games I had high hopes for.

It's the principle of the whole thing I can't support, and honestly I would feel the same if ATI held the PhysX tech.

Sometimes I think I have just been building systems too long. The marketing BS on all sides has gotten so deep I don't know how any of the tech companies turn a profit after paying for the marketing campaigns. (Scratch that, I do know why; it just makes me sick to accept it.)
 
Honestly, that whole article is nothing but speculation, just like most of his other ones, 90% of which turn out to be wrong.

Sure thing; that's why I labeled it a rumor. But it would surprise me if Nvidia released much more than a more power-efficient and slightly faster 285 this year.

I think the bottom line for the OP is to get the baddest card he can afford when he is ready to buy. Check the benchies right before purchase to see the best bang/buck you can get for the intended purpose.
 
Do you guys really think we've come to the point that a Q6600 isn't enough for a high-end card anymore?

Remember, to 99% of games the Q6600 being quad-core doesn't mean jack shit. A Core 2 Duo at 2.4GHz and a Core 2 Quad at 2.4GHz will have damn near identical FPS in most games. And most people will probably agree that a Core 2 Duo at 2.4GHz can quite easily become a limiting factor in games. So don't get hung up on it being a Q6600 ;)

DirectX 11 looks to have some very interesting ideas when it comes to making games more multithreaded, but of course we won't see if it actually works in the real world for probably another year or so.
 