Quad core means no physics card needed

Deathspine
Just read an article at the INQ

http://www.theinquirer.net/default.aspx?article=39930

Unreal Tournament 2007, built on the engine that was supposed to bring the physics card to the forefront, will apparently run just fine on a quad core CPU. Unless something big happens with physics cards, could the rise of the quad core mean we're saying goodbye to them?

And here I was looking to get a PCI-E physics card for Ghost Recon Advanced Warfighter 2, out this month.
 
Just so you know, Deathspine, that is false information in the context you're taking it in. Just leaving it at that. UE3 CAN run on a dual core as well (there is your hint!)
 
Well, if Ageia's PhysX card isn't being adopted at a fast rate, it's due to the lack of usable software today, and I personally think that making them PCI-only is a mistake. Most ultra high end gaming machines are going to have two video cards, each blocking at least one regular PCI slot in most cases. Throw in a PCI sound card and maybe something else, and all that will be left, if we are lucky, are PCIe x1 slots. Plus, their initial price was a turn-off for many people given the lack of support. Right now the PhysX card just isn't a good value. It will cost you around $200, and the things that get added to the few games that leverage it are minimal eye candy type things. Without a killer app, the PhysX card is a novelty item at best.

Given that, I would imagine that NVIDIA, software physics, and quad core/multi-core CPUs will kick the PhysX card to the wayside before long. That being said, the PhysX card is an amazing piece of hardware. It is certainly capable of far more than software and multi-core CPU physics are. Unfortunately, none of that matters right now.
 
I believe that Crysis will also be using dual and quad cores for physics... so it seems that the PPU is obsolete for now.
 
Yeah, pretty much. Don't get me wrong, I really liked the idea. It needs a PCI Express version, though, and a bunch of games that use it. Hell, having it as an add-on board or something you can plug into your video card would have been even better. That would require the support of NVIDIA or ATI, though.
 
I wouldn't be surprised if NVIDIA and ATI were working pretty hard to get GPU-based physics processing off the ground. That way they can sell you a third or even a fourth high end video card.

2kW PSUs, here we come!
 
I think we'd all better get dual quad core CPUs and GPUs just to be future-proof until at least summer '08.
 
2kW PSUs, here we come!

Dear God no, who will think of the electric bills? THE ELECTRIC BILLS??!!?

Anyway, when I heard about the PhysX card, I wasn't overly impressed, and I still am not. It's a cool idea, but the work really does just seem like extra threads that can be loaded onto a different core instead of onto a whole new PCI card. Especially if the rumors about the Q6600 dropping to $266 next month are true, I think quad-core is going to become VERY mainstream. If not, then I think many computer game players will at least be able to upgrade to it. For most it'll be a simple CPU-only upgrade.
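
Just to be concrete about what I mean by "extra threads", here's a toy sketch (nothing to do with any real engine's code; all the names here are made up):

Code:
#include <cstdio>
#include <thread>
#include <vector>

// Toy stand-in for a physics world: lots of objects falling under gravity.
struct World {
    std::vector<float> heights;
    World() : heights(100000, 100.0f) {}
    void step(float dt) {
        for (float& h : heights)            // the "physics" work
            h -= 9.81f * dt;
    }
};

int main() {
    World world;

    // Run the simulation on a worker thread; the OS will schedule it on a
    // spare core, which is all "CPU physics on another core" really means.
    std::thread physics([&world] {
        for (int tick = 0; tick < 600; ++tick)   // 10 simulated seconds at 60 Hz
            world.step(1.0f / 60.0f);
    });

    // Meanwhile the main thread is free for rendering, input, AI, etc.
    // (Here it just pretends to render 600 frames.)
    for (int frame = 0; frame < 600; ++frame) { /* render(frame); */ }

    physics.join();                              // sync up before using the results
    std::printf("first object ended up at %.2f\n", world.heights[0]);
}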
 
You need to understand that the Ageia PhysX card is capable of FAR more in terms of physics processing than the fastest quad core CPUs are.

There is a lot working against Ageia at this point, but the hardware is capable of more than people are giving it credit for.
 
Fair enough, but until people can actually SEE what it is capable of in something they actually want to use, who wants anything to do with it?
 
Basically current physics cards are just like the Voodoo Monster 3D addon card from back in the day. Eventually they will be used and embedded in everything, but until then they are just a gimmick.
 
Personally, I want physics with Ghost Recon Advanced Warfighter 2. If Ageia can release a PCI-E version in the next couple of weeks, I'll buy. If not, then it's quad core for the future as I say goodbye to Ageia for now. If Ageia's card is capable of more than another thread on a CPU, I'll just wait for HardOCP to run the tests.
 
The Ageia PhysX card has been covered in a couple of editorials on the [H] already.

http://hardocp.com/article.html?art=MTAyMCwsLGhlbnRodXNpYXN0
http://hardocp.com/article.html?art=MTA1MiwsLGhlbnRodXNpYXN0

The truth about the hardware is that it can do a lot, but it doesn't matter without software to back it up. I'm no programmer, but I would think that it would be easier to use a software SDK for creating physics in games and optimize it for multi-core systems than to do anything else. Using video cards to do the work is also a compelling way to go, at least for the consumers who have high end machines and plenty of PCIe slots to spare.
 
The PhysX SDK IS a software SDK that will run on as many cores as you throw at it. If you have the card it will offload to that, but still use any spare CPU as well. One solution for multiple configurations seems pretty nice to me.
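
To give an idea of what that looks like from the programming side, here is a rough sketch from memory of the 2.x-era SDK (exact names may be off, so treat it as an illustration rather than working code): choosing hardware or software is basically one flag, and the per-frame calls are the same either way.

Code:
#include <NxPhysics.h>   // Ageia PhysX 2.x SDK header

// Sketch (from memory): prefer the PPU if one is present, otherwise
// run the exact same scene in software on the CPU.
NxScene* createSceneWithFallback(NxPhysicsSDK* sdk)
{
    NxSceneDesc desc;
    desc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    desc.simType = NX_SIMULATION_HW;          // ask for the PhysX card
    NxScene* scene = sdk->createScene(desc);
    if (!scene) {
        desc.simType = NX_SIMULATION_SW;      // no card: same API, the CPU does the work
        scene = sdk->createScene(desc);
    }
    return scene;
}

// Per frame the calls are identical whichever path you got:
//   scene->simulate(1.0f / 60.0f);
//   scene->flushStream();
//   scene->fetchResults(NX_RIGID_BODY_FINISHED, true);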
 
Basically current physics cards are just like the Voodoo Monster 3D addon card from back in the day. Eventually they will be used and embedded in everything, but until then they are just a gimmick.

Except that the Monster 3D actually worked ;)
 
...
Unreal Tournament 2007, built on the engine that was supposed to bring the physics card to the forefront, will apparently run just fine on a quad core CPU. Unless something big happens with physics cards, could the rise of the quad core mean we're saying goodbye to them?
I do not frequent this subforum a lot, but I swear I have heard the "omg Quad-Core will kill PhysX" argument about 10,000 times. I think what people need to understand is that a well-developed application-specific integrated circuit (ASIC) will beat a general-purpose CPU on power consumption, performance, or both. However, the PhysX ASIC is now "old", and the lack of performance information makes it hard to compare it to a current-gen QC CPU.

Basically current physics cards are just like the Voodoo Monster 3D addon card from back in the day. Eventually they will be used and embedded in everything, but until then they are just a gimmick.

The Monster 3D card gave very tangible benefits back in the day. Currently, there is no killer-app for the PhysX card as far as I know.
 
[RCKY] Thor said:
The PhysX SDK IS a software SDK that will run on as many cores as you throw at it. If you have the card it will offload to that, but still use any spare CPU as well. One solution for multiple configurations seems pretty nice to me.

Yes and look what it's done for gaming today. :rolleyes:
 
I do not frequent this subforum a lot, but I swear I have heard the "omg Quad-Core will kill PhysX" argument about 10,000 times. I think what people need to understand is that a well-developed application-specific integrated circuit (ASIC) will beat a general-purpose CPU on power consumption, performance, or both. However, the PhysX ASIC is now "old", and the lack of performance information makes it hard to compare it to a current-gen QC CPU.

Not at all, actually.
PhysX is orders of magnitude faster than a single core, and multicore doesn't scale very well. A quadcore is not 4 times as fast as a singlecore, not in physics anyway.
It will take a LONG time before CPUs actually beat the original PhysX PPU.
Just like those same quadcore CPUs would still have a hard time beating an old VooDoo card in software rendering.
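
Just to put a rough number on the scaling point (made-up figures, purely for illustration): if, say, 70% of a physics step parallelizes cleanly and the other 30% stays serial, Amdahl's law gives you at most 1 / (0.3 + 0.7/4) ≈ 2.1x on four cores. Nowhere near 4x, and that's before any synchronization overhead.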

The Monster 3D card gave very tangible benefits back in the day. Currently, there is no killer-app for the PhysX card as far as I know.

The problem with physics is that you can't just scale it up very easily.
3d accelerators could allow 32-bit colour, texture filtering, high resolutions, antialiasing etc with little or no changes to the game engine or content.
A PhysX card can handle many times more moving/colliding objects than a CPU or GPU can... But each of those objects will have to be modeled into the game's levels. A PhysX card won't do much for a game like Half-Life 2, because even with software physics, it's already limited by the GPU, not by the CPU/physics. You could offload it to a PPU, but it's not the bottleneck, so it's not going to speed up your game. In fact, it may slow down the game, because you introduce extra overhead. If you use such an accelerator, you had better make sure you give it enough work to make it worth your while.
It's like with a GPU... It's good at rendering 3d... but only if you batch up your workload. It's too much overhead to just have it render one pixel at a time, like you'd do with software rendering. You have to give it sufficient workload at a time to mask the overhead.
Current games simply aren't designed to take advantage of a PPU.
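
The shape of the problem, in made-up schematic code (none of this is any real API, it's just to show where the overhead goes): you want to pay the submission cost once per batch, not once per object.

Code:
#include <cstddef>
#include <cstdio>
#include <vector>

struct Object { float pos[3]; float vel[3]; };

// Imaginary accelerator call: pretend every call carries a fixed cost
// (driver, bus transfer, synchronization) on top of the per-object work.
static long g_calls = 0;
void submit_to_accelerator(const Object* objs, std::size_t count) {
    ++g_calls;                        // the fixed per-call overhead happens here
    (void)objs; (void)count;          // the per-object work would happen here
}

// Naive: one tiny submission per object, so the fixed overhead is paid
// thousands of times per frame and ends up dominating.
void step_naive(std::vector<Object>& world) {
    for (Object& o : world) submit_to_accelerator(&o, 1);
}

// Batched: hand over the whole frame's work in one go, so the same
// overhead is amortized across every object in the scene.
void step_batched(std::vector<Object>& world) {
    submit_to_accelerator(world.data(), world.size());
}

int main() {
    std::vector<Object> world(10000);
    step_naive(world);   std::printf("naive:   %ld calls\n", g_calls);
    g_calls = 0;
    step_batched(world); std::printf("batched: %ld call\n", g_calls);
}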
 
Not at all, actually.
PhysX is orders of magnitude faster than a single core, and multicore doesn't scale very well. A quadcore is not 4 times as fast as a singlecore, not in physics anyway.
It will take a LONG time before CPUs actually beat the original PhysX PPU.
Just like those same quadcore CPUs would still have a hard time beating an old VooDoo card in software rendering.
Are there any benchmarks that show this? What performance metric is used? As I said, I do not frequent this forum much, but last I heard there wasn't ANY benchmark for the PhysX chip, just some marketing numbers.
 
Intel's website, http://www.intelcapabilitiesforum.net/ISF_demo?s=a , has a demo that is multithreaded for Intel CPUs. Again, this is based on the PhysX engine SDK.

I dunno about you, but the more I read, the more I see PhysX helping multithreaded game engines and development; never before have we had the opportunity to stress multiple cores and CPUs in games like we do today.

Bashing Ageia won't help. As far as I'm concerned, they made everyone aware of physics as a choke point in games, and of the fact that multithreading is the closest you can get to scalability right now. Multithreaded coding is so difficult for the average developer that smaller companies can't justify the development cost in their games, and Ageia is bringing a solution to the table.

A royalty-free SDK, I mean, come on. Ageia didn't make all the right moves, but they're a new company, can you blame them? And they're putting their best effort out there right now in SDK development, and the whole cores-vs-PPU thing is moot.

Best example: Unreal Engine 3. It's based on PhysX; no PhysX, no multi-core acceleration. So why try to kill the PPU and the company behind it, if they're going to help both sides, multi-core and PPU?

I just don't get why people bash so hard on Ageia...
 
It doesn't matter what Ageia has done for the industry or what they did not do. What counts is that at the end of the day, they do not have a product people want to buy. For all that their SDK may be capable of, there is nothing being put into games that makes a difference right now and there is absolutely ZERO reason to buy a PhysX card at this time.
 
Are there any benchmarks that show this? What performance metric is used? As I said, I do not frequent this forum much, but last I heard there wasn't ANY benchmark for the PhysX chip, just some marketing numbers.

The PhysX (formerly NovodeX) SDK.
You can run it on either the hardware or your CPU.
I've not actually tried running it on a quadcore... But the single-core and dual-core tests I've run aren't exactly an indication that 2 extra cores are going to close the gap anytime soon, if you know what I mean.
Here's some movies of the hardware running some of the examples in realtime: http://www.youtube.com/watch?v=gMQDPLcqt8s

You can register as a developer with Ageia and download the PhysX SDK yourself, so you can run it on your own CPU in software, and draw your conclusions.
And no, it's not biased because it's PhysX.
The SDK long predates the PhysX hardware; it was originally developed as the NovodeX SDK and was purely software... It's used in various commercial games, mostly on Xbox.
So it has a history as a reasonably successful commercial software physics engine. The CPU part is very good.
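
If you do grab the SDK, the crude way to get your own numbers is just to step a canned scene a few hundred times in software mode and time it. Again, this is a from-memory sketch of the 2.x-era calls, so the names may not be exact:

Code:
#include <NxPhysics.h>
#include <chrono>
#include <cstdio>

// Rough benchmark: step an already-populated scene N times and report
// the average cost of a 60 Hz tick. (2.x-era names from memory.)
void timeScene(NxScene* scene, int steps = 600)
{
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();
    for (int i = 0; i < steps; ++i) {
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);   // block until the step is done
    }
    double ms = std::chrono::duration<double, std::milli>(clock::now() - t0).count();
    std::printf("%.2f ms per step on average\n", ms / steps);
}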
 
While the PhysX card is, from what I've heard, pretty much not worth the cash by any stretch of the imagination, your claim is faulty because, by your reasoning, you could also extend it to video cards. There are 4 cores after all, so why not put one on video processing?

That's because video card processors (as well as physics processors) are MUCH more efficient at doing what they're designed to do. Traditional CPUs are limited in this way because they are designed to do anything. Discrete processors are much faster for their application because they're designed to do exactly that, and nothing else.

Think of it this way: what would you rather carve a turkey with, a 100 function pocket knife, or a carving knife? The pocket knife CAN do it, and can also do a ton of other things, but will not be as good at a specific application than a tool designed for that application.
 
I like the analogy. :)
 
As I understand it, the quad core we have today isn't true QC, in the sense that the dies are split, thus creating latency lag between the two cores. How efficient could that be for processing physics in-game?
 
That is the theory that AMD is trying to sell with their "native quad core" design. And while it sounds good in theory, we have yet to see its effect in real-life applications. Intel's quad core is the only quad core we have right now, and preliminary benchmarks for Barcelona are NOT looking good. Thus far, the theory isn't holding up well.
 
As I understand it, the quad core we have today isn't true QC, in the sense that the dies are split, thus creating latency lag between the two cores. How efficient could that be for processing physics in-game?

Quadcore means nothing more or less than 4 cores.
AMD even stretches it as far as the 4 cores not having to be on the same socket, with their 4x4-solution (of course then they can only speak of a quadcore *system* while Intel offered quadcore *processors*).

There's no rule that says the cores have to be on the same die, or whatever. This whole 'true/native' quadcore thing is just garbage from the AMD spin doctors. Now that we've seen the first results of their 'native' quadcore processors, we know why they need this garbage... They aren't going to sell the CPU on its performance alone. They are being beaten by 'non-native' quadcore solutions from Intel (Penryn is still two dualcore dies).

As for the lag... Lag is a result of the design of the chip. There's no guarantee that a single die has low latencies. Athlon64 is a good example of that. Its latencies are much closer to that of the Pentium D than to the Core2 Duo. That's because the Athlon still uses an external bus to synchronize both caches, much like the FSB system on the Pentium D, while the Core2 Duo is designed with only one L2-cache, shared by both cores. Athlon's dualcore design (and performance) is nearly identical to two single-core CPUs on a 2-socket motherboard with HT-links.
The same still goes for Barcelona... It still has separate L2-caches, it only has a (very small) shared L3-cache, so it will still have higher latencies than Core2, at least for the pairs of cores that share their L2-cache.
So it's not exactly a great 'native' design. They could have gotten much more of an advantage out of the single-die design if they had shared L2-caches, and larger caches at that (drop the L3 altogether, just make the L2 larger).

The benchmarks speak volumes... All this talk about their super-fast native quadcore was just a load of crap, we don't see anything of that in the actual performance of the chip.
 
I like this forum; there are lots of knowledgeable people providing input. I will now wait for the PCI-E physics card. I will use it to support my quad core CPU (to be purchased when the prices drop in July and a game can actually use it - not Supreme Commander - I am a first-person shooter guy). Thanks guys for sharing your knowledge.
 
PCIe physics card won't do squat... The problem with the card isn't that it's PCI, it's that there is virtually zero support for it.
 
Well I want a PCIe card just because I have so few PCI slots.
 
Quadcore means nothing more or less than 4 cores.
AMD even stretches it as far as the 4 cores not having to be on the same socket, with their 4x4-solution (of course then they can only speak of a quadcore *system* while Intel offered quadcore *processors*).

There's no rule that says the cores have to be on the same die, or whatever. This whole 'true/native' quadcore thing is just garbage from the AMD spin doctors. Now that we've seen the first results of their 'native' quadcore processors, we know why they need this garbage... They aren't going to sell the CPU on its performance alone. They are being beaten by 'non-native' quadcore solutions from Intel (Penryn is still two dualcore dies).

As for the lag... Lag is a result of the design of the chip. There's no guarantee that a single die has low latencies. Athlon64 is a good example of that. Its latencies are much closer to that of the Pentium D than to the Core2 Duo. That's because the Athlon still uses an external bus to synchronize both caches, much like the FSB system on the Pentium D, while the Core2 Duo is designed with only one L2-cache, shared by both cores. Athlon's dualcore design (and performance) is nearly identical to two single-core CPUs on a 2-socket motherboard with HT-links.
The same still goes for Barcelona... It still has separate L2-caches, it only has a (very small) shared L3-cache, so it will still have higher latencies than Core2, at least for the pairs of cores that share their L2-cache.
So it's not exactly a great 'native' design. They could have gotten much more of an advantage out of the single-die design if they had shared L2-caches, and larger caches at that (drop the L3 altogether, just make the L2 larger).

The benchmarks speak volumes... All this talk about their super-fast native quadcore was just a load of crap, we don't see anything of that in the actual performance of the chip.

The point is well taken. I said that b/c I get into constant arguments about having a QC CPU in my next rig with a friend of mine, and that's the argument he gives me: Intel's Kentsfield is not true QC b/c it's split die.
 
The physics card will enhance Ghost Recon Advanced Warfighter 2. I have an extra PCI-E slot, but both PCI slots are full with a sound card and a TV tuner. PCI-E is more future-proof. I do not play a lot of games, but if I like one, I like to enjoy all the eye candy. Ghost Recon Advanced Warfighter 2 is supposed to use more physics than the first one. If that is the only game that uses the card this year, so be it. There is money, and then there is the cost of missing all that eye candy.
 
I'm not sure where you are going with this, but the fact is that it seems a bit odd to spend $179.99 on a card that only enhances one game. It doesn't do a lot in other games. Basically the GRAW games get a small improvement, and that's all.

I will have one PCI slot open here shortly when I install my Danger Den NV-88GTX waterblocks on my 8800 GTXs, but I doubt I'll waste it on a PhysX card.
 
I understand _why_ you said this, but I think that you're unable to see the forest for the trees here. While I grant that developer support has been spotty, it's got to start somewhere. And, unlike video acceleration, physics acceleration requires dedicated effort to utilize it. I suppose it is akin to developers and their use (or lack thereof) of the XBox360 HDD. Developers have to cater to the lowest common denominator. It's a vicious circle, but such is life.

It is true that PCIe will not help insofar as bandwidth. PCI already supplies more bandwidth than the card can saturate. But, I do agree that it may help adoption for those people that don't want to use "legacy" components.

We're just now on the cusp of big-wave, first-tier support (UE3, etc.) for the engine on the PC side. How the card does in those environments will be the deciding element. I have a PhysX card. I bought it because at the time I was doing a "once in a blue moon" upgrade and was already spending $2500+ on my rig. The $150 price tag for the card was relatively insignificant, and should PhysX support take off, I'll be prepared. However, as a hobbyist who has worked with the PhysX SDK, I must say that it's a mighty fine API, and it's a shame that people aren't using it more.
 
Well I want a PCIe card just because I have so few PCI slots.

Right, if I had to pick one I'd want PCIe as well; my point, however, was to hold off completely until it actually has some [good] support, which may never happen.
 
I understand _why_ you said this, but I think that you're unable to see the forest for the trees here. While I grant that developer support has been spotty, it's got to start somewhere. And, unlike video acceleration, physics acceleration requires dedicated effort to utilize it. I suppose it is akin to developers and their use (or lack thereof) of the XBox360 HDD. Developers have to cater to the lowest common denominator. It's a vicious circle, but such is life.

I don't care about what might be. Until they make some games that actually have real, worthwhile content and effects that require a PhysX card, I won't buy one. Simple as that.

It is true that PCIe will not help insofar as bandwidth. PCI already supplies more bandwidth than the card can saturate. But, I do agree that it may help adoption for those people that don't want to use "legacy" components.

It isn't always that we don't want to use legacy components. There are physically unused PCIe x1 slots on most of our motherboards and no free PCI slots. Many of us have the few PCI slots we do have in use by other devices, or they are blocked by our video cards.

We're just now on the cusp of big-wave, first-tier support (UE3, etc.) for the engine on the PC side. How the card does in those environments will be the deciding element. I have a PhysX card. I bought it because at the time I was doing a "once in a blue moon" upgrade and was already spending $2500+ on my rig. The $150 price tag for the card was relatively insignificant, and should PhysX support take off, I'll be prepared. However, as a hobbyist who has worked with the PhysX SDK, I must say that it's a mighty fine API, and it's a shame that people aren't using it more.

Again, if they actually do something worthwhile with it, and the card will bring more to the table than what I've seen so far, then I will buy one. I'll have an open PCI slot shortly, but they really need a PCIe version.
 
The management team behind this physics card never did its basic homework.

Are you familiar with the system specs of the average gamer's machine? Steam ran a survey, and the results will shock those who are always immersed in or obsessed with the latest hardware. Less than 5 percent of the user base has Vista with an 8800-series card.

The idea that a $150 physics card would gain ground at this stage is ludicrous. Even at the high end, we are shelling out serious bank for Vista plus DX10 parts plus dual/quad core. To add something as niche as a physics card to that list is simply not viable product strategizing. And within this sliver of a market, they lose the SLI people off the bat.

Hopeless.
 
The point is well taken. I said that b/c I get into constant arguments about having a QC CPU in my next rig with a friend of mine, and that's the argument he gives me: Intel's Kentsfield is not true QC b/c it's split die.

It's silly that people even bother to argue about that.
It's just like the onboard memory controller... People would argue that Athlon64 was much faster than Pentium because of the integrated controller.
Then Core2 comes along, and beats the Athlon64 in every way possible... But it's still on the exact same chipsets, motherboards and memory controllers as the Pentiums were.

Bottom line is: It's not how you design it, it's how well it performs.
All these design ideas are nice in theory, but if you cannot implement them in such a way that you actually get a faster processor, who cares?
Heck, if the fastest processor was powered by blue chipmunks with pink scarfs, I'd still buy it. See what I mean?

Of course we all know that in theory an onboard controller or native quadcore could have some advantages, but as long as they aren't the faster processors, it means nothing. People who argue about that sort of stuff just don't get it.
 