SemiAccurate wrong about Nvidia 480GTX power use

That's funny, I haven't seen one review of the cards yet, have you? No one knows what the TDP or power draw is for the card except for the reviewers, who aren't leaking it, and I'm sure there is at least one site out there that would love to spill/leak that kind of info. Least of all Charlie and his GDC source. The GTX480 is 512 SPs. Again, the ONLY thing he has been accurate about is the timing, and even that wasn't for the reasons he gave.

What will you do if you're wrong? What if he is right about everything?


What's more, considering the GTX480 is launching a full six months after the 5870... what if ATI has a 5890 out there? Even if it has a 225W TDP, that's still only a bit higher than the 5870's 188W.

ATI can not only use a new spin of Cypress to get clocks up, they can also bump up the voltage. Current 5870s are hitting 1GHz+ with stock cooling.
 
Clever, but sadly no. Otherwise you'd need one for 2x 5970s as well, and that isn't true either.

It's funny as hell how you try so hard to prove Nvidia is GREAT AND ALMIGHTY whenever any negative posts/news articles come along.

I'm not biased in any way, but man, you are one of the worst fanbois, along with Dualown, that I've seen in a while.

Nvidia's 480 is going to be a power hog, and power-wise it will be compared to the DUAL-GPU 5970.

We all knew this coming into Fermi, yet you seem to think otherwise?...

Either way, when are you going to give up your Nvidia crusade and just wait until March 26th like the rest of us?
 
That's funny, I haven't seen one review of the cards yet, have you? No one knows what the TDP or power draw is for the card except for the reviewers, who aren't leaking it, and I'm sure there is at least one site out there that would love to spill/leak that kind of info. Least of all Charlie and his GDC source. The GTX480 is 512 SPs. Again, the ONLY thing he has been accurate about is the timing, and even that wasn't for the reasons he gave.

He never said that the top part wouldn't have 512 SPs, he just said that it would be a very, very low-volume part, or have greatly castrated clocks (actually he thought it would be 750MHz/1500MHz for the 480 up until February). And do you know why the part was delayed? Because his version sounds pretty plausible: TSMC + huge chip = unforeseen leakage and underperforming transistors.
 
The GTX480 requirements just look ridiculous to me. 15% faster than a 5870 and power requirements almost on par with a 5970? Wow. Better to just buy a 5970 or two 5850s.

What are the rumored power requirements for the GTX470? I think this would be the card most of us would be interested in (for instance, my wife would never let me buy a $500-600 video card).
 
The GTX480 requirements just look ridiculous to me. 15% faster than a 5870 and power requirements almost on par with a 5970? Wow. Better to just buy a 5970 or two 5850s.

What are the rumored power requirements for the GTX470? I think this would be the card most of us would be interested in (for instance, my wife would never let me buy a $500-600 video card).

Not sure on power specs, but the RUMORED price for the 470 (rumors ftl) is $499. Which your wife would not approve of lol.
 
Not sure on power specs, but the RUMORED price for the 470 (rumors ftl) is $499. Which your wife would not approve of lol.

Holy crap. $500 for a card that is roughly equal to a 5870? That's not good. Not good at all. I thought GTX470 was going to be priced competitively at $400 (no idea where I got that from) and the GTX480 at $500-600. I suppose I should pay more attention to the rumors.
 
Holy crap. $500 for a card that is roughly equal to a 5870? That's not good. Not good at all. I thought GTX470 was going to be priced competitively at $400 (no idea where I got that from) and the GTX480 at $500-600. I suppose I should pay more attention to the rumors.

Well, like I said, it's a rumor.

The RUMORED price of the 480 GTX is $599.

March 26th can't come soon enough.
 
I believe they think they can place it above the 5870's price but below the 5970's street price, thus $599.

It's too high for me right now, so I'll wait and see. Plus, both the 5870 and 5970 have DP, which is something I want to see. Since I don't "need" (not that these things have much to do with need) anything stronger GPU-wise, I can wait and see how things pan out. I liken this to the days of the first 280s. A slight respin got us a mildly faster (but nicely cooler) 285 and things got better. When just gaming, the 5870 or even the 5970 are likely to use less power than a comparable 480 or 470 SLI (the only real equivalent to a 5970). So $/perf and watts/perf are going to be in AMD's favor. I couldn't care less about the whole 3D thing, either on Blu-ray or on my computer.

But in this case we have a far better performing and better featured AMD chip, with:

Better perf/$
Cooler running parts
Better power usage at a given perf level
More up-to-date and future-ready features like DP
More developers having created their DX11 work on it (devs didn't get NV DX11 parts)
Skipped idiotic features like 3D and gave useful ones like Eyefinity
Crossfire works on any mobo - not some SLI-certified one
Working with MS on things like physics and GPU computing via DX11
Mobile parts that prove this chip can run cool - imagine a mobile GPU as fast as a 5770 CF
Developer support for such game-changing things as Eyefinity.

AMD is on their game on the desktop and on the mobile front. NV has Optimus, but no DX11 parts to go with it. They know it, and they can't escape the fact that the next-gen consoles will all be DX11, and thus from this year's end forward all games will be DX11. Think about things like Diablo 3 in DX11...

In a few weeks we can all chat about performance. But for now, the world is red, principally because NV only counts on making things bigger. In five years, mobile machines will be more than 75% of the market. That's not a lot of time to get away from the concept of "bigger" chips and into smaller, more compact ones, using dual and quad configurations to increase capability when required (meaning on a desktop).

We all know these things are academic for now. But I sure hope NV learns and prepares better (quickly) to pare things down a bunch and cover both markets far more efficiently. Of course, then you'd have AMD GPUs. :)
 
It's funny as hell how you try so hard to prove Nvidia is GREAT AND ALMIGHTY whenever any negative posts/news articles come along.

I'm not biased in any way, but man, you are one of the worst fanbois, along with Dualown, that I've seen in a while.

Nvidia's 480 is going to be a power hog, and power-wise it will be compared to the DUAL-GPU 5970.

We all knew this coming into Fermi, yet you seem to think otherwise?...

Either way, when are you going to give up your Nvidia crusade and just wait until March 26th like the rest of us?

I don't try to prove anything other than to correct the bullshit flowing from the fanATIcs' mouths in this place.
 
Better perf/$
MSRP for the GTX470 is not 400 bucks, it's closer to $325-350, which will give the 5870 a run for its cash here.

Cooler running parts
Really? So you know what the GTX 470 and 480 are at under idle and load without having seen a single review?

Better power usage at a given perf level
Only slightly, 470 vs. 5870.

More up-to-date and future-ready features like DP
For real? A port for monitors that has very little market penetration and adds to the cost of the monitor.

More developers having created their DX11 work on it (devs didn't get NV DX11 parts)
Ah, devs do have Fermi; the devs of Metro 2033 have said they have been working on one for a while.

Skipped idiotic features like 3D and gave useful ones like Eyefinity
Nvidia is bringing this too (albeit for now with SLI), but unlike ATI is extending it to both G200 and GF100 products, so even those with older setups can use it.

Crossfire works on any mobo - not some SLI-certified one
There is a reason why Nvidia certifies the boards: it's called Q/A.

Working with MS on things like physics and GPU computing via DX11
Yeah, get back to us again in another three years and let us know how that one turns out. ATI has been talking big about OpenCL and DirectCompute and even banged Havok's drum, but has shown shit and supported shit in either arena. Meanwhile, PhysX does get used.

Mobile parts that prove this chip can run cool - imagine a mobile GPU as fast as a 5770 CF
Would be nice if NV would do this, but it will take a major change in focus, as you suggested.

Developer support for such game-changing things as Eyefinity.
PhysX is also game-changing; Metro 2033 will show this.
 
Developer support for such game-changing things as Eyefinity.
PhysX is also game-changing; Metro 2033 will show this.

How is PhysX game-changing? Metro 2033 is not even out, so how do you even know?
It's like saying PhysX is the future without any proof of it...

I don't try to prove anything other than to correct the bullshit flowing from the fanATIcs' mouths in this place.


Uh? So whatever you assume and say is correct, but everyone else is wrong...

Is that what you mean there?
 
Not only that, but Intel and AMD will not like PhysX becoming dominant. You will see both AMD and Intel pushing OpenCL-based solutions so that CPUs are kept in the game. After all, why are gamers going to buy 12-core chips later this year and next year if games only use 1 or 2 CPU cores?
 
The amount of fanboyism here.. I applaud and laugh at you for being so dedicated to a brand.
 
I wonder if any of the video card companies are going to offer a lifetime warranty with Fermi...
Of course they are going to offer a limited lifetime warranty* on Fermi cards.

* Warranty limited to the lifetime of the warranted card.
 
Since when is power consumption a major deal and worth crying about? What is this, [G]reen|Forum?

I see Fermi doing decent; I really doubt it will be a really, really bad card as many are predicting. We will see March 26th. I am in the market for a new video card, and I'm not gonna let unofficial jackasses cloud my judgment.
 
Since when is power consumption a major deal and worth crying about? What is this, [G]reen|Forum?

I see Fermi doing decent; I really doubt it will be a really, really bad card as many are predicting. We will see March 26th. I am in the market for a new video card, and I'm not gonna let unofficial jackasses cloud my judgment.


So you're telling me, all things being equal, you wouldn't take the less power-hungry card? If the numbers are right, the 5870 is 188W while the 470 is 225W. Current rumors say the 5870 will be faster in the majority of cases.

So why wouldn't power matter? You're getting better performance and paying less for it. Power adds up. Say you want to go SLI or CrossFire: two 5870s will have a maximum power draw of 376W, but two 470s will have 450W. It starts to add up.

The most important thing is idle power, though. The 5870 is really good at that. Hopefully the GeForces are too. I leave my PC on 24/7, so it's important to me that it doesn't waste energy.
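
To put some rough numbers on that 24/7 point, here's a quick back-of-the-envelope sketch. The 37W gap is just the rumored 225W vs. 188W figures treated as a constant difference, and the electricity rate is an assumed example, not anyone's actual bill:

```cpp
// Rough yearly cost of a constant extra power draw, running 24/7.
// Every number here is an assumption for illustration, not a measurement.
#include <cstdio>

int main() {
    const double extraWatts   = 37.0;         // rumored 225 W vs. 188 W gap
    const double ratePerKwh   = 0.12;         // assumed electricity price, $/kWh
    const double hoursPerYear = 24.0 * 365.0;

    const double extraKwh = extraWatts / 1000.0 * hoursPerYear;
    std::printf("%.0f kWh/year -> about $%.2f/year\n", extraKwh, extraKwh * ratePerKwh);
    // ~324 kWh/year -> roughly $39/year under these assumptions; double it for SLI/CF.
    return 0;
}
```

Not a fortune on its own, but it scales with dual cards and stacks on top of the cooling costs mentioned below.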
 
The amount of fanboyism here.. I applaud and laugh at you for being so dedicated to a brand.

Agree. Every day it's "nVidiots" crying about "fanATIcs" and vice-versa. I'll give them two weeks.

I also always get a chuckle out of those who frankly don't give a damn about power requirements. That's an asinine way of ignoring prevailing trends in technology (even from a performance perspective). Never mind the fact that I'll never find a GF100 that will run on my 300W PSU.
 
It's not [G]reen|Forum, but some of us live in places where power is quite expensive and it's warm, so you get the double whammy of having a card use a ton of power and then having to use that much MORE power to cool down your house with AC. So just having a 275W GTX480 would mean 550W of cooling. It adds up.
 
Since when is power consumption a major deal and worth crying about? What is this, [G]reen|Forum?

I see Fermi doing decent; I really doubt it will be a really, really bad card as many are predicting. We will see March 26th. I am in the market for a new video card, and I'm not gonna let unofficial jackasses cloud my judgment.

Since a single GPU (or dual solution) can saturate an entire mid-range PSU. It was never a problem before because we were still in the nascent stages of the power buildup. But I'm sorry, needing a 1kW PSU for an SLI solution with a quad-core CPU and some HDs is ridiculous. I have a small loft room, and I can literally go all winter without heating the place between my 24" CRT and my 750W system (of which probably only 400-500W is actually used).
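
For what it's worth, that "only 400-500W actually used" guess is just napkin math along these lines; every wattage below is an assumed round number for illustration, not a measurement of my box:

```cpp
// Ballpark power budget for a quad-core + single high-end GPU system.
// All wattages are assumed round numbers for illustration only.
#include <cstdio>

int main() {
    struct Part { const char* name; int watts; } parts[] = {
        {"quad-core CPU (load)",       130},
        {"single high-end GPU (load)", 225},
        {"motherboard + RAM",           60},
        {"hard drives",                 20},
        {"fans, optical, misc",         30},
    };
    int total = 0;
    for (const auto& p : parts) total += p.watts;
    std::printf("Estimated load: %d W\n", total);   // ~465 W, comfortable on a 750 W PSU
    return 0;
}
```

Double the GPU line for SLI and you can see why the 1kW recommendations start showing up.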
 
I see Xman didn't get my last memo. Metro 2033 & PhysX certainly aren't "game changing" unless they somehow figured out a way to convince nVidia to allow fine-grained multithreaded CPU physics on the PC. Otherwise it's going to be less "game changing" than Batman was. I'm fine with GPU physics, but I refuse to support a company and product that artificially castrates the previously supported features from the company they bought the tech from (both ATI + PhysX and CPU + PhysX) in order to promote their rebadged solution.

Also, as far as I know, PhysX doesn't have a lot of support on the consoles, where the tools are free and CPU fine-grained multithreading is enabled. Far more titles use the Havok package, and since the biggest differentiating factor between physics packages is speed, most likely PhysX is less efficient than Havok is (in fact devs prefer Havok despite PhysX being somewhat more flexible... able to do deformations and some basic fluid dynamics; afaik that's the major difference).

This isn't a physics discussion. On performance: it is likely that by the time the 480 is in volume production, ATI will have a refresh of its 5870 that outperforms it in a significantly smaller (say 50W lower, for the sake of a reasonable guess) thermal package.

PhysX being a competitive advantage is a stumbling block for ATI, but luckily everyone hates PhysX and wants to move toward platform-agnostic physics.
 
It's not [G]reen|Forum, but some of us live in places where power is quite expensive and it's warm, so you get the double whammy of having a card use a ton of power and then having to use that much MORE power to cool down your house with AC. So just having a 275W GTX480 would mean 550W of cooling. It adds up.

There is a flip side to this. Some of us live in places that are cold more often than warm, and a space-heater of a PC comes in handy for much of the year.
:p
 
I actually started laughing, right in the middle of a meeting.

Good show. I love satire.

I said MSRP, or do you not know what that means? What they actually go for until units start hitting the market in good quantities is another thing.
 
I see Xman didn't get my last memo. Metro 2033 & PhysX certainly aren't "game changing" unless they somehow figured out a way to convince nVidia to allow fine-grained multithreaded CPU physics on the PC. Otherwise it's going to be less "game changing" than Batman was. I'm fine with GPU physics, but I refuse to support a company and product that artificially castrates the previously supported features from the company they bought the tech from (both ATI + PhysX and CPU + PhysX) in order to promote their rebadged solution.

Also, as far as I know, PhysX doesn't have a lot of support on the consoles, where the tools are free and CPU fine-grained multithreading is enabled. Far more titles use the Havok package, and since the biggest differentiating factor between physics packages is speed, most likely PhysX is less efficient than Havok is (in fact devs prefer Havok despite PhysX being somewhat more flexible... able to do deformations and some basic fluid dynamics; afaik that's the major difference).

This isn't a physics discussion. On performance: it is likely that by the time the 480 is in volume production, ATI will have a refresh of its 5870 that outperforms it in a significantly smaller (say 50W lower, for the sake of a reasonable guess) thermal package.

PhysX being a competitive advantage is a stumbling block for ATI, but luckily everyone hates PhysX and wants to move toward platform-agnostic physics.

The developer himself said Metro WILL make use of multi-core/multi-threaded capable CPUs as well as GPUs that support it. He clearly stated they put in the TIME and EFFORT to make sure it worked as they programmed it. Blame the bullshit devs for half-assing PhysX's ability to use multi-core/threaded capable systems.

No, it's just that a lot of people, including yourself, are more apt to go for half-baked physics solutions, because the dev figured pre-programmed, rail-induced nausea was better than doing physics right.
 
The developer himself said Metro WILL make use of multi-core/multi-threaded capable CPUs as well as GPUs that support it. He clearly stated they put in the TIME and EFFORT to make sure it worked as they programmed it. Blame the bullshit devs for half-assing PhysX's ability to use multi-core/threaded capable systems.

No, it's just that a lot of people, including yourself, are more apt to go for half-baked physics solutions, because the dev figured pre-programmed, rail-induced nausea was better than doing physics right.

Again, it isn't a dev thing. It means that Nvidia either changed this policy for this title, or changed it recently across the board. For everything Batman and prior, Nvidia has restricted CPU PhysX to a single core.
 
Again, it isn't a dev thing. It means that Nvidia either changed this policy for this title, or changed it recently across the board. For everything Batman and prior, Nvidia has restricted CPU PhysX to a single core.

Right. Dude, you really need to stop. The SDK for the PhysX library is the same for the Xbox 360, PS3, Wii, and iPhone as it is for the PC. If it isn't using multiple threads on the PC, it's because the developer didn't bother to set it up to do that. You really need to stop with your strawhat theories.
 
http://www.pcgameshardware.com/aid,706182/Exclusive-tech-interview-on-Metro-2033/News/

"PCGH: It could be read that your game offers an advanced physics simulation as well as a support for Nvidia's PhysX (GPU calculated physics) can you tell us more details here?

Does regular by CPU calculated physics affect visuals only or is it used for gameplay terms like enemies getting hit by shattered bits of blown-away walls and the like?
Dean Sharpe: Yes, the physics is tightly integrated into game-play. And your example applies as well.

PCGH: Besides PhysX support why did you decide to use Nvidia's physics middleware instead of other physics libraries like Havok or ODE? What makes Nvidia's SDK so suitable for your title?

Dean Sharpe: We've chosen the SDK back when it was Novodex SDK (that's even before they became AGEIA). It was high performance and feature reach solution. Some of the reasons why we did this - they had a complete and customizable content pipeline back then, and it was important when you are writing a new engine by a relatively small team

PCGH: What are the visual differences between physics calculated by CPU and GPU (via PhysX, OpenCL or even DX Compute)? Are there any features that players without an Nvidia card will miss? What technical features cannot be realized with the CPU as "physics calculator"?

Oles Shishkovstov: There are no visible differences as they both operate on ordinary IEEE floating point. The GPU only allows more compute heavy stuff to be simulated because they are an order of magnitude faster in data-parallel algorithms. As for Metro2033 - the game always calculates rigid-body physics on CPU, but cloth physics, soft-body physics, fluid physics and particle physics on whatever the users have (multiple CPU cores or GPU). Users will be able to enable more compute-intensive stuff via in-game option regardless of what hardware they have."

Now, if they have been using the SAME FUCKING code since it was Novodex, which is WAY before Nvidia ever bought them up, how the hell are they able to keep it working on multi-core/threaded capable setups while every other dev doesn't? Simple: they took the TIME and EFFORT to code it to do it. Also, I don't know why the name changed from when the article/interview was first published. Maybe Oles Shishkovstov translates into Dean Sharpe, or DS conducted it or translated it.
 
Right. Dude, you really need to stop. The SDK for the PhysX library is the same for the Xbox 360, PS3, Wii, and iPhone as it is for the PC. If it isn't using multiple threads on the PC, it's because the developer didn't bother to set it up to do that. You really need to stop with your strawhat theories.

Wrong, the SDK is not the same. Each platform uses its own highly tuned version. The PC SDK has a good GPU library, fully supported by Nvidia, and a poor CPU portion that has apparently been left unchanged since the unoptimized heap that Ageia released six years ago. I'm not sure about this, but I assume that certain GPU libraries are not available in multi-threaded CPU versions (this was at least the situation with Novodex). That's not hearsay; there's been plenty of discussion and evidence of PhysX's awful CPU performance/utilization. The Novodex SDK was free for commercial use; that's probably why they chose it. It certainly wasn't because Ageia's CPU code was better than Havok's. Nvidia approaches devs to code the extra physics effects for their GPU in exchange for some money. It has been demonstrated that the CPU fallback to the GPU libraries is single-core, and in general it doesn't seem to fully utilize (never mind efficiently) even that one core. The developer can spend a few months trying to implement a multi-threaded version of the GPU libraries, or just use the fallback and save themselves a lot of money. That's why we have such a 'leap' in performance vs. the CPU.
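
To be concrete about what "multi-threaded CPU physics" even means here, below is a generic, hypothetical sketch of farming independent simulation islands out to worker threads. It is NOT the PhysX SDK API (the struct and function names are made up for illustration); it's just the general technique being argued about:

```cpp
// Generic sketch: step independent physics "islands" on multiple CPU cores.
// Hypothetical illustration of the technique, not actual PhysX SDK code.
#include <cstdio>
#include <thread>
#include <vector>

struct Body { float pos[3], vel[3], acc[3]; };   // made-up minimal rigid body

static void stepIsland(std::vector<Body>& island, float dt) {
    for (auto& b : island)
        for (int i = 0; i < 3; ++i) {
            b.vel[i] += b.acc[i] * dt;   // integrate velocity
            b.pos[i] += b.vel[i] * dt;   // integrate position
        }
}

int main() {
    // Eight islands of bodies that don't interact with each other this frame.
    std::vector<std::vector<Body>> islands(
        8, std::vector<Body>(100, Body{{0, 1, 0}, {0, 0, 0}, {0, -9.8f, 0}}));
    const float dt = 1.0f / 60.0f;

    std::vector<std::thread> workers;
    for (auto& island : islands)                   // one worker per island here;
        workers.emplace_back(stepIsland, std::ref(island), dt);   // a real engine would reuse a pool
    for (auto& t : workers) t.join();

    std::printf("vel.y after one step: %f\n", islands[0][0].vel[1]);
    return 0;
}
```

Whether the PC PhysX runtime actually does anything like this on the CPU path is exactly the point in dispute.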

If Nvidia were shooting straight, their PC SDK libraries would be highly optimized for CPUs as well as GPUs, and then they could point to their GPU performance and demonstrate how much can really be gained by going to GPU-side physics. Instead we have the following:

http://techreport.com/articles.x/17618/13
http://www.hardocp.com/article/2009/10/19/batman_arkham_asylum_physx_gameplay_review/11

Interestingly enough, when you enable PhysX in Batman, your CPU usage goes DOWN http://forum.beyond3d.com/showthread.php?t=54786&page=24

There are at least a half dozen examples. The only PhysX CPU multi-threaded title that I'm aware of is UT3 (very basic physics effects), and I do not believe the bonus Ageia "PhysX" levels (the only GPU-physics-enabled portions of the game) utilize beyond one core (and poorly at best). Note in the HardOCP article that the change in CPU usage with PhysX enabled on the CPU vs. PhysX enabled on the GPU is a whopping 4%. You'd be naive to think this isn't intentional. Nvidia loves grey marketing tactics like this.

The argument that it is somehow a selling point for Nvidia comes up empty unless GPU physics demonstrates substantial improvements over fully threaded CPU physics. I should see nearly 100% utilization on my Nehalem in scenes where there is a major performance boost from calculating on the GPU. If that doesn't happen, then I have to say it's a selling point for their marketing department, not their hardware. It sounds like in Metro 2033's case an easy comparison can be made, so we'll lay that argument to rest soon. I'm relatively confident that what we've seen with PhysX so far can be run at a fairly good clip on top-end CPUs, and that Nvidia is engaging in their usual deceitful marketing practices.

And FYI, I'm not using any logical fallacies in this discussion. It's not a straw man argument (what's a "strawhat", lol), because that's not what straw man means: I'm not misrepresenting your position.
 
MSRP for the GTX470 is not 400 bucks, it's closer to $325-350, which will give the 5870 a run for its cash here.

I GUARANTEE that the GTX 470 will be sold for no less than $399, and more likely $499. Yields are far too shitty for anything less. You have absolutely no data to indicate the MSRPs of which you speak.
Posted via [H] Mobile Device
 
I GUARANTEE that the GTX 470 will be sold for no less than $399, and more likely $499. Yields are far too shitty for anything less. You have absolutely no data to indicate the MSRPs of which you speak.
Posted via [H] Mobile Device

Not to back him on this, but MSRPs are usually set based on target costing, which means your MSRP is going to be set based on your market analysis: what you believe customers are willing to pay. Their yields won't affect the MSRP, or at least they shouldn't.

That being said, I'm not sure why it matters if it is priced at $399 or $349 or $325 (OK, the latter is not going to happen... I don't believe a GPU has ever debuted at $325). It performs between a 5850 and a 5870. Nvidia believes PhysX is a selling point, so they'll price the 470 above the 5850 and below the 5870. The 480 doesn't look like it will be below $499. Gamers will either buy into the premise [that PhysX confers a value of $50-100 to them] or not; personally, I feel they will not. But in terms of performance it certainly will not be worth the premium, and that's all we're discussing. Who cares if Nvidia can con people into buying their product because of their shady, anti-competitive PhysX strategy? The hardware isn't worth it at those prices. As tech enthusiasts, that's our #1 concern, isn't it?
 
Wrong, the SDK is not the same. Each platform uses its own highly tuned version. The PC SDK has a good GPU library, fully supported by Nvidia, and a poor CPU portion that has apparently been left unchanged since the unoptimized heap that Ageia released six years ago. I'm not sure about this, but I assume that certain GPU libraries are not available in multi-threaded CPU versions (this was at least the situation with Novodex). That's not hearsay; there's been plenty of discussion and evidence of PhysX's awful CPU performance/utilization. The Novodex SDK was free for commercial use; that's probably why they chose it. It certainly wasn't because Ageia's CPU code was better than Havok's. Nvidia approaches devs to code the extra physics effects for their GPU in exchange for some money. It has been demonstrated that the CPU fallback to the GPU libraries is single-core, and in general it doesn't seem to fully utilize (never mind efficiently) even that one core. The developer can spend a few months trying to implement a multi-threaded version of the GPU libraries, or just use the fallback and save themselves a lot of money. That's why we have such a 'leap' in performance vs. the CPU.

If Nvidia were shooting straight, their PC SDK libraries would be highly optimized for CPUs as well as GPUs, and then they could point to their GPU performance and demonstrate how much can really be gained by going to GPU-side physics. Instead we have the following:

http://techreport.com/articles.x/17618/13
http://www.hardocp.com/article/2009/10/19/batman_arkham_asylum_physx_gameplay_review/11

Interestingly enough, when you enable PhysX in Batman, your CPU usage goes DOWN http://forum.beyond3d.com/showthread.php?t=54786&page=24

There are at least a half dozen examples. The only PhysX CPU multi-threaded title that I'm aware of is UT3 (very basic physics effects), and I do not believe the bonus Ageia "PhysX" levels (the only GPU-physics-enabled portions of the game) utilize beyond one core (and poorly at best). Note in the HardOCP article that the change in CPU usage with PhysX enabled on the CPU vs. PhysX enabled on the GPU is a whopping 4%. You'd be naive to think this isn't intentional. Nvidia loves grey marketing tactics like this.

The argument that it is somehow a selling point for Nvidia comes up empty unless GPU physics demonstrates substantial improvements over fully threaded CPU physics. I should see nearly 100% utilization on my Nehalem in scenes where there is a major performance boost from calculating on the GPU. If that doesn't happen, then I have to say it's a selling point for their marketing department, not their hardware. It sounds like in Metro 2033's case an easy comparison can be made, so we'll lay that argument to rest soon. I'm relatively confident that what we've seen with PhysX so far can be run at a fairly good clip on top-end CPUs, and that Nvidia is engaging in their usual deceitful marketing practices.

And FYI, I'm not using any logical fallacies in this discussion. It's not a straw man argument (what's a "strawhat", lol), because that's not what straw man means: I'm not misrepresenting your position.


You are still full of it. If one dev can do it right, why can't the others? I say it's because they are lazy and nothing more. And the SDK for PhysX is the same across the board; if it wasn't, this bloody well wouldn't be true, coming from a DEV THEMSELVES.

"PCGH: You announced that you game will be developed for PC, Xbox 360 and PS3 and that the PC version will come with DX11 and GPU Physx support. So we assume that the Game won't be developed as a pure cross platform title. Is that right? Do you develop the game for each platform separately?
Oles Shishkovstov: Why? I don't understand your logic. The same code executes on all of the mentioned platforms."

From the link I provided, or did you not bother to read it? If the SDK was NOT the same, there is no way in hell it would be the SAME CODE for all platforms. And you still ignored a dev flat-out saying IT WORKS. They've had support from Nvidia since Nvidia bought Ageia, but yet somehow, mysteriously, Nvidia let the devs of Metro 2033 utilize multi-core/threaded CPUs for CPU PhysX and NO OTHER FUCKING DEV? Dude, you seriously need a wake-up call. It's the devs being lazy and nothing more.
 
You are still full of it. If one dev can do it right, why can't the others? I say it's because they are lazy and nothing more. And the SDK for PhysX is the same across the board; if it wasn't, this bloody well wouldn't be true, coming from a DEV THEMSELVES.

"PCGH: You announced that you game will be developed for PC, Xbox 360 and PS3 and that the PC version will come with DX11 and GPU Physx support. So we assume that the Game won't be developed as a pure cross platform title. Is that right? Do you develop the game for each platform separately?
Oles Shishkovstov: Why? I don't understand your logic. The same code executes on all of the mentioned platforms."

From the link I provided, or did you not bother to read it? If the SDK was NOT the same, there is no way in hell it would be the SAME CODE for all platforms.

So explain to us why some PhysX games with the PhysX card disabled only use 1 core when there are 3 more cores to use?

Don't come and tell me they cannot use those extra 3 cores for physics... they can... they just aren't, for a reason.
 
So explain to us why some PhysX games with the PhysX card disabled only use 1 core when there are 3 more cores to use?

Don't come and tell me they cannot use those extra 3 cores for physics... they can... they just aren't, for a reason.

Read my edit and get a clue.
 
You are still full of it. If one dev can do it right, why can't the others? I say it's because they are lazy and nothing more. And the SDK for PhysX is the same across the board; if it wasn't, this bloody well wouldn't be true, coming from a DEV THEMSELVES.

"PCGH: You announced that you game will be developed for PC, Xbox 360 and PS3 and that the PC version will come with DX11 and GPU Physx support. So we assume that the Game won't be developed as a pure cross platform title. Is that right? Do you develop the game for each platform separately?
Oles Shishkovstov: Why? I don't understand your logic. The same code executes on all of the mentioned platforms."

From the link I provided, or did you not bother to read it? If the SDK was NOT the same, there is no way in hell it would be the SAME CODE for all platforms. And you still ignored a dev flat-out saying IT WORKS. They've had support from Nvidia since Nvidia bought Ageia, but yet somehow, mysteriously, Nvidia let the devs of Metro 2033 utilize multi-core/threaded CPUs for CPU PhysX and NO OTHER FUCKING DEV? Dude, you seriously need a wake-up call. It's the devs being lazy and nothing more.

Dude, are you supposed to be smart? Because you come off sounding like a total ass with no references to back you up, other than one developer who has to speak well of the product he is using, because if he didn't, he'd get screwed.
 
Dude, are you supposed to be smart? Because you come off sounding like a total ass with no references to back you up, other than one developer who has to speak well of the product he is using, because if he didn't, he'd get screwed.

So your contention, then, is that he is lying about his code being the same for all the platforms he made it for, and that PhysX can and will use multi-core/multi-threaded setups for physics and performance increases?
 