The Next Big Thing?

Wesley1357 said:
What?

I heard that Sony is not going to release the PS3 right away after all.

I think what your earlier comment was supposed to say was "more or less" ;). But I understand your point. PS3 is not going to be launched for a while yet.
 
I definitely like the idea of a GPU socket. It would surely make things interesting.

It could, however, also suck. If the socket were integrated on the motherboard rather than on a card, then Nvidia and ATI would probably each use a different pinout.

This just makes it that much more confusing for the average PC user.

I guess it wouldn't be TOO bad, though, since SLI and Crossfire cannot exist on the same board as it is.
 
It would likely be either a 'riser' card or simply a card like today's, but with a replaceable GPU in a socket.
At this point in time, cramming in all the features of a full-fledged video card, including the memory, the capacitors, and the socket for the GPU itself... I don't see that being possible without a separate riser (which could become a standard), but it would most likely all stay within a PCI-E card or something similar.
 
dR.Jester said:
Yea, that means video cards will be $900. ;)

Wrong. They will be more! Asus has a dual-core 512MB 7800GT with an external 80W PSU.

And guess what: it's $900 and can be run in SLI.
 
I think you need an SLI board to run those "dual core" cards anyway. It's just an SLI setup without having two cards.
 
More efficient systems: less power, less heat, the same or higher performance.
 
kirbyrj said:
I think you need an SLI board to run those "dual core" cards anyway. It's just an SLI setup without having two cards.

Correct, to an extent.
 
Elios said:
Let's see: 3,000 rigid bodies via CPU, or 10,000 with the add-in card. Plus it can do fluids (btw, air is a fluid), and it can do many other types of physics too, like hair and cloth.

You mean 300-500 for a CPU. A PPU will crush dual core: ~1,000 objects vs. 8,000-32,000 sounds a lot better to me. Especially since it wouldn't be much more expensive; PPUs are gonna be $300 or less.
 
How is Windows Vista gonna change the vid card scene? And when is it due to arrive?
 
psychot|K said:
How is Windows Vista gonna change the vid card scene? And when is it due to arrive?

Well, Vista is probably gonna change the MONITOR scene, because of that content protection scheme that keeps video from displaying at full resolution if the hardware isn't approved. You're supposed to need hardware in your monitor that supports it to make it work.

The big thing in graphics cards is DirectX 10. I'm not sure of the details, but expect unified pipelines within a generation or two as well.

Oh yeah, expect Vista in about a year, I think.
 
Vista and D3D10 will drastically change graphics in your games. I'm not going to type out all of the details here, but D3D10 games will look simply awesome and run extremely fast. Games will be able to have more textures, more animation (both in terms of animation skinning and particles/debris flying around), faster shadows, and so on and so forth. D3D10 is, without a doubt, the big thing the OP is asking for in the next year.
 
I am looking forward to the Creative Videoblaster e-pen0s edition. It will come with a collectors edition dildo that acts like an external heat-sink when screwed into the back of the card.
 
psychot|K said:
What's the benefit to users of this 'monitor protection' stuff in Vista?

Nothing. HD DVDs will be downconverted to 960x540 if you do not have an HDMI/HDCP connection to the monitor. So movie playback over analog RGB or non-HDCP DVI connections will be downconverted from full HD resolution to 960x540.

Doesn't hurt games or normal computer use.
 
Verge said:
Ageia is a sham.
Wesley1357 said:
Please explain?

Ageia is probably going under.

The word is that GPUs, like CPUs, are going dual-core in the near future. The sister core would probably be used for physics processing, eliminating the need for a separate physics processor.
 
Skolar said:
Ageia is probably going under.

The word is that GPUs, like CPUs, are going dual-core in the near future. The sister core would probably be used for physics processing, eliminating the need for a separate physics processor.

That would still be slower than a dedicated physics chip like the one Ageia is showing off.

It will be interesting to track progress on this front and see which way the industry goes; it's all up to the game developers.
 
From the [H] workshop @ Quakecon it seems that the physics chip is going to be insanely powerful. So I vote for Ageia as the next big/expensive thing.
 
Skolar said:
Ageia is probably going under.

The word is that GPUs, like CPUs, are going dual-core in the near future. The sister core would probably be used for physics processing, eliminating the need for a separate physics processor.

Don't forget that AGEIA is more than just the PhysX chip. They also own NovodeX, the physics SDK they've licensed to a ton of video game companies, and it's built into Unreal Engine 3. They're a LONG way from going under.
 
Cypher19 said:
Don't forget that AGEIA is more than just the PhysX chip. They also own NovodeX, the physics SDK they've licensed to a ton of video game companies, and it's built into Unreal Engine 3. They're a LONG way from going under.

Yeah, even if the PhysX turns out to be a total flop (which I don't think it will), AGEIA will remain in business.

Also, GPUs cannot be dual-core in the same sense as a dual-core CPU. You could even say they are already multi-core, because they have multiple pipelines. If you had two physical cores on one die, you would see the same performance gain as you see with SLI. Games cannot be "GPU-multithreaded" the way they can be "CPU-multithreaded"; the best you can do is split the load between the two cores evenly, as is done in SLI. Also, a GPU isn't specially designed to do physics, whereas a PhysX is, so using a second GPU core to do physics is a bad idea.

Someone please correct me if I'm wrong with what I said above but I think it's correct.
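To picture the "split the load evenly" part, here's a rough sketch in CUDA-style code of two GPUs each taking exactly half of a frame's worth of independent work (purely illustrative; this is not how SLI drivers actually work, and every name in it is made up):

Code:
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for per-pixel rendering work; every item is independent.
__global__ void shadeHalf(float* out, int offset, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        out[i] = (offset + i) * 0.5f;
}

int main() {
    const int total = 1 << 20;    // one frame's worth of work items
    const int half  = total / 2;  // an even split, SLI-style
    float* buf[2];

    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);       // select one of the two GPUs
        cudaMalloc((void**)&buf[dev], half * sizeof(float));
        int threads = 256, blocks = (half + threads - 1) / threads;
        shadeHalf<<<blocks, threads>>>(buf[dev], dev * half, half);
    }
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();  // the frame is done only when BOTH halves are
        cudaFree(buf[dev]);
    }
    puts("both halves done");
    return 0;
}

The two cores never help each other mid-task; the only win comes from dividing the work up front, which is exactly the SLI situation.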
 
HOCP4ME said:
Yeah, even if the PhysX turns out to be a total flop (which I don't think it will), AGEIA will remain in business.

Also, GPUs cannot be dual-core in the same sense as a dual-core CPU. You could even say they are already multi-core, because they have multiple pipelines. If you had two physical cores on one die, you would see the same performance gain as you see with SLI. Games cannot be "GPU-multithreaded" the way they can be "CPU-multithreaded"; the best you can do is split the load between the two cores evenly, as is done in SLI. Also, a GPU isn't specially designed to do physics, whereas a PhysX is, so using a second GPU core to do physics is a bad idea.

Someone please correct me if I'm wrong with what I said above but I think it's correct.

Sounds right to me. I can't wait for PhysX. :)
 
HOCP4ME said:
Also, a GPU isn't specially designed to do physics, whereas a PhysX is, so using a second GPU core to do physics is a bad idea.

Someone please correct me if I'm wrong with what I said above but I think it's correct.

I disagree with this. GPU programmability today is very flexible and lends itself to doing things other than 3D graphics acceleration. Used correctly, a GPU can do many specific tasks many times faster than a CPU, physics among them.
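To illustrate: per-object physics updates are independent of each other, so a GPU can step thousands of them in one parallel pass. A minimal sketch of the idea in CUDA-style code (an illustration of the general pattern, not Ageia's or ATI's actual method):

Code:
#include <cuda_runtime.h>

// One thread per rigid body: every object integrates independently,
// which is exactly the data-parallel shape a GPU is built for.
struct Body { float3 pos, vel; };

__global__ void integrate(Body* bodies, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    Body b = bodies[i];
    b.vel.y -= 9.81f * dt;        // gravity
    b.pos.x += b.vel.x * dt;      // plain Euler integration
    b.pos.y += b.vel.y * dt;
    b.pos.z += b.vel.z * dt;
    if (b.pos.y < 0.0f) {         // crude ground-plane bounce
        b.pos.y = 0.0f;
        b.vel.y *= -0.5f;
    }
    bodies[i] = b;
}

// Launch example: 10,000 bodies stepped in a single kernel call.
// integrate<<<(10000 + 255) / 256, 256>>>(d_bodies, 10000, 1.0f / 60.0f);

Real collision detection between bodies is much harder to parallelize than this, which is part of why a dedicated PPU is still attractive.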
 
Opening up the GPU to other applications to offload work from the CPU, distributed computing for example, is another possibility. The GPU is really an underutilized piece of hardware.

People are already playing around with this. The scientific community was the first to spark the idea; ATI, from what I read, is seriously looking into it, but I haven't heard anything from Nvidia.

http://www.cs.sunysb.edu/~vislab/projects/urbansecurity/GPUcluster_SC2004.pdf
http://www.csit.fsu.edu/~blanco/gpusc/gpusc_project.htm

As for that PPU, or PhysX processor: either ATI or Nvidia will buy them out, license the technology to build it into their own GPUs, or just develop their own. Of course, unfortunately, M$ will have to get involved in order to set a standard for everyone to follow. While game writers can, and do, tweak their games a little for either ATI or Nvidia, their games still have to run on both companies' products. That's why you need a common set of standards for both to follow, thus M$'s DirectX.

While we are all talking here about the next generation of video cards, let's face it: just like the CPU on our motherboards, the GPU is not even nearly utilized to its full potential, judging by the demos that come out from ATI and Nvidia versus what we get from the game industry.
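The offload pattern those projects describe boils down to: ship the big data-parallel chunk of a job to the GPU and leave the CPU only a small residue to finish. A hedged sketch of that pattern (illustrative CUDA-style code, not taken from either linked project):

Code:
#include <cuda_runtime.h>
#include <cstdio>

// Each thread folds a strided slice of the input into one partial sum;
// the CPU finishes the tiny final reduction.
__global__ void partialSums(const float* data, int n, float* partial) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    float s = 0.0f;
    for (int i = tid; i < n; i += stride)
        s += data[i];
    partial[tid] = s;
}

int main() {
    const int n = 1 << 22;                 // ~4 million values
    const int threads = 256, blocks = 64;  // 16,384 partial sums
    float *d_data, *d_partial;
    cudaMalloc((void**)&d_data, n * sizeof(float));
    cudaMalloc((void**)&d_partial, threads * blocks * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));  // stand-in for a real dataset

    partialSums<<<blocks, threads>>>(d_data, n, d_partial);

    float host[threads * blocks];
    cudaMemcpy(host, d_partial, sizeof(host), cudaMemcpyDeviceToHost);
    float total = 0.0f;
    for (int i = 0; i < threads * blocks; ++i)
        total += host[i];                  // CPU does only this last bit
    printf("sum = %f\n", total);

    cudaFree(d_data);
    cudaFree(d_partial);
    return 0;
}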
 
Brent_Justice said:
I disagree with this. GPU programmability today is very flexible and lends itself to doing things other than 3D graphics acceleration. Used correctly, a GPU can do many specific tasks many times faster than a CPU, physics among them.

Yes, but as you said before, the capabilities of Ageia's new PPU will override GPU physics rendering.
 
GFreeman9 said:
Yes, but as you said before, the capabilities of Ageia's new PPU will override GPU physics rendering.

Yeah, a processor with (almost) 300 million transistors built to do one job only is going to do that job better than anything else.
 
GoHack said:
While we are all talking here about the next generation of video cards, let's face it: just like the CPU on our motherboards, the GPU is not even nearly utilized to its full potential, judging by the demos that come out from ATI and Nvidia versus what we get from the game industry.

Yes, there is a problem with lazy code writers in the game industry. They don't optimize things well, but if the GPU weren't being utilized fully, you'd think it wouldn't be the bottleneck in modern games. It very much is, so the game writers must be doing at least a somewhat decent job.
 
It's great having 32 gajillion completely independent, physics-driven objects or whatever with these new cards, but it also means a lot more rendering power will be needed, no?
 
tornadotsunamilife said:
It's great having 32 gajillion completely independent, physics-driven objects or whatever with these new cards, but it also means a lot more rendering power will be needed, no?

No, the same amount of stuff is being rendered; it just means your interaction with the environment is more detailed.
 
LOL, I promised myself I'd wait a bit before jumping on the physics card add-on bandwagon, but if games start coming out soon with support, I will be very tempted. Anyone know if DX10 is going to incorporate some kind of physics API?

As for the future of video cards, I think DX10 (as a whole, not just the D3D part) will be a big step forward, and I hope nVidia and ATi have the hardware to do it justice.
 
I'm hoping ATi will restart their liquid metal cooling project (65x more effective than water! :eek:). The company they were working with pulled the plug because they said liquid metal cooling would be too expensive to use in video cards.
 
kirbyrj said:
And at that point, I'll probably have to get an Xbox 360/PS3, because I'm not going to pay through the nose just to game :mad:. Actually, if they had a keyboard/mouse for the 360, I might pick one up right now ;)... I just can't play FPS games with a joystick.


I am switching to console gaming for everything other than RTSes. I am sick of paying high prices for a card that can't even run F.E.A.R. at 1600x1200 with everything on high and full AA and AF.
 
Anyone know if DX10 is going to incorporate some kind of physics API?

No. D3D10 is purely graphics.

As for the future of video cards, I think DX10 (as a whole, not just the D3D part) will be a big step forward, and I hope nVidia and ATi have the hardware to do it justice.

The D3D part IS the whole. The term "DX10" isn't even supposed to exist according to Microsoft. The only update will be Direct3D10. All of the other DirectX APIs have either been deprecated (e.g. Show, Play, Music) or don't need to be updated (e.g. Sound, Input).
 
Russ said:
Yes, there is a problem with lazy code writers in the game industry. They don't optimize things well, but if the GPU weren't being utilized fully, you'd think it wouldn't be the bottleneck in modern games. It very much is, so the game writers must be doing at least a somewhat decent job.

Isn't it more a matter of the memory on the video cards than the GPU itself?

The talk is that the next generation of video cards will need 512MB, with 2GB of system memory recommended for the next M$ OS, Vista, itself.

A big problem with writing programs, when it comes to memory: the more memory you have to work with, the sloppier the programming becomes. Notes, dead-end subroutine ideas, you name it, even stupid group pictures get left in the final product.

Then there is the time factor too. Time means money, so they can't spend the time squeezing programs into less memory or cleaning up their notes and dead-end subroutines, and so they don't. What's written for a 256MB video card most likely could be written to run in 128MB, but that would take time.

Maybe more efficient compilers that take memory conservation into account? Better project management, where a programmer is only allocated a set amount of memory? Better-trained programmers, who know there's more to programming than just writing code?
 
GFreeman9 said:
Yes, but as you said before, the capabilities of Ageia's new PPU will override GPU physics rendering.

I said it would be faster than dual-core CPUs. I don't know about overriding GPUs. It is all up to the game developers and which route they go with. For physics, performance goes like this, from slowest to fastest: CPU < GPU < dedicated PPU.

Using the GPU would still be many times faster than what CPUs can do today.
 
I guess this is naive, but Ageia always struck me as: "Look, if we can find some kind of deficiency, something games won't even support for a few years, we can invent a product to correct it, PR the hell out of it, push game makers to address it, and then get people to buy our product because they'll feel like they're lacking something in their lives."
Or something.
 