Nvidia GT300 - Some info

Niceone

http://translate.google.com/transla...ware-infos.com/news.php?news=2663&sl=de&tl=en

http://www.hardware-infos.com/news.php?news=2663

----
-Arrives Q4/2009 to compete against the RV870
-Will be made on a 40nm process
-Has DirectX 11 support
-CUDA 3.0 support
-Drops the conventional SIMD (Single Instruction, Multiple Data) processing units in favour of MIMD (Multiple Instruction, Multiple Data) units, so the cluster structure itself is dynamic
-Would be a new architecture, like the GeForce 6 chips or G80 were
----
 
Would be a new architecture, like the GeForce 6 chips or G80 were.
Don't nVidia/the sites claim this for every single new video card while it's in the rumor stage? LOL
 
Good to know it will support DX11. I wonder what the memory bus will be; hopefully it'll be 512-bit with GDDR5.
 
Don't nVidia/the sites claim this for every single new video card while it's in the rumor stage? LOL
Nah, I think this time they'll get it right with the GT300. Nvidia has taken a lot of losses on margins, and I know they're going to want to fix that with the GT300. 40nm and DX11 should already be enough to draw in a lot of consumers. I hope it's going to be a tough battle between Nvidia and ATI so we can see some nice price drops again.

By the time it's out, we'll be able to do Win7 + DX11 + i7 + GT300... Imagine that.
 
I think it will be interesting to see the GT300 in action. Perhaps this is the new architecture they were talking about when the G100 was rumoured.

Nvidia have traditionally been good with multithread support and this looks like a good step in the right direction. :)

ATI's 4000 series was one of the best things that happened to us consumers, bringing prices way down. Let's hope the competition is just as stiff when the GT300 arrives, whichever card we end up buying! :D
 
Good to know it will support DX11. I wonder what the memory bus will be; hopefully it'll be 512-bit with GDDR5.
GDDR5 + 512-bit should still be huge overkill. With 256-bit + GDDR5 you can already get 224GB/s of bandwidth. That's 95% more than the HD4870 X2's and 60% more than the GTX 280's. That should be enough.
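A quick sanity check on those numbers (a back-of-envelope sketch; it assumes 7Gbps GDDR5 on the hypothetical 256-bit card, and uses the published HD4870 X2 per-GPU and GTX 280 figures):

```python
# Peak bandwidth = (bus width in bytes) x (effective data rate).
# Assumes 7 Gbps GDDR5 on the hypothetical 256-bit card; the HD4870 X2
# (per GPU) and GTX 280 numbers are their published specs.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

gt300_guess = bandwidth_gbs(256, 7.0)    # 224.0 GB/s
hd4870      = bandwidth_gbs(256, 3.6)    # 115.2 GB/s per GPU on the X2
gtx280      = bandwidth_gbs(512, 2.214)  # ~141.7 GB/s

print(f"256-bit + 7Gbps GDDR5: {gt300_guess:.1f} GB/s")
print(f"vs HD4870 X2 (per GPU): +{(gt300_guess / hd4870 - 1) * 100:.0f}%")  # ~94%
print(f"vs GTX 280:             +{(gt300_guess / gtx280 - 1) * 100:.0f}%")  # ~58%
```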
 
That's using the most expensive GDDR5. I think most people expect at least a doubling of performance at 40nm, and a 256-bit bus isn't enough to provide a doubling of bandwidth too, should they need it. We can probably look forward to 200GB/s at most on a 256-bit bus next generation.
 
CUDA 3.0? With 2.1 still in beta they'd really have to rush it out the door, even with the short development cycles on CUDA :)

I'm really interested in the MIMD architecture, though. It'd be the first time a consumer product has used it, and I'd love to hear nVidia's thoughts on why they went with it, as well as what it'd mean for CUDA/PhysX.

Colour me interested :)
 
That's using the most expensive GDDR5. I think most people expect at least a doubling of performance at 40nm, and a 256-bit bus isn't enough to provide a doubling of bandwidth too, should they need it. We can probably look forward to 200GB/s at most on a 256-bit bus next generation.
Should still be cheaper than its alternative: a 512-bit bus and a huge chip.
 
I think the PowerVR SGX may already be MIMD. But yeah, if Nvidia pulls it off on a large scale, that would be impressive. Though they may end up in the same boat they're in now if AMD continues to pack in as many ALUs as they can and forgo additional flexibility. CUDA apps would love MIMD, but most game shaders aren't branchy enough to take advantage of a MIMD architecture just yet.
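To illustrate the "branchy" point, here's a deliberately simplified model (not any real GPU's scheduler): a SIMD group has to step through every branch path any of its lanes takes, while the other lanes sit idle, whereas independent MIMD units each just run their own path.

```python
# Toy model of branch divergence. Hypothetical costs, not real hardware numbers.

def simd_cycles(lane_paths, path_cost):
    # A SIMD group executes every distinct path taken by any lane, back to back.
    return sum(path_cost[p] for p in set(lane_paths))

def mimd_cycles(lane_paths, path_cost):
    # Independent units each run their own path; the group finishes with the slowest.
    return max(path_cost[p] for p in lane_paths)

path_cost = {"A": 10, "B": 20, "C": 30, "D": 40}  # made-up per-path shader costs
lanes = ["A", "B", "C", "D"] * 8                  # a 32-wide group, 4-way divergent

print("SIMD group:", simd_cycles(lanes, path_cost), "cycles")  # 100
print("MIMD units:", mimd_cycles(lanes, path_cost), "cycles")  # 40
```

With no divergence the two come out identical, which is exactly the point about typical game shaders today.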
 
Should still be cheaper than its alternative: a 512-bit bus and a huge chip.

Nobody has any idea what a chip costs vs. expensive GDDR5. However, if the chip is going to be large anyway, then you might as well go for 512-bit.

A 512-bit bus puts performance in the hands of the chip designer; a 256-bit bus makes them dependent on the memory suppliers for pricing and availability. The unavailability of GDDR5 was one factor in the delay of the 1GB 4870 cards.

With a 512-bit bus you can tap the large supply of 4GHz GDDR5 out there and put up big numbers. With a smaller bus you've got to shell out for 6GHz+ chips, and who knows what sort of volume or pricing those will have. There's no way to claim that expensive GDDR5 is cheaper than a 512-bit bus without actual pricing info or architectural details.
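For scale, here's a rough comparison of the two options, reading the quoted "GHz" figures as effective data rates in Gbps (an assumption, since GDDR5 gets labelled both ways):

```python
# 512-bit + commodity GDDR5 vs 256-bit + fast GDDR5, peak bandwidth only.
# "4GHz"/"6GHz" are treated here as 4/6 Gbps effective data rates.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

wide_cheap    = bandwidth_gbs(512, 4.0)   # 256 GB/s
narrow_fast   = bandwidth_gbs(256, 6.0)   # 192 GB/s
rate_to_match = wide_cheap * 8 / 256      # Gbps a 256-bit bus needs to catch up

print(f"512-bit @ 4 Gbps: {wide_cheap:.0f} GB/s")
print(f"256-bit @ 6 Gbps: {narrow_fast:.0f} GB/s")
print(f"256-bit would need {rate_to_match:.0f} Gbps chips to match the 512-bit option")
```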
 
I would expect a 384-bit bus before a 512-bit one. Overkill is good at times, but when it comes to laying down PCB traces, less is generally more (higher speeds, less cross-talk, easier routing, more space for other traces/components, ad nauseam).
 
Yeah, 384-bit would make sense from a layout perspective. But then your baseline framebuffer becomes 1.5GB of GDDR5. Everything has a cost tradeoff, and only Nvidia knows which approach gives the most bang for the buck.
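The 1.5GB figure falls out of the bus width: each GDDR5 chip has a 32-bit interface, so the bus fixes the chip count, and the chip density fixes the step size. A quick sketch, assuming the usual 32-bit chips in 512Mbit and 1Gbit densities:

```python
# Framebuffer sizes implied by bus width, assuming one 32-bit GDDR5 chip per
# channel and 512Mbit (64MB) or 1Gbit (128MB) parts.

def framebuffer_mb(bus_width_bits, chip_mb):
    chips = bus_width_bits // 32   # one chip per 32-bit channel
    return chips * chip_mb

for bus in (256, 320, 384, 512):
    print(f"{bus}-bit: {framebuffer_mb(bus, 64)}MB with 512Mbit chips, "
          f"{framebuffer_mb(bus, 128)}MB with 1Gbit chips")
```

Which also lines up with the 640MB, 768MB and 1536MB configurations floated below.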
 
Nobody has any idea what a chip costs vs. expensive GDDR5. However, if the chip is going to be large anyway, then you might as well go for 512-bit.

A 512-bit bus puts performance in the hands of the chip designer; a 256-bit bus makes them dependent on the memory suppliers for pricing and availability. The unavailability of GDDR5 was one factor in the delay of the 1GB 4870 cards.

With a 512-bit bus you can tap the large supply of 4GHz GDDR5 out there and put up big numbers. With a smaller bus you've got to shell out for 6GHz+ chips, and who knows what sort of volume or pricing those will have. There's no way to claim that expensive GDDR5 is cheaper than a 512-bit bus without actual pricing info or architectural details.
Then you would have expensive GDDR5 memory with an expensive chip... and memory bandwidth overkill. The thing is, with a 512-bit bus it's not just the chip; the PCB will also be more complex.

Also, GDDR5 is going down in price and its availability is getting better. At the moment only AMD's enthusiast and performance cards use it, but the RV740 will also use GDDR5, and that is a mainstream chip.
---
Enthusiast - 384-bit - 1536MB
Performance H - 384-bit - 768MB or 1536MB
Performance L - 320-bit - 640MB (yield-saver card)
 
GDDR5 + 512-bit should still be huge overkill. With 256-bit + GDDR5 you can already get 224GB/s of bandwidth. That's 95% more than the HD4870 X2's and 60% more than the GTX 280's. That should be enough.

I don't think it's overkill.
A lot of memory bandwidth is needed to feed all the parallel streams simultaneously and let them process data in RAM.

It has already been demonstrated with CPUs that moving from quad core to 8 cores, or 8 to 16, brings a massive loss of potential processing power, simply because of how the PC's memory system is architected.
There was even a loss in performance demonstrated moving from 8 to 16 cores!
There are too many bottleneck points, and the memory bandwidth is not enough to keep all the processors occupied.

With that huge number of stream processors, a lot of memory bandwidth is required.
This could be mitigated if each pipe had its own cache and could perform complex operations within that cache workspace, since it could then work away without consuming memory bandwidth all the time.
That doesn't appear to be the architecture, though, so the memory takes the full hit.
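One way to put numbers on that argument is a simple roofline-style model: total useful throughput is capped either by compute or by how fast the shared memory can feed the units. All figures below are made up for illustration; none are GT300 specs:

```python
# Roofline-style sketch: adding units stops helping once shared memory
# bandwidth becomes the limit. Every number here is invented for illustration.

PEAK_BANDWIDTH_GBS = 224.0   # total memory bandwidth shared by all units
GFLOPS_PER_UNIT    = 20.0    # peak compute per cluster of stream processors
FLOPS_PER_BYTE     = 2.0     # useful work done per byte fetched from memory

def attainable_gflops(units):
    compute_limit = units * GFLOPS_PER_UNIT
    memory_limit  = PEAK_BANDWIDTH_GBS * FLOPS_PER_BYTE
    return min(compute_limit, memory_limit)

for units in (8, 16, 32, 64):
    print(f"{units:3d} units -> {attainable_gflops(units):6.1f} GFLOPS attainable")
```

Past the crossover point the extra units just idle, which is the bottleneck being described; per-pipe caches raise the flops-per-byte figure and push that ceiling up.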
 
Then you would have expensive GDDR5 memory with an expensive chip... and memory bandwidth overkill. The thing is, with a 512-bit bus it's not just the chip; the PCB will also be more complex.

Do you know the price and availability difference between 4GHz and 6GHz GDDR5? Until you do, you can't claim to know which option would be more expensive for a given amount of bandwidth.
 
Do you know the price and availability difference between 4GHz and 6GHz GDDR5? Until you do, you can't claim to know which option would be more expensive for a given amount of bandwidth.
It's hard to predict what the situation will be in Q4/2009... there are still quite a few months before then. A 512-bit bus would still be expensive at that point because of the more complex PCB and the bigger GPU.

Remember that the current GDDR5 chips were introduced in November 2007, and it took 7 months before the first commercial product used them. The 7GHz chips were introduced in November 2008, and there are about 10-11 months before this GT300 would be out. The next generation of GDDR5 memory should also arrive pretty soon after that... which will double the speed ;)

Hynix will start volume production in one or two months, which suggests availability shouldn't be a problem in Q4/2009.
 
I'm a little sad about the Q4/09, because I thought nvidia's new cards were coming out in a couple of months. Then again, I didn't expect a DX11 card so soon.
 
I'm a little sad about the Q4/09, because I thought nvidia's new cards were coming out in a couple of months. Then again, I didn't expect a DX11 card so soon.

Yeah, I had expected it a bit sooner too, but I guess that NVidia isn't done yet with the GTX 200 series :)

Also, since DX 11 will be available on Windows 7 only, I expect the same kind of rapid uptake as with 10, i.e. sluggish as a glacier :) DX 9 will keep a foothold for a bit longer, I suspect.
 
I thought DX 11 was going to be available on both Vista and Windows 7?

Just checked Wikipedia, and you're right. That'll teach me not to take whatever the tech blogs write as the truth :)
 
Wow... I wasn't aware that MS was abandoning Vista so quickly! It only has one DX revision to its name!
 
I'm a little sad about the Q4/09, because I thought nvidia's new cards were coming out in a couple of months. Then again, I didn't expect a DX11 card so soon.
Well, there's the GT212 on its way: 384 SPs, DX10.1, and only about half the GT200's size.
 
Wow... I wasn't aware that MS was abandoning Vista so quickly! It only has one DX revision to its name!
DX 10 & 11 will both be available on Vista and Win7 :)

Well, there's the GT212 on its way: 384 SPs, DX10.1, and only about half the GT200's size.

There are rumours that the GT212 will be canceled, but if the GT300 won't be out until the end of this year, that would seem odd.

*waits for official announcements*

:)
 
There are rumours that the GT212 will be canceled, but if the GT300 won't be out until the end of this year, that would seem odd.

*waits for official announcements*

:)
Well, it was Charlie from the Inquirer who wrote that the GT212 would be cancelled... so that rumour can be forgotten.
 
Well, it was Charlie from the Inquirer who wrote that the GT212 would be cancelled... so that rumour can be forgotten.

Charlie also wrote that the dual-GTX 200 card would never see the light of day... and we know how that stacked up against reality.
 
Wow, I had expected the GTX 380 to come out around June, one year after the GTX 280. Why would they wait so long?
 
Wow, I had expected the GTX 380 to come out around June, one year after the GTX 280. Why would they wait so long?

In this economy I don't think there are that many people eager to snatch up $$$ GTX 380s. Might as well milk as much as they can out of the G2xx line and release the new flagship at top price once people feel easier about spending, which is Q4/Christmas.
 
Would be a new architecture

This would not be surprising!
The G80 architecture has been around for long enough, and NVIDIA needs a new approach if they want to remain competitive.

I would not bet on the 40nm process, though. Usually, NVIDIA likes to test a new process on a current architecture. In fact, NVIDIA has two approaches to chip design:
1) new process + old architecture
2) old process + new architecture.
I think NVIDIA may re-spin some existing GPU on the 40nm process, but it may not happen before next summer.
40nm and Q4 2009 seems a bit too aggressive. But there is nothing wrong with dreaming... ;)
 
Warhead:

-4870 X2 at stock, 8.561.3
-All settings maxed, 2xAA
-1920x1200

24+ FPS average

With an Ultra High config, my GTX 280 SC (702/1526/1227) struggles to even run 4xAA at 1680x1050, even though it should be able to handle that no problem.
 
With an Ultra High config, my GTX 280 SC (702/1526/1227) struggles to even run 4xAA at 1680x1050, even though it should be able to handle that no problem.

Well, the latest RivaTuner charts both GPUs' usage, and they're both pegged at 99% when I run Warhead. So the performance comes from the X2's scaling doing its job. And Kyle bumped the AA to 4x. In those cases a single 280 doesn't really stand a chance.

 
CUDA 3.0?

2.1 is hardly ready... how are they going to pull that off?

Although I'm really loving this CUDA stuff.
 
CUDA 3.0?

2.1 is hardly ready... how are they going to pull that off?

Although I'm really loving this CUDA stuff.

Well, if they were to go straight from 2.1 to 3.0 it'd be possible, or perhaps they'll squeeze a quick 2.2 release in there.

MIMD would rock, though :D
 
OK... so that article I used in the opening post turned out to be full of... it. For example, the G80 is already MIMD.
 