AMD 7000 series specs leaked

If you read his sig,
Yeah, I can't see sigs.

I wouldn't want less than 3GB if I were running three monitors also.
It's not so much the number of monitors as the resolution that matters. 3 GB of VRAM won't do you much good since no game out there really needs it, unless you're running at some insane resolution that leaves you GPU-limited anyway.

I'm pretty confident XDR2 is confirmed.
It's just the same rumors getting reposted; that isn't confirmation.
 
The source you posted has Jan-Feb for the 7900 series and Mar-Apr for the 7800 series. Again, it doesn't make sense to release the cheaper / mass market cards first if 28nm yields are low.

Makes perfect sense to me...
Sell what you can actually make, then when yields improve enough to make the bigger cards, start selling those...
 
If it's a 384-bit bus, it might be worth upgrading from older-gen CrossFire setups. The drivers lately have disappointed me, though. It's discouraging to see such slow updates and fixes for games.
 
If anything, this might lend credence towards XDR2:

AMD is a patent licensee of Rambus technologies according to SEC filings that I've just read. AMD along with Samsung, Fujitsu, and Panasonic, accounted for 10% or more of Rambus' revenue in 2010. Samsung is a partner of Rambus in building memory and IC components.

Since I can't find anything about products using the new memory, AMD may be the first company to use XDR2 if the rumors are true.

Now, let's say the rumors about XDR2 are true, along with the rumors stating that the GPU will have interfaces to both XDR2 and GDDR5. I see it going two ways:

March to April 2012 (if going by the last S|A rumors)
Screenshot-2011-11-27_16.09.28.png

If AMD goes with XDR2, I'd see that as the more likely candidate: AMD was the first to use GDDR5 in its GPUs, before Nvidia did, so I'm thinking AMD wants to be first with XDR2 too. XDR2 also uses approximately 30% less power than GDDR5, so we could see the 7900 series using less power than the 6900 series. And chips are available in sizes from 512 Mbit to 4 Gbit, with 1 Gbit for the initial chips.

If AMD goes with GDDR5, it'll be easier from a production standpoint: the GDDR5 chips are already there, and 2 Gbit chips are available. AMD can use 8 chips (256-bit) or 12 chips (384-bit) for 2048 MB or 3072 MB of memory, respectively. It'll also be more cost-effective for consumers, since it'd be cheaper than what XDR2 will probably cost given that it's new technology.
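The chip-count math is easy to sanity-check. A quick Python sketch, assuming the standard arrangement of one GDDR5 chip per 32-bit channel (the function name is just mine):

```python
def gddr5_capacity_mb(bus_width_bits, chip_density_gbit):
    """Total VRAM in MB, assuming one GDDR5 chip per 32-bit channel."""
    chips = bus_width_bits // 32            # GDDR5 chips have a 32-bit interface
    return chips * chip_density_gbit * 1024 // 8  # Gbit per chip -> MB total

print(gddr5_capacity_mb(256, 2))  # 8 chips  x 2 Gbit = 2048 MB
print(gddr5_capacity_mb(384, 2))  # 12 chips x 2 Gbit = 3072 MB
```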

Either way, AMD has options: the XDR2 memory controller is backwards compatible with GDDR5 and DDR3, so AMD just has to decide which is more cost-effective, or more efficient and higher performing, since the controller can use either type of memory. If I were deciding, I'd go with XDR2 on the next cards, even though cost effectiveness goes out the window.

And if we believe anything S|A puts out, the 384-bit memory bus means the 7950 and 7970 come with 3 GB of memory each, with 6 GB for the dual-chip 7990. :eek:

Everything else will probably look like the following:

Jan. to Feb. 2012
40nm VLIW4 6970 -> 28nm VLIW4 7870 w/ 2048 MB GDDR5
40nm VLIW4 6950 -> 28nm VLIW4 7850 w/ 1024 or 2048 MB GDDR5

Future (?)

[28nm VLIW4 7790 (?) ... if going by AMD's previous models.]

40nm VLIW4 6870 -> 28nm VLIW4 7770 w/ 2048 MB GDDR5
40nm VLIW4 6850 -> 28nm VLIW4 7750 w/ 2048 MB GDDR5

40nm VLIW5 6770 (5770) & 6750 (5750) -> Deprecated

28nm VLIW4 7670 & 7650 w/ 1024 MB GDDR5
 
AMD has bought licenses to Rambus IP for all sorts of stuff; Intel and others do the same thing. It doesn't mean they're gonna release a product that uses it.

Hell, AMD bought patents for Z-RAM, which was supposed to be badass, years ago and did nothing with it.
 
AMD has bought licenses to Rambus IP for all sorts of stuff; Intel and others do the same thing. It doesn't mean they're gonna release a product that uses it.

Hell, AMD bought patents for Z-RAM, which was supposed to be badass, years ago and did nothing with it.

You have a good point.
 
Generally the GPU improves faster than the VRAM, and some people have been suggesting for a while that new tech is needed since eventually the GPU will end up bottlenecked. The thing is, NV and AMD keep finding ways to squeeze more effective performance out of the same bandwidth through various on-die caches and other software/hardware tricks, so this hasn't been a problem. On top of that, the current tech (GDDR5) still has significant headroom available (it should scale to 7 GHz IIRC), and going to a 384-bit bus isn't too difficult or expensive with it either.

XDR2 or similar RAM probably isn't needed for another generation or two at the very least. If and when they do switch to something else, it probably won't be from Rambus, even if whatever they eventually switch to relies on some Rambus IP. My WAG is they'll try to hold off switching until it gets more cost-effective to just put all the memory on a single package and tie it together with a silicon interposer, like what S|A posted a while back. Note the HD8000 tag; B3D speculation was that that particular sample is likely intended for a future console, though.
 
According to Wiki, Hynix has the fastest 2 Gbit chips at 7 GHz, which works out to 224-ish GB/sec or higher on a 256-bit bus. That's very comparable and competitive with XDR2, and another option AMD could consider for the next GPU. At a 256-bit or 384-bit bus width, it's going to have very high bandwidth either way, especially if the memory is clocked that high.
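Peak bandwidth is just bus width (in bytes) times the effective per-pin data rate, so these figures are easy to check. A quick Python sketch (the 5.5 Gbps rate for the 384-bit case is my assumption, backed out of the rumored 264 GB/s number):

```python
def gddr5_bandwidth_gbs(bus_width_bits, effective_gbps):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_gbps

print(gddr5_bandwidth_gbs(256, 7.0))  # 224.0 GB/s -- the 7 GHz Hynix figure
print(gddr5_bandwidth_gbs(384, 5.5))  # 264.0 GB/s -- the rumored 384-bit card
```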

Rambus already demonstrated 7.2 GHz XDR2 back in 2009, and they expect the new memory and controller to reach a terabyte per second of bandwidth in the future.

As for that S|A link, some places think that's the new GPU for the next Xbox console, since it uses a new approach to GPU memory called an interposer, which should allow for higher memory bandwidth. Other places think it's the new Trinity CPU, or a future successor in which the on-die GPU gets its own dedicated VRAM. Either way, it looks like AMD may have something impressive to demonstrate in the future.
 
Looks like we should hear some news Monday the 5th. Possible release date in January.

Plus, it looks like GDDR5. I can't say I like Rambus.
 
I can't wait till it comes out so I can pick up another XFX 6950 2GB card for cheap, i.e. sub-$200.
 
Good, 3 GB and a 384-bit memory bus is an epic win, though I'm a bit disappointed that ATI isn't going with XDR2. Oh well.
 
Wut? You only need more than 1 GB if you go over 1080p; 2 GB only becomes a factor in multi-monitor gaming, and even then it's usually plenty.

I use around ~1400 MB in Skyrim at maximum settings. Mods will only increase that. 2 GB should really be the minimum anyone gets at 1080p or higher now, IMO.
 
The game will use more if you have it, but it doesn't need it unless you're at resolutions higher than 1080p.
 
Good, 3 GB and a 384-bit memory bus is an epic win, though I'm a bit disappointed that ATI isn't going with XDR2. Oh well.

I think it was too good to be true that Rambus would ever get it released. It would have been badass to see, but I didn't have much hope for it given Rambus's track record.
 
The game will use more if you have it, but it doesn't need it unless you're at resolutions higher than 1080p.

The first part is true. The second part is absolutely false. Video memory has at least two uses: frame buffer (as you indicated) and texture cache. As texture resolution and the number of textures in each map increase, more and more memory is needed to limit on-the-fly texture loads. Precaching in video memory prevents the jerkiness you get while your card waits on either system memory or the hard disk to transfer textures that aren't already loaded. This is one reason a dual 3870X2 system (the 512 MB per GPU version) has issues with newer games, even at low resolutions.
 
The source you posted has Jan-Feb for the 7900 series and Mar-Apr for the 7800 series. Again, it doesn't make sense to release the cheaper / mass market cards first if 28nm yields are low.

It does if those batches yield slightly imperfect chips suitable for the cut-rate cards.
 
BF3 can consume more VRAM when it is available; however, this may not necessarily translate to more FPS.
 
If BF3 uses more on-card memory, then it spends less time swapping textures, or will be able to raise quality options if the program automatically detects and uses excess video memory. Here is an example of how memory works here:

2 or more cards in SLI/Crossfire

The frame buffer is effectively increased, because each card buffers fewer frames (or less of each frame).

Texture memory is NOT increased, as ALL textures are sent to ALL cards. Yes, duplication is happening. That's the way it's done currently, and for the near future at least.

More cards means more processing power and more frame storage (which goes hand in hand with the extra processing), but it doesn't help quad 512 MB cards do anything about having only 256 MB for textures (duplicated across all four). So Skyrim, etc., cannot have massive detailed textures in this case.
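To put that duplication point in code form, here's a toy Python sketch (function name and numbers are just illustrative, not from any real tool):

```python
def crossfire_memory_mb(per_card_mb, num_cards):
    """Illustrative only: adding cards multiplies installed VRAM,
    but NOT the usable texture pool, since every texture is copied
    to every card in SLI/CrossFire."""
    return {
        "total_installed": per_card_mb * num_cards,
        "usable_texture_pool": per_card_mb,  # duplicated, not additive
    }

# Quad 512 MB cards: 2048 MB installed, but still only 512 MB of
# unique texture space (less whatever the frame buffer reserves).
print(crossfire_memory_mb(512, 4))
```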
 
If anything, this might lend credence towards XDR2:

AMD is a patent licensee of Rambus technologies according to SEC filings that I've just read. AMD along with Samsung, Fujitsu, and Panasonic, accounted for 10% or more of Rambus' revenue in 2010. Samsung is a partner of Rambus in building memory and IC components.

Since I can't find anything about products using the new memory, AMD may be the first company to use XDR2 if the rumors are true.

Now, let's say the rumors about XDR2 are true, along with the rumors stating that the GPU will have interfaces to both XDR2 and GDDR5. I see it going two ways:

March to April 2012 (if going by the last S|A rumors)
Screenshot-2011-11-27_16.09.28.png

If AMD goes with XDR2, I'd see that as the more likely candidate: AMD was the first to use GDDR5 in its GPUs, before Nvidia did, so I'm thinking AMD wants to be first with XDR2 too. XDR2 also uses approximately 30% less power than GDDR5, so we could see the 7900 series using less power than the 6900 series. And chips are available in sizes from 512 Mbit to 4 Gbit, with 1 Gbit for the initial chips.

If AMD goes with GDDR5, it'll be easier from a production standpoint: the GDDR5 chips are already there, and 2 Gbit chips are available. AMD can use 8 chips (256-bit) or 12 chips (384-bit) for 2048 MB or 3072 MB of memory, respectively. It'll also be more cost-effective for consumers, since it'd be cheaper than what XDR2 will probably cost given that it's new technology.

Either way, AMD has options: the XDR2 memory controller is backwards compatible with GDDR5 and DDR3, so AMD just has to decide which is more cost-effective, or more efficient and higher performing, since the controller can use either type of memory. If I were deciding, I'd go with XDR2 on the next cards, even though cost effectiveness goes out the window.

And if we believe anything S|A puts out, the 384-bit memory bus means the 7950 and 7970 come with 3 GB of memory each, with 6 GB for the dual-chip 7990. :eek:

Everything else will probably look like the following:

Jan. to Feb. 2012
40nm VLIW4 6970 -> 28nm VLIW4 7870 w/ 2048 MB GDDR5
40nm VLIW4 6950 -> 28nm VLIW4 7850 w/ 1024 or 2048 MB GDDR5

Future (?)

[28nm VLIW4 7790 (?) ... if going by AMD's previous models.]

40nm VLIW4 6870 -> 28nm VLIW4 7770 w/ 2048 MB GDDR5
40nm VLIW4 6850 -> 28nm VLIW4 7750 w/ 2048 MB GDDR5

40nm VLIW5 6770 (5770) & 6750 (5750) -> Deprecated

28nm VLIW4 7670 & 7650 w/ 1024 MB GDDR5

AMD and Samsung have partnered many times before, especially on GPU memory (Samsung has been the backup supplier, with Hynix, also a Rambus licensee, as the primary, since the HD3000 series).

HD6750/6770 going away is sensible, as the HD6750/70 are rejiggered HD5750/70s; however, what happens to the HD6790 (a rejiggered/corrected HD5830)?

HD7650/7670: so much for the HD6670 (all versions); the different RAM loadouts (GDDR3 vs. GDDR5) have only created confusion. However, what's the difference between the two?

HD6850: officially a conundrum, as this is now the midrange sweet spot among aux-powered AMD GPUs, and I notice that there is no HD77xx (or HD78xx either) so far.
 
The second part is absolutely false.
There are plenty of folks running the game, and many, many other new games, just fine at 1080p with "just" 1 GB of VRAM. Quick-and-dirty Google searches will show you this, so I don't even know why you're bringing it up. Go above 1080p and yeah, you've got problems, but for most people this isn't an issue at all.

If BF3 uses more memory on- card, then it is experiencing less time swapping textures, or is going to be able to upgrade quality options, if automatic detection and use of excess video memory is used by the program.
Oooooh, OK, you're gonna be pedantic about it and actually ignore what people are experiencing in game (read: no noticeable change in performance at all). Nvm then.
 
6790 is a derivative of the 6850/6870, not a rebadged 5830.

I was referring more to the HD5830 being a derivative of the HD5850 (which somehow went wrong, as the HD5770 would wax it), which is why I added *corrected*: the HD6790 is what the HD5830 should have been performance-wise.

I'll likely be sitting out the HD7K series once I move to an HD6850 (no compelling reason to go further, as HD6850 performance teamed with an i5-K is plenty on a 23" single display).
 
Oooooh, OK, you're gonna be pedantic about it and actually ignore what people are experiencing in game (read: no noticeable change in performance at all). Nvm then.

Preferring engineering facts to anecdotes isn't being pedantic.
 
Haven't we had enough "leaks" over the years with different GPUs (Cayman having 1920 SPs, anyone?), and with this one, to accept that the only legit specs are the ones released by AMD on launch day?

That's just my take, if you think these "leaks" are credible, I'm open to your ideas.

But that being said, 264 GB/s of memory bandwidth and 2048 SPs would be 50% and 33% gains in memory and processing respectively. Not too shabby.
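Those percentages check out against the 6970's actual figures (1536 SPs, 176 GB/s). A quick Python check:

```python
def pct_gain(new, old):
    """Percentage increase of new over old, rounded to the nearest whole percent."""
    return round((new / old - 1) * 100)

# Rumored HD 7970 vs. shipping HD 6970
print(pct_gain(264, 176))    # 50 -> memory bandwidth (264 vs 176 GB/s)
print(pct_gain(2048, 1536))  # 33 -> stream processors (2048 vs 1536)
```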
 
Yeah, I remember the 1920 Cayman craze, lol.
Leaked benchies and speculation are what keep us enthusiasts alive, though.
 
Haven't we had enough "leaks" over the years with different GPUs (Cayman having 1920 SPs, anyone?), and with this one, to accept that the only legit specs are the ones released by AMD on launch day?

That's just my take, if you think these "leaks" are credible, I'm open to your ideas.

But that being said, 264 GB/s of memory bandwidth and 2048 SPs would be 50% and 33% gains in memory and processing respectively. Not too shabby.

Very shabby if you consider that a 70% increase in transistors only gains you a 33% increase in stream processors, TMUs, ROPs, and so on.
That is, if these slides and numbers come even close to the truth in the first place, which is very doubtful.

I'm not ripping into you, XacTactX, just saying we all need to take a step back and not take things as true just because they're posted on the net. I mean, the net has never been wrong before, right? ;)
 
Very shabby if you consider that a 70% increase in transistors only gains you a 33% increase in stream processors, TMUs, ROPs, and so on.
That is, if these slides and numbers come even close to the truth in the first place, which is very doubtful.

I'm not ripping into you, XacTactX, just saying we all need to take a step back and not take things as true just because they're posted on the net. I mean, the net has never been wrong before, right? ;)

You do realize this is a new architecture, right? These are not the same stream processors; it's a brand-new GCN architecture. You can read about it on AnandTech if you want. So until all the details about the chip come out, we can't really talk about transistor gain vs. stream processor gain. There's a lot more new in it than just an increased number of stream processors, and even those aren't the same.
 
Those results don't even look bad, lol. I'd be fine with it being 50% faster than the 6970.
 
Those results don't even look bad, lol. I'd be fine with it being 50% faster than the 6970.

I want to be excited about this card, even if it's only 50% faster than a 6970. However, the lack of software out there that needs it (and there won't be any for a while), plus the $500-550 price, will likely compel me to sit this one out. I just can't justify spending money on this card to play Skyrim and BF3 when my 6970s already push these games at high FPS with the settings floored.
 
I want to be excited about this card, even if it's only 50% faster than a 6970. However, the lack of software out there that needs it (and there won't be any for a while), plus the $500-550 price, will likely compel me to sit this one out. I just can't justify spending money on this card to play Skyrim and BF3 when my 6970s already push these games at high FPS with the settings floored.

how very un-[H] of you
 