DDR3 vs. GDDR5

Please tell me exactly how XBL was better than PSN. I've used both, and many games these days are P2P anyway, which makes them no different since they depend on the player hosting the match. Dedicated-server games aren't exclusive to Microsoft either; it depends on the game. BF3, for example, offers that option on both consoles.

People who think XBL Gold is so good that it deserves its price over free PSN are delusional. I personally HATE that PS4 MP will be behind a paywall, but at least PS Plus more than makes up for it for me (not for some people, though, who have no need for it; that's why I think putting MP behind a paywall is still bullshit on any device).

The PS3 had hardware limitations that made the OS sluggish and kept out features such as cross-game chat; that will change with the PS4 because it was very clearly planned ahead in this respect.
 
With all the bad press surrounding the One, why on Earth would Microsoft be withholding positive information (that their GPU is better than expected)? I also have not yet seen a statement from Microsoft denying that the information out there is correct. I would expect Microsoft to want to quickly dismiss rumors about their console, which is the behavior they have exhibited recently.

The reason they have not yet admitted to the GPU specs is because they are the real specs and they don't want to confirm more reasons to hate on the console.

Except Microsoft let people speculate about their console for over a month before they described how the DRM would ACTUALLY work, and then a few weeks after that they cancelled those policies altogether. And before that, people spent months speculating about what they meant by "Always On." They let people speculate without answering anything, despite there being a lot of BS to shoot down.

You could also speculate that a lot of developers getting the newest dev kits are seeing a performance increase over what they were expecting, because the GPU that was speculated wasn't correct.

Speculation is a double-edged sword. Technically all of it is correct, and simultaneously, none of it is.
 
Also, if GDDR5 is so amazing, why don't normal PCs use it? (Yes, I know AMD is coming out with their unified platform.)
GDDR5 costs more and produces more heat. In most PC scenarios, DDR3 provides enough bandwidth for your OS and applications. Sure, benchmark numbers would be better, but if the applications don't need the bandwidth, it won't be a benefit. So using GDDR5 in a PC for normal operations is complete overkill.

Graphics cards, on the other hand, need the higher bandwidth that GDDR5 provides. Again, look at DDR3 vs. GDDR5 benchmarks: there's a 5-20% improvement simply from going to GDDR5 on a GPU.

Going back to memory speed on a PC, the benefit of going from DDR3-1333 to DDR3-2133 is, for the most part, negligible outside of synthetic benchmarks. Now, if the system is using integrated graphics (which shares system memory), there is a performance boost from faster memory, so going from DDR3 to GDDR5 would improve things further. Manufacturers don't provide GDDR5 memory for their integrated graphics because people who need the performance are going to use discrete cards that have GDDR5 on board.
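To put rough numbers on that, here's the simple peak-bandwidth math (theoretical peaks only; the last line assumes the PS4-style setup of GDDR5 at 5500 MT/s on a 256-bit bus):

```python
# Theoretical peak bandwidth = transfer rate (MT/s) x bus width in bytes.
def peak_gb_s(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits / 8) / 1000  # MB/s -> GB/s

print(f"dual-channel DDR3-1333 (128-bit): {peak_gb_s(1333, 128):6.1f} GB/s")
print(f"dual-channel DDR3-2133 (128-bit): {peak_gb_s(2133, 128):6.1f} GB/s")
print(f"GDDR5 @ 5500 MT/s (256-bit):      {peak_gb_s(5500, 256):6.1f} GB/s")
```

That last number is where the 176 GB/s figure comes from, and it's why the iGPU case is the one place faster system memory actually shows up in game benchmarks.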

Intel's latest Haswell line does feature a high-end model that includes an eDRAM buffer (128MB, I think) to help improve integrated GPU performance. This buffer also acts as an L4 cache when the integrated GPU is not being used. It's similar in spirit to the embedded memory Microsoft is using in the One. Of course this method is not optimal for gaming, but Intel isn't going to force the large cost of GDDR memory on everyone for something few people will use.

So now back to the PS4. Normal OS functions and many gaming functions probably won't see a benefit from the GDDR5 memory; the graphics rendering, however, will. Sony could have gone with 4GB of DDR3 for the OS and applications and another 4GB of GDDR5 for the GPU, but that would complicate things on the development front. Going with a unified memory architecture where both the CPU and GPU can address the entire 8GB gives developers a ton of flexibility in what they can do. The One requires developers to be meticulously aware of how much memory is cached in the ESRAM and to code the game to constantly swap data between the ESRAM and the slower DDR3. Couple this with a slower GPU and you've created an inferior product that will show its age after a couple of years.
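To give a feel for that swap cost, here's a rough back-of-the-envelope sketch. The 32MB ESRAM size and ~68 GB/s DDR3 bandwidth are the widely reported figures (assumptions on my part, not official), and real engines obviously overlap these copies with other work:

```python
# Rough cost of refilling the One's small on-chip buffer from main memory.
# Assumed figures: 32 MB ESRAM, ~68 GB/s DDR3, 60 fps frame budget.
ESRAM_GB  = 32 / 1024        # 32 MB expressed in GB
DDR3_GB_S = 68.0             # main memory bandwidth
FRAME_MS  = 1000 / 60        # ~16.7 ms per frame at 60 fps

refill_ms = ESRAM_GB / DDR3_GB_S * 1000
print(f"one full 32 MB refill: {refill_ms:.2f} ms "
      f"(~{refill_ms / FRAME_MS * 100:.1f}% of a 60 fps frame, per refill)")
```

Each refill is small on its own, but it has to be scheduled by hand, every frame, for every buffer that doesn't fit; on the PS4 that bookkeeping simply doesn't exist.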
 
Honestly, the Unified Memory space that both consoles will be using with PC-based architecture will probably be responsible for more efficiency increases than just the type of RAM being used. That alone will cut down on the contents of the system RAM being copied over to the graphics RAM and should eliminate several steps overall.
 
Honestly, the Unified Memory space that both consoles will be using with PC-based architecture will probably be responsible for more efficiency increases than just the type of RAM being used. That alone will cut down on the contents of the system RAM being copied over to the graphics RAM and should eliminate several steps overall.

Care to explain how this is any different from the unified memory that has always been used in bargain-basement laptops, which sucks?
 
Care to explain how this is any different from the unified memory that has always been used in bargain-basement laptops, which sucks?
... because integrated graphics on laptops use normal DDR memory and have a bloated Windows API sitting on top of it.
 
Care to explain how this is any different from the unified memory that has always been used in bargain-basement laptops, which sucks?

I got a better idea, I'll explain how standard PC memory works instead.

Let's say you have 16 gigs of DDR3 system RAM and a GPU with 2 gigs of GDDR5 RAM. The RAM isn't shared at all, so anything the GPU requires has to be copied from system RAM into the dedicated GPU RAM. The RAM on the video card is basically a copy of data already sitting in system RAM, meaning memory is wasted between the two.

Shared memory (which you see on shitty laptops) means the RAM is partitioned off into system memory and GPU memory, so even THEN data is still copied from the system memory partition to the GPU memory partition, since each portion is only accessible for its designated use.

Unified memory means the entire block of RAM is accessible to both the CPU and GPU. There's no partitioning and no copying. The GPU and CPU can each take what they need and share memory without duplicating it and without wasting bandwidth. It's a lot more efficient.
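If it helps, here's a toy model of the three setups above. The sizes are made up and it only counts bytes, but it shows where the duplication and the bus traffic come from:

```python
# Toy model: how much memory ends up resident, and how much gets copied,
# for a hypothetical 1.5 GB pile of assets the GPU needs. Sizes are made up.
ASSET_GB = 1.5

def dedicated():
    # Discrete card: assets live in system RAM, then get copied over PCIe into VRAM.
    return ASSET_GB * 2, ASSET_GB      # (resident, copied)

def shared_partitioned():
    # Cheap-laptop "shared" memory: one physical pool split into CPU and GPU
    # partitions, so the copy between partitions still happens.
    return ASSET_GB * 2, ASSET_GB

def unified():
    # Unified memory: CPU and GPU address the same allocation, no copy at all.
    return ASSET_GB, 0.0

for name, scheme in [("dedicated", dedicated),
                     ("shared/partitioned", shared_partitioned),
                     ("unified", unified)]:
    resident, copied = scheme()
    print(f"{name:18s} resident: {resident:.1f} GB   copied: {copied:.1f} GB")
```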
 
... because integrated graphics on laptops use normal DDR memory and have about half the number of shaders the PS4 has. The chip used in the PS4 and the One is custom and includes a much larger GPU than you can normally find integrated. For example, the latest and greatest AMD laptop graphics you can get is the Radeon HD 7970M. The 7970M is slightly more powerful than the PS4, yet with the bloated Windows DX API and DDR3 memory, the performance is less than you're going to get on the PS4.

Way to completely miss the question.
 
So... does anyone even read the words around these articles? It clearly states that Microsoft never announced the GPU for the Xbox One. Neither has AMD. In fact, when I trace everything back to the article where people claim they "heard" the Xbox One's GPU, it was a single article about AMD putting the HD 7750 on the market, and the article author assuming it had to be the GPU for the "Durango" because of the price point.

I'd kinda like more concrete info before I shit on anything, although it would explain why Dead Rising 3 didn't look very impressive.

The specs for the Xbox were known the moment MS said how many teraflops it was capable of. They don't need to release hard specs; we know X teraflops maps to roughly X graphics card.
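For reference, the back-of-the-envelope math people use: peak single-precision FLOPS is roughly shader count x clock x 2 (one multiply-add per shader per clock). The shader counts and clocks below are the commonly reported figures, not something I can confirm officially:

```python
def tflops(shaders, clock_mhz):
    # 2 FLOPs per shader per clock (fused multiply-add)
    return shaders * clock_mhz * 1e6 * 2 / 1e12

print(f"PS4 (1152 shaders @ 800 MHz):     ~{tflops(1152, 800):.2f} TFLOPS")
print(f"Xbox One (768 shaders @ 800 MHz): ~{tflops(768, 800):.2f} TFLOPS")
print(f"Radeon HD 7850 (1024 @ 860 MHz):  ~{tflops(1024, 860):.2f} TFLOPS")
```

That's why a single teraflop number is enough to place each console next to a known desktop part.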
 
I got a better idea, I'll explain how standard PC memory works instead.

Let's say you have 16 gigs of DDR3 system RAM and a GPU with 2 gigs of GDDR5 RAM. The RAM isn't shared at all, so anything the GPU requires has to be copied from system RAM into the dedicated GPU RAM. The RAM on the video card is basically a copy of data already sitting in system RAM, meaning memory is wasted between the two.

Shared memory (which you see on shitty laptops) means the RAM is partitioned off into system memory and GPU memory, so even THEN data is still copied from the system memory partition to the GPU memory partition, since each portion is only accessible for its designated use.

Unified memory means the entire block of RAM is accessible to both the CPU and GPU. There's no partitioning and no copying. The GPU and CPU can each take what they need and share memory without duplicating it and without wasting bandwidth. It's a lot more efficient.

This ^
 
So how much of the RAM actually needs to be duplicated between GPU and CPU memory? I.e., you would not think that most textures, which take up most of the RAM, would need to be accessed by the CPU.

I guess I am having a hard time figuring out what is missing from the equation. If shared memory is the bee's knees, why the heck have they not done it a long time ago, especially with laptops and certainly with consoles, which have always had free rein over the hardware?
 
In a year or two it's not going to matter, as the next version of DX (and the next gen of graphics cards from Nvidia/AMD) is going to allow the GPU to access regular system memory and not just VRAM. It still won't be as efficient as the PS4 (due to limitations of the PCIe bus), but it will be a big improvement.
 
I guess I am having a hard time figuring out what is missing from the equation. If shared memory is the bee's knees, why the heck have they not done it a long time ago, especially with laptops and certainly with consoles, which have always had free rein over the hardware?

If I recall correctly,

The PS2 used shared memory, and the PS3 was intended to as well, but the Nvidia hardware was shoehorned in at the last minute when they realized the Cell was not good enough alone.

360 uses shared memory. Original Xbox doesn't, but it was quickly designed from PC components.

This is the direction the console market has been heading.
 
I'm glad MS isn't implementing unified memory for human brains... I really don't want to know what you did last night, I'd quickly run out of Magic Erasers as I compulsively scrub down the walls of my house every night in an attempt to feel like mr clean!!!!
 
If I recall correctly,

The PS2 used shared memory, and the PS3 was intended to as well, but the Nvidia hardware was shoehorned in at the last minute when they realized the Cell was not good enough alone.

360 uses shared memory. Original Xbox doesn't, but it was quickly designed from PC components.

This is the direction the console market has been heading.

Actually the idea was to use two Cell processors, but the yields were so horrible, and they were having such a tough time switching to .18 nm initially, that there simply wouldn't have been enough of them. So they switched gears, cranked out .25 nm chips (which were originally only meant for dev units) that were super expensive for Sony to make, and made it with only one Cell and the Nvidia hardware shoehorned in at the last minute.
 
OK, so consoles have been using shared memory for over a decade, so once again the question becomes: why don't PCs? And it doesn't appear it helped any of those consoles that much. Also, I still want to know whether the majority of textures are actually needed by the CPU. You could imagine a case where only a small amount of memory needs to be duplicated, and in most cases it's way faster to just keep the pools separate. I.e., sure, the memory bandwidth of GDDR5 is high, but is it higher than GDDR5 + DDR3 combined? Probably not. So ultimately it comes down to how much memory is copied. And it could just as easily be the usual console-maker hype to overshadow the fact that it's really just a cost savings not to need X + Y RAM, plus the complication of fitting both on a mobo that is going to be all soldered in most cases.

My guess is that in practice, for a system with limited RAM, a shared 8GB is more flexible and better than 4GB + 4GB, but I doubt it's better than 2GB + 8GB.
 
OK, so consoles have been using shared memory for over a decade, so once again the question becomes: why don't PCs? And it doesn't appear it helped any of those consoles that much. Also, I still want to know whether the majority of textures are actually needed by the CPU. You could imagine a case where only a small amount of memory needs to be duplicated, and in most cases it's way faster to just keep the pools separate. I.e., sure, the memory bandwidth of GDDR5 is high, but is it higher than GDDR5 + DDR3 combined? Probably not. So ultimately it comes down to how much memory is copied. And it could just as easily be the usual console-maker hype to overshadow the fact that it's really just a cost savings not to need X + Y RAM, plus the complication of fitting both on a mobo that is going to be all soldered in most cases.

My guess is that in practice, for a system with limited RAM, a shared 8GB is more flexible and better than 4GB + 4GB, but I doubt it's better than 2GB + 8GB.

PCs don't use true shared memory because the OSes are not truly designed for it.

That's it
 
PCs don't use true shared memory because the OSes are not truly designed for it.

That's it

The hardware would also have to be designed for it. You're looking at getting video card manufacturers and motherboard/chipset manufacturers to cooperate. And since your graphics memory lives off-card you need a ridiculously wide and fast bus to access it.

Then there's the cost of memory for a high-end system. Used to having 12 gigs of DDR3 and 4 of GDDR5? To get the same graphics performance in games you'd need at least 12 gigs of GDDR5.

Unified memory doesn't make sense if you want low-cost high-performance customizable systems. That's why it's only used in laptops with lower-end parts.
 
The hardware would also have to be designed for it. You're looking at getting video card manufacturers and motherboard/chipset manufacturers to cooperate. And since your graphics memory lives off-card you need a ridiculously wide and fast bus to access it.

Then there's the cost of memory for a high-end system. Used to having 12 gigs of DDR3 and 4 of GDDR5? To get the same graphics performance in games you'd need at least 12 gigs of GDDR5.

Unified memory doesn't make sense if you want low-cost high-performance customizable systems. That's why it's only used in laptops with lower-end parts.

The OS has to support it in order for it to work in a PC, and neither MS nor the 'nix guys have any support for a unified memory system, so it was never really pushed. The idea has been around since the PC was first created.
 
OK, so consoles have been using shared memory for over a decade, so once again the question becomes: why don't PCs? And it doesn't appear it helped any of those consoles that much. Also, I still want to know whether the majority of textures are actually needed by the CPU. You could imagine a case where only a small amount of memory needs to be duplicated, and in most cases it's way faster to just keep the pools separate. I.e., sure, the memory bandwidth of GDDR5 is high, but is it higher than GDDR5 + DDR3 combined? Probably not. So ultimately it comes down to how much memory is copied. And it could just as easily be the usual console-maker hype to overshadow the fact that it's really just a cost savings not to need X + Y RAM, plus the complication of fitting both on a mobo that is going to be all soldered in most cases.

My guess is that in practice, for a system with limited RAM, a shared 8GB is more flexible and better than 4GB + 4GB, but I doubt it's better than 2GB + 8GB.


You are missing a lot of realities about how computers currently work. Instead of focusing on "how much", you need to be thinking about the time it takes to shuffle stuff around, over and over.

For example, GDDR5 + DDR3 does not give more bandwidth than all GDDR5, for two reasons:

1. DDR3 is slower.

2. To pass information between the two pools of RAM, it has to go over the PCIe bus, which is slower than if the data were already sitting in a fully accessible, shared pool. Not to mention a super fast shared pool: that shared RAM offers 176 GB/s of bandwidth over a unified bus, which is more bandwidth than a 7870 gets, and since this is the CPU's fully accessible memory pool as well, it means roughly 3 times more bandwidth to the CPU than an Intel i7 running quad-channel DDR3.

quote:"The 'supercharged' part, a lot of that comes from the use of the single unified pool of high-speed memory," said Cerny. The PS4 packs 8GB of GDDR5 RAM that's easily and fully addressable by both the CPU and GPU.

If you look at a PC, said Cerny, "if it had 8 gigabytes of memory on it, the CPU or GPU could only share about 1 percent of that memory on any given frame. That's simply a limit imposed by the speed of the PCIe. So, yes, there is substantial benefit to having a unified architecture on PS4, and it’s a very straightforward benefit that you get even on your first day of coding with the system. The growth in the system in later years will come more from having the enhanced PC GPU. And I guess that conversation gets into everything we did to enhance it."


The CPU and GPU are on a "very large single custom chip" created by AMD for Sony. "The eight Jaguar cores, the GPU and a large number of other units are all on the same die," said Cerny. The memory is not on the chip, however. Via a 256-bit bus, it communicates with the shared pool of ram at 176 GB per second.

"One thing we could have done is drop it down to 128-bit bus, which would drop the bandwidth to 88 gigabytes per second, and then have eDRAM on chip to bring the performance back up again," said Cerny. While that solution initially looked appealing to the team due to its ease of manufacturability, it was abandoned thanks to the complexity it would add for developers. "We did not want to create some kind of puzzle that the development community would have to solve in order to create their games. And so we stayed true to the philosophy of unified memory."


In fact, said Cerny, when he toured development studios asking what they wanted from the PlayStation 4, the "largest piece of feedback that we got is they wanted unified memory."




quote: After talking about the heavily tweaked compute abilities of the PS4...."But that vision created a major challenge: "Once we have this vision of asynchronous compute in the middle of the console lifecycle, the question then becomes, 'How do we create hardware to support it?'"

One barrier to this in a traditional PC hardware environment, he said, is communication between the CPU, GPU, and RAM. The PS4 architecture is designed to address that problem.

"A typical PC GPU has two buses," said Cerny. "There’s a bus the GPU uses to access VRAM, and there is a second bus that goes over the PCI Express that the GPU uses to access system memory. But whichever bus is used, the internal caches of the GPU become a significant barrier to CPU/GPU communication -- any time the GPU wants to read information the CPU wrote, or the GPU wants to write information so that the CPU can see it, time-consuming flushes of the GPU internal caches are required."



So as you can see, they eliminated the time wasted swapping data between two pools of RAM running at different speeds over a relatively slow bus (PCIe only gives about 16 GB/s on a 16-lane slot), as well as the latency created by all the wait states and cache flushing required.

The PS4 will also punch way above its weight on compute performance from the GPU, not only due to the huge boost in bandwidth, but also because of the heavy modifications Sony asked AMD to make to the GCN architecture:

quote: 'The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands — the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that’s in the system."
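If you want to sanity-check the bus numbers thrown around above, the quick math looks like this (theoretical peaks; the quad-channel line assumes DDR3-1866 on a high-end desktop, which is my assumption rather than anything from the article):

```python
def gb_s(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits / 8) / 1000

pcie3_x16 = 8 * (128 / 130) * 16 / 8     # 8 GT/s/lane, 128b/130b encoding, 16 lanes -> GB/s
ddr3_quad = gb_s(1866, 256)              # quad-channel DDR3-1866
ps4_pool  = gb_s(5500, 256)              # unified GDDR5 pool

print(f"PCIe 3.0 x16:           {pcie3_x16:6.1f} GB/s")
print(f"quad-channel DDR3-1866: {ddr3_quad:6.1f} GB/s")
print(f"PS4 unified GDDR5:      {ps4_pool:6.1f} GB/s  (~{ps4_pool/ddr3_quad:.1f}x the CPU bandwidth)")
```

The PCIe figure is the bottleneck Cerny is talking about: no matter how fast the two pools are on a PC, the hop between them runs at roughly 16 GB/s.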
 
The OS has to support it in order for it to work in a PC, and neither MS nor the 'nix guys have any support for a unified memory system, so it was never really pushed. The idea has been around since the PC was first created.

What's the point of creating software for hardware that doesn't exist? The 360 and the Xbox One are both based on Windows, and their operating systems support unified memory. Compared to the hardware effort for unified memory on PCs, the software effort barely exists.

'Nix and Windows have been ported to ARM, Power PC, and other architectures, but the hardware comes first.
 
What's the point of creating software for hardware that doesn't exist? The 360 and the Xbox One are both based on Windows, and their operating systems support unified memory. Compared to the hardware effort for unified memory on PCs, the software effort barely exists.

'Nix and Windows have been ported to ARM, Power PC, and other architectures, but the hardware comes first.

That's my point: the OS doesn't support it, so why would the hardware? The hardware doesn't have it, so why would the OS support it? Since Windows is BY FAR the largest install base for PCs, why would they support something that doesn't exist?

The 360 and the XBONE will use segmented memory just as current PCs do with integrated graphics: X dedicated to the OS, Y dedicated to video. The PS4 is, as far as I know, the first truly unified memory space...
 
Hardware vs. software support is a circular argument. The problem is it applies to everything, and if people said "well, why support it," that would be true of all advances and we would never go anywhere. So I don't think software is the issue; the problem seems to be that the gains are not big enough to justify building hardware for it. Windows typically adds support for new hardware by the next OS release, whatever that hardware is, so typically hardware comes first and then software.

It is interesting to note that your typical dual-channel PC should already have around 20GB/s of bandwidth on the DDR3, and the 7870 itself has 156GB/s, so isn't it ironic that Sony claims 176GB/s is some big deal when it's so close to what is already out there? That can't be an accident.

Ultimately it will be interesting to see if this pans out, but personally, I'm sorry, I just have my doubts. If AMD knew that all they had to do was slap GDDR5 on APUs to have a smash-hit home run, why have they not already done it for laptops? Surely MS would have supported it by the next Windows release. MS at times actually seems to have a better relationship with AMD than with Intel. And we all know AMD needs any break they can get in order to compete with Intel in the mobile space.

My guess is that in reality all the huffing and puffing doesn't really add up to that much more performance. We already heard why for CPUs it's better to have DDR3 than GDDR5, so that means the PS4 is taking a step back on that front in the hope that the combined memory will make up for it. And the other problem is that the explanations don't answer some of my core questions; they are just quotes of Sony propaganda. That doesn't mean they are false, of course, but the actual size of the effect is still very questionable. We only need to look at Cell to see how supposed breakthroughs in power were never what was promised.
 
Hardware vs. software support is a circular argument. The problem is it applies to everything, and if people said "well, why support it," that would be true of all advances and we would never go anywhere. So I don't think software is the issue; the problem seems to be that the gains are not big enough to justify building hardware for it. Windows typically adds support for new hardware by the next OS release, whatever that hardware is, so typically hardware comes first and then software.

It is interesting to note that your typical dual-channel PC should already have around 20GB/s of bandwidth on the DDR3, and the 7870 itself has 156GB/s, so isn't it ironic that Sony claims 176GB/s is some big deal when it's so close to what is already out there? That can't be an accident.

Ultimately it will be interesting to see if this pans out, but personally, I'm sorry, I just have my doubts. If AMD knew that all they had to do was slap GDDR5 on APUs to have a smash-hit home run, why have they not already done it for laptops? Surely MS would have supported it by the next Windows release. MS at times actually seems to have a better relationship with AMD than with Intel. And we all know AMD needs any break they can get in order to compete with Intel in the mobile space.

My guess is that in reality all the huffing and puffing doesn't really add up to that much more performance. We already heard why for CPUs it's better to have DDR3 than GDDR5, so that means the PS4 is taking a step back on that front in the hope that the combined memory will make up for it. And the other problem is that the explanations don't answer some of my core questions; they are just quotes of Sony propaganda. That doesn't mean they are false, of course, but the actual size of the effect is still very questionable. We only need to look at Cell to see how supposed breakthroughs in power were never what was promised.

Because the laptops would cost too much. AMD sells into markets where price rules above everything else.
 
It is interesting to note that your typical dual-channel PC should already have around 20GB/s of bandwidth on the DDR3, and the 7870 itself has 156GB/s, so isn't it ironic that Sony claims 176GB/s is some big deal when it's so close to what is already out there? That can't be an accident.
The important aspect here is that the bandwidth is available to the CPU as well, effectively tripling the CPU bandwidth normally available at the high end of the consumer level. This will be especially important for multi-threading and asynchronous compute.

Ultimately it will be interesting to see if this pans out, but personally, I'm sorry, I just have my doubts. If AMD knew that all they had to do was slap GDDR5 on APUs to have a smash-hit home run, why have they not already done it for laptops? Surely MS would have supported it by the next Windows release. MS at times actually seems to have a better relationship with AMD than with Intel. And we all know AMD needs any break they can get in order to compete with Intel in the mobile space.

Because GDDR5 is expensive. It would make laptops cost more when they don't need that performance. The average consumer doesn't need 176 GB/s of CPU bandwidth for their portable internet and computing tasks. Sony are the ones who asked for that, because they want the PS4 to be as good as possible.

My guess is that in reality all the huffing and puffing doesn't really add up to that much more performance. We already heard why for CPUs it's better to have DDR3 than GDDR5
The argument is that DDR3 is lower latency. OK, but faster RAM always carries higher latency numbers. You could get DDR1 RAM with 2-2-2 timings, but that's not better than DDR3-1600, even though DDR3-1600 commonly has 9-9-9 timings. Nobody wants DDR1 on their i5; the extra speed and bandwidth overshadow the higher latency. It's the same principle with GDDR5 vs. DDR3: it's so much faster that the latency won't be an issue. And Sony has cut out the need for so much cache clearing and so many wait states, and unified the GPU and CPU into the same memory pool on a bus that is way faster than what the GPU normally uses to communicate with the rest of the system, so communicating with each other is dramatically faster. Sure, there are some non-gaming/graphics situations in which tight RAM timings are probably preferable, but the PS4's setup is, overall, better for gaming and better for the design intent: removing internal bottlenecks and supporting the possibilities of asynchronous compute.
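The latency point is easy to check with the same example timings: the CAS number climbs with each generation, but so does the clock, so the absolute delay barely moves:

```python
def cas_ns(cl, effective_mt_s):
    # I/O clock is half the effective transfer rate; CAS delay = CL / clock.
    return cl / (effective_mt_s / 2 * 1e6) * 1e9

print(f"DDR-400  CL2 (2-2-2):  {cas_ns(2, 400):5.2f} ns")
print(f"DDR3-1600 CL9 (9-9-9): {cas_ns(9, 1600):5.2f} ns")
```

About 10 ns vs. about 11 ns, so the "GDDR5 latency" worry is mostly about cycle counts, not real time.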

We only need to look at Cell to see how supposed breakthroughs in power were never what was promised.

I would actually argue that Cell delivered in spades. I'm not going to waste a bunch of space detailing it, but there are things Cell can do with Folding@home and other similar scientific applications that an i7 could only dream about. Similarly, Cell was adaptable to graphics-pipeline duties that, again, conventional CPUs can only dream about. If you look at the BF3 slides from DICE, they say straight up that Cell is better than an i7 for graphics-pipeline work and rivals current graphics cards for certain duties.

http://www.slideshare.net/DICEStudio/spubased-deferred-shading-in-battlefield-3-for-playstation-3

The problem with Cell is that it takes highly customized code to work, and most games are built on middleware and APIs that abstract away the need for that customization, at a cost in performance but with a benefit in ease of development. As such, not every dev took advantage of it, and game performance suffered when the code wasn't custom tailored. This is probably also a good indication of why GPU compute on the PC side has been mostly ignored.

*Sony plans to use the beefed-up compute abilities of the PS4's AMD GPU and the 8-thread capability of the CPU to encourage a workflow similar to what was happening with the PS3's SPUs. They hope to have devs regularly doing this within a couple of years.
 
I see someone trying far too hard to defend the Wii U...

That's what I see too. It feels like he's leaping to the conclusion that because the architecture CAN be implemented in a machine that's about the same speed as a PS4, the Wii U must be about as fast as a PS4.

The underpants gnomes are missing some steps here.
 
The linked video is simply absurd and he really should not be trying to intentionally misinform people. The Wii U is not and will never be anywhere near as powerful as the X1 or PS4.
 
I don't know, it sounds to me more like a bunch of people making stupid assumptions based on shit they know nothing about.

Pretty much like 99.9% of the posts in this section of the forum. The previous gen was very different in respect to specs, and the games didn't vary that much at all. I'm speaking of the 360's 512MB of GDDR3 vs. the PS3's 256MB of XDR and 256MB of GDDR3.

I think they will be much closer than all the haters (which here rule the roost) make it out to be.
 
The linked video is simply absurd and he really should not be trying to intentionally misinform people. The Wii U is not and will never be anywhere near as powerful as the X1 or PS4.

I don't think you should try to compare it either, but I don't think it's as weak as some make it out to be, and I still believe it's definitely more powerful than the 360/PS3. At least it has backward compatibility :) there are some really awesome Wii games out there I had never played, on top of a massive old-school library between the eShop and the Wii Virtual Console.

I look forward to both new systems. Gonna be a nice holiday this year!
 
I don't think you should try to compare it either, but I don't think it's as weak as some make it out to be, and I still believe it's definitely more powerful than the 360/PS3. At least it has backward compatibility :) there are some really awesome Wii games out there I had never played, on top of a massive old-school library between the eShop and the Wii Virtual Console.

I look forward to both new systems. Gonna be a nice holiday this year!

Meh... I wouldn't get a Wii U for Wii games. Only a small handful are actually all that good (Xenoblade and The Last Story had a lot of people going crazy, Super Mario Galaxy is great, Twilight Princess and Skyward Sword are good; that's most of it right there), and it's sometimes really hard to go from Xbox 360 stuff to Wii stuff. HD graphics have spoiled me.
 
I've rambled plenty, to those who don't understand the tech, about DDR3- vs. GDDR5-equipped cards and why the GDDR5 cards are superior.

Tom's Hardware recently took two cards that were equal in everything except memory: one has 2GB of DDR3 and the other has 1GB of GDDR5. The GDDR5 card performed almost 30% better on average despite having half the memory.

SOURCE
 