PS5, Next Xbox Will Likely Have 8 to 12GB of RAM, Says Hellpoint Dev

I went back and read it again. I think I read about Hynix announcing 8GB DDR5 RAM and misread it as GDDR5. So I misread that; apologies.

Looking at GDDR6 chips, they are still only 8Gb per chip, so even then it seems unrealistic to get anything beyond 12-16GB into a console form factor, especially if you factor in additional DDR4/5 for system memory as well.
Samsung is producing 16Gb GDDR6 chips.
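For a sense of scale, here is a rough back-of-the-envelope capacity calculation; the 256-bit bus width and 32-bit chip interface below are illustrative assumptions, not known specs of any next-gen console.

```python
# Rough GDDR6 capacity math. Assumptions: a 256-bit memory bus and standard
# 32-bit-wide GDDR6 packages; real console bus widths are not public yet.
BUS_WIDTH_BITS = 256
CHIP_WIDTH_BITS = 32
chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS   # -> 8 chips on the bus

for density_gbit in (8, 16):
    capacity_gbyte = chips * density_gbit / 8   # 8 bits per byte
    print(f"{chips} x {density_gbit}Gb chips = {capacity_gbyte:.0f} GB")

# 8 x 8Gb chips  = 8 GB
# 8 x 16Gb chips = 16 GB
```

So with 16Gb parts, 16GB of unified GDDR6 fits on an ordinary 256-bit bus without resorting to clamshell mode or a wider (and pricier) memory interface.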
 
Except there are still downsides. GDDR has *very* high latency compared to DDR. For GPUs, which need to push insane amounts of bandwidth and work against a relatively long (~16 ms) frame cycle, this is a non-factor, but it does degrade CPU memory read latency. And at the end of the day, each resource still ends up with less than it would have had if dedicated pools were used. Either the game makes sacrifices for graphical performance, or the graphical performance is secondary to the game. No matter what, someone gets squeezed.
I wonder what the actual hit in performance would be. A: a high memory frequency can make up for high latency. B: the PS4 is designed so that the GPU and CPU can read/write independently of each other, with less redundancy and less need for a "wait", and there is an extra bus to facilitate this. In theory, this would offset some of the issues of higher latency ratings, because we aren't dealing with a traditional PC setup, upon which our general knowledge of such latency ratings is based.
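As a very rough way to frame it (every number below is a placeholder assumption, not a measurement), a per-access latency penalty is vanishingly small next to a 16.7 ms frame, but it compounds if the CPU ends up serializing a long chain of cache misses:

```python
# Illustrative only: scaling a per-access latency penalty against a 60 fps
# frame budget. The latency and miss-count figures are made-up placeholders.
FRAME_NS = 1_000_000_000 / 60          # ~16.7 ms frame budget, in nanoseconds
EXTRA_LATENCY_NS = 50                  # assumed extra latency per miss vs. DDR
SERIAL_MISSES_PER_FRAME = 100_000      # assumed dependent (serialized) misses

overhead_ns = EXTRA_LATENCY_NS * SERIAL_MISSES_PER_FRAME
print(f"One access: {EXTRA_LATENCY_NS / FRAME_NS:.7f} of the frame")
print(f"{SERIAL_MISSES_PER_FRAME} serial misses: {overhead_ns / FRAME_NS:.0%} of the frame")
```

So whether the hit is real comes down to how often the CPU actually stalls on memory, not the raw latency figure on the spec sheet.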

Indeed, everything is a balance. But the APU setup is cost effective and is probably the best balance, overall, for the foreseeable future. Of course, favoring the GPU is the better choice. We can see that with the Xbone, which uses system RAM and a teeny ESRAM buffer. I doubt we will see large, dedicated RAM pools, unless MS or Sony decide they want to start losing a lot of money per console again, which they won't do after the issues with the PS3 and 360.
 
We can see that with the Xbone, which uses system RAM and a teeny ESRAM buffer. I doubt we will see large, dedicated RAM pools.
That was only for the original XBone, the XBoneX uses GDDR5 just like the PS4/Slim/Pro.
I would be very surprised if they moved away from a unified memory architecture and went back to separate RAM and VRAM.

While unified memory doesn't offer much benefit outside of a controlled device like a console, for such devices it is an increasingly important advantage, especially for the consoles' price/performance ratio and their profit margins.
 
I wonder what the actual hit in performance would be. A: a high memory frequency can make up for high latency. B: the PS4 is designed so that the GPU and CPU can read/write independently of each other, with less redundancy and less need for a "wait", and there is an extra bus to facilitate this. In theory, this would offset some of the issues of higher latency ratings, because we aren't dealing with a traditional PC setup, upon which our general knowledge of such latency ratings is based.

Indeed, everything is a balance. But the APU setup is cost effective and is probably the best balance, overall, for the foreseeable future. Of course, favoring the GPU is the better choice. We can see that with the Xbone, which uses system RAM and a teeny ESRAM buffer. I doubt we will see large, dedicated RAM pools, unless MS or Sony decide they want to start losing a lot of money per console again, which they won't do after the issues with the PS3 and 360.

The XBOX's ESRAM buffer is a PITA to manage, but at least it's predictable.

APUs are used because they are cheap and have a low power profile, but it comes at a significant performance cost. Relative to PC performance, this is by far the weakest console generation ever. Case in point: the PS3 CPU is about twice as powerful as the PS4 CPU (~185 GFLOPS versus ~105 GFLOPS). This console generation was outright unprepared for the growth in the complexity of games and having to push 4K, hence the significant mid-generation release of what are essentially new consoles.
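For reference, those figures line up with a simple peak-throughput calculation (cores × clock × FP32 operations per cycle); the per-cycle widths below are the commonly cited ones for each chip, taken here as assumptions.

```python
# Peak FP32 throughput = cores x clock (GHz) x FP32 ops per core per cycle.
# The per-cycle widths are the commonly cited figures, not measured values.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

ps4_jaguar = peak_gflops(cores=8, clock_ghz=1.6, flops_per_cycle=8)
ps3_cell = peak_gflops(cores=1 + 6, clock_ghz=3.2, flops_per_cycle=8)  # PPE + 6 game SPEs

print(f"PS4 Jaguar peak: ~{ps4_jaguar:.0f} GFLOPS")              # ~102 GFLOPS
print(f"PS3 Cell peak (PPE + 6 SPEs): ~{ps3_cell:.0f} GFLOPS")   # ~179 GFLOPS
```

Peak numbers, of course, say nothing about how much of that throughput real game code ever reaches, which is where the argument below picks up.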
 
The XBOX's ESRAM buffer is a PITA to manage, but at least it's predictable.

APUs are used because they are cheap and have a low power profile, but it comes at a significant performance cost. Relative to PC performance, this is by far the weakest console generation ever. Case in point: the PS3 CPU is about twice as powerful as the PS4 CPU (~185 GFLOPS versus ~105 GFLOPS). This console generation was outright unprepared for the growth in the complexity of games and having to push 4K, hence the significant mid-generation release of what are essentially new consoles.

Eh? The PS3 CPU blew massive chunks. There is a reason they went away from it. While its paper specs were impressive, its actual usable performance was way, way worse due to the significant programming limitations imposed by the SPUs. The SPUs basically sat in the uncanny valley, being the worst of what CPUs and GPUs had to offer. There isn't a single developer out there who would trade the PS4 CPU for the PS3 CPU.

Also, no game actually pushes 4K; they upscale. Hell, even a 2080 Ti still has issues at 4K. Native 4K just isn't worth it and likely never will be.
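For context on why native 4K is such a heavy ask, the raw pixel counts tell most of the story:

```python
# Pixels per frame at common render resolutions; 4K is ~4x the shading work
# of 1080p before anything else in the frame gets more expensive.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base = resolutions["1080p"][0] * resolutions["1080p"][1]

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} MP ({pixels / base:.2f}x 1080p)")
```

Hence checkerboarding, dynamic resolution, and the other upscaling tricks the consoles lean on.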
 
Eh? The PS3 CPU blew massive chunks. There is a reason they went away from it. While its paper specs were impressive, its actual usable performance was way, way worse due to the significant programming limitations imposed by the SPUs. The SPUs basically sat in the uncanny valley, being the worst of what CPUs and GPUs had to offer. There isn't a single developer out there who would trade the PS4 CPU for the PS3 CPU.

Also, no game actually pushes 4K; they upscale. Hell, even a 2080 Ti still has issues at 4K. Native 4K just isn't worth it and likely never will be.

The Cell had a maximum theoretical throughput of ~240 GFLOPS. Due to the aforementioned difficulties in actually programming it, ~180 GFLOPS was considered "typical" performance. That's still significantly more than the ~110 GFLOPS the APU in the PS4 can handle.
 
Eh? The PS3 CPU blew massive chunks. There is a reason they went away from it. While its paper specs were impressive, its actual usable performance was way, way worse due to the significant programming limitations imposed by the SPUs. The SPUs basically sat in the uncanny valley, being the worst of what CPUs and GPUs had to offer. There isn't a single developer out there who would trade the PS4 CPU for the PS3 CPU.

Also, no game actually pushes 4K; they upscale. Hell, even a 2080 Ti still has issues at 4K. Native 4K just isn't worth it and likely never will be.
There are plenty of games pushing out native 4K on consoles right now. Forza 7 is 4K/60 on the Xbox One X.
 
The Cell had a maximum theoretical throughput of ~240 GFLOPS. Due to the aforementioned difficulties in actually programming it, ~180 GFLOPS was considered "typical" performance. That's still significantly more than the ~110 GFLOPS the APU in the PS4 can handle.

Isn’t that only for very specific tasks under very specific workloads? I mean, Naughty Dog took months designing The Last of Us to take advantage of the Cell and make it work with the SPUs, but moved on to Jaguar in just a few weeks using very similar code.

I don’t think any developer on earth would have wanted a 2013-updated Cell for the PS4. x86 or PowerPC were the only logical choices, and APUs, where they were back then, wouldn’t have worked, so it was only ever Jaguar or PowerPC.

Cell was powerful, but you never really saw that power show up in comparison to the 360.

EDIT: To clarify, TLoU was remastered for PS4, where ND took the source code and used Jaguar’s multi-threading capabilities to do what the SPUs did, to much greater effect, and the port was extremely quick.
 
The Cell had a maximum theoretical throughput of ~240 GFLOPS. Due to the aforementioned difficulties in actually programming it, ~180 GFLOPS was considered "typical" performance. That's still significantly more than the ~110 GFLOPS the APU in the PS4 can handle.

LOL, 75% of peak as typical? You are in a dreamworld with those numbers. Let's put it this way: even with a super easy to program vector CPU, 75% of peak is considered insanely good in actual useful programs. Hell, no one is getting 75% on the PS4 with a much, much less complex and easier-to-program system. Realistic performance levels of the SPUs in actual programs were in the 10-15% range at best. Cell was widely considered a technical failure and a dead end by Sony itself due to the complexity issues. Developers have been able to get much more actual performance from the PS4 CPU than Cell was ever able to deliver, and do it with a fraction of the development resources.
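Taking those utilization estimates at face value (they are rough figures used here only to show the scale of the gap), the sustained numbers look like this:

```python
# The claimed utilization ranges turned into sustained GFLOPS, using the
# peak figures cited upthread (~240 for Cell, ~110 for the PS4 APU).
CELL_PEAK_GFLOPS = 240
PS4_PEAK_GFLOPS = 110

for util in (0.10, 0.15):
    sustained = CELL_PEAK_GFLOPS * util
    print(f"Cell at {util:.0%} of peak: ~{sustained:.0f} GFLOPS "
          f"(= {sustained / PS4_PEAK_GFLOPS:.0%} of the PS4 APU's peak)")
```

In other words, if those utilization estimates are even close, a modest fraction of the Jaguar's peak already matches what the SPEs actually delivered.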
 
The Cell had a maximum theoretical throughput of ~240 GFLOPS. Due to the aforementioned difficulties in actually programming it, ~180 GFLOPS was considered "typical" performance. That's still significantly more than the ~110 GFLOPS the APU in the PS4 can handle.

The PlayStation 3 units were all equipped with an IBM Cell CPU @ 3.2GHz, with 1 PPE (a PowerPC general-purpose CPU core with SMT) and 7 SPEs (8 natively on the Cell, with 1 SPE disabled in the PS3's Cell and 1 dedicated to the hypervisor/OS), giving roughly 25 GFLOPS FP32 in the PPE and 150 GFLOPS FP32 in the 6 available SPEs (25 GFLOPS per SPE) - this is the 175-180 GFLOPS of performance you are talking about which was dedicated to the game software.
While the IBM PowerPC-based Cell is *technically* faster in floating-point operations compared to the AMD Jaguar, the Jaguar completely destroys the Cell in integer operations, not to mention most of the physics and other floating-point work has been moved from the CPU to the GPU on the current-gen consoles, resulting in less of a need for it on the CPU.

The Cell also allowed software technologies like NVIDIA PhysX to even run on the PS3, since the GPU in it was from the generation right before the G80, did not have a unified-shader architecture, and was not itself capable of running PhysX - the Cell CPU did all of that with its SPE units.
Without the Cell, that wouldn't have been possible on the PS3 with games like Batman Arkham Asylum - it was also released on the 360, but due to the lack of NVIDIA PhysX via the Cell SPE units (the 360's Xenon CPU didn't have these - it was basically three FPU-upgraded PPEs), some features were not available:
With PhysX enabled, some areas contain smoke or fog which reacts to Batman moving through it, while with PhysX disabled the fog will not appear at all.
Other effects include dynamic interaction with paper and leaves, surfaces which can be scratched and chipped, and dynamic, destructible cloth elements such as banners and cobwebs.

I'm not saying the 360 wasn't a capable console, as it certainly was, and the shared memory pool made things much more flexible compared to the PS3's dedicated RAM and VRAM pools.
The IBM Cell CPU carried the PS3 far further than it probably should have gone, which is a testament to its capabilities, especially when the programming was done right and highly optimized - which was anything but an easy task for any software development team.
 
Relative to PC performance, this is by far the weakest console generation ever.
On this point, you are right.
When the 360 and PS3 released, in terms of CPU and GPU processing ability and performance, they absolutely demolished top-end PC x86 equipment and OEM GPUs of the time, at least for the first few years, until faster dual-core (and eventually quad-core) CPUs and the next generation of AMD/ATI and NVIDIA GPUs were released.
 