PS5/XBOX X vs 3080Ti/Big Navi

Crazy hype for console performance has been around forever. Sony claimed the PS2 would give you Pixar-quality images in real time, the PS3 was touted as the greatest revolution in gaming ever with its exotic processor, etc.
The PS3 and PS4 both launched with the graphical performance of a $200-at-the-time graphics card (7900 GT/Radeon 7870); at best we're talking a bump to a lower-high-end class of GPU (x070), and only if there is a substantial price bump as well. At this point consoles and PCs share the same engines, the same APIs, and the same microarchitectures, so there's no magic; at best, consoles get a small boost from an all-GDDR memory design.
 
I never understand these threads: wait until the hardware ships. The consoles will likely be tough to get when they launch, and the 3080 Ti will likely sell out ASAP anyway. Wait for benchmarks and for what games ship on which platforms, then decide; speculating like this makes no sense.
 
We all know how that turned out for the PS4.

Software for consoles in the release window usually does suck, as developers continue to push out games for the last-generation hardware. It usually takes about 1-1.5 years to see any significant improvement in games after a new console generation comes out. I read somewhere that Valhalla is not using ray tracing, probably because of this.

I think that was likely devs reacting to the Cell processor of the PS3 and the PS4 being a traditional x86 chip.
 
Initial games probably won't look much better than the current generation, but the current generation's issue is not that games look bad; it's that their frame rates are pathetic.
PS4/XO games often looked like PC versions but with low/medium details. The first batch of PS5/XSX games will look like PC versions at high/very high settings and will run at 60fps, and that imho is enough.

It will take years to utilize all the new features of the RDNA2 GPU. These new features are exactly the difference between DirectX 12 and DirectX 12 Ultimate; we have already had them for almost two years on PC, and still only a few games really use any of them, let alone all of them at once.
 
The PS3's issue was not that it wasn't x86 but that it had only one normal core. Cell had a few additional semi-cores that could not even manage their own memory accesses and could only do a limited set of vector calculations on local cache memory. In general the CPU performance was there, but it was very hard to utilize.

If all cores in the Cell had been normal cores, its CPU performance would have been even better than the PS4's. The X360 had three normal cores, and the last thing anyone had an issue with on that console was CPU performance.
Being PowerPC rather than x86 was not an issue, because most code is written in C/C++ and assembler optimizations are only used to speed up a few critical routines; rewriting those is not such a big issue. I would even say that good programming practice is to first write your code in C/C++ and only then rewrite it in assembly, so you have a good reference for debugging. If anything, it was the PC ports/versions that suffered because of the PowerPC nature of the previous-generation consoles, because developers could just compile their C/C++ code, or worse: use automatic translation tools to generate C/C++ code from PowerPC assembly and compile that...

This idea that PS4/XO having x86 was some kind of advantage is just a myth. The most important improvement this generation brought, and what made developers' lives much easier, was having a few gigabytes of memory versus 512MB (minus some used by the dashboard/OS) on the previous generation. CPU performance of the current consoles is pretty pathetic.
 
Are you sure, dude? I recall hearing devs complaining about Cell and being overjoyed to get back to x86, but I dunno.
 

You are both right. XoR hit the nail on the head that the main source of pain was porting.

The memory aspect should be mitigated to some extent by NVMe, which gives devs a bit more freedom to leave some data on disk instead of filling up RAM.
 
Devs were dissatisfied with Cell because of two aspects:
1. The PowerPC architecture, which made them maintain two sets of code paths
2. The strange core structure, with only one normal core and a bunch of pretty much useless cores

re 1. Even if most code was in C/C++, there could still be issues because PowerPC is big-endian and x86 is little-endian, so any part of the code that manipulates bytes directly would break. These kinds of bugs are also hard to debug. It is possible to write completely endian-independent code, but it is not exactly the fastest way to program things, and in game engines performance matters. There is also the whole aspect of direct assembly code, which cannot just be run directly and needs to be rewritten. It is simply easier for programmers to have the same processor type in all devices. BTW, ARM for example is almost universally run little-endian, so all C/C++ code should work straight away without issues.

re 2. Notice how developers didn't really complain that much about Xenon, the CPU of the X360. It had fewer cores, and because of that less raw processing power than Cell, but you could program it the same way you programmed for PC.
The Cell SPEs were harder to program for, but they were actually very similar to how we currently program GPGPU, and in theory pretty powerful for many tasks game engines need to perform, like geometry transformations, post-processing filters, physics calculations, DSP, etc. At the time GPGPU technology was only starting to be a thing, and even multi-core programming was pretty much new. When you had to choose which platforms to concentrate your efforts on, it was obvious the PS3 was the minority; its sales weren't even that great, only comparable to the X360... so in the end, code that ran on two cores/threads on X360 and PC had to be executed on one core/thread on the PS3... and games still needed to run at 30fps on both consoles, thus for developers Cell was simply a headache.

In later years, as the code base for the SPEs grew and Sony provided more functions, they were utilized much more than at the beginning. A good example would be MLAA, which on the X360 ran on the GPU and took quite a significant performance hit; Sony provided it in the PS3 devkits and it executed on the SPEs with zero GPU time cost.

Developers were certainly not satisfied with the CPU performance of the PS4/XO CPUs, but at least they then had a unified ecosystem: no need to maintain code paths for different processor architectures, and any optimizations they did for one console would nicely carry over to the other and to PC. Compared to the mess that was the previous generation, it was surely an improvement and a reason to be happier 🙂
 