Next-gen Xbox Scarlett specs: 12TFLOPs, 16GB RAM, 3.5GHz Zen 2 CPU

erek

[H]F Junkie
Joined
Dec 19, 2005
Messages
10,874
Xbox Lockhart RAM performance seems suboptimal compared to Scarlett.


"
ELN9ZB4X0AEEj6Q?format=png&name=360x360.png
"

 
Memory sizes don't tell us anything about the actual memory clock speed, just the bus width.

To feed something more powerful than an RX 5700 XT, you need 384-bit 14 Gbps GDDR6, so 24GB works.

20GB can work with either 320-bit or 160-bit. I would probably use 12 Gbps 160-bit GDDR6 to save costs (the same trick Nvidia pulled with the 1660 Ti versus the rest of Turing), which would explain why it's having trouble meeting the 1440p performance level (less bandwidth than a 1660 Ti)!

Also, these are likely development kits, and retail units usually cut that memory in half. 10 and 12GB sound just fine for the next generation of sub-$400 APUs.
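A quick way to sanity-check those combinations (my own arithmetic, a sketch assuming peak bandwidth is simply bus width times per-pin data rate; real-world utilization is lower):

```python
# Peak GDDR bandwidth: (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(384, 14))  # 672.0 GB/s -- 384-bit 14 Gbps for the big part
print(bandwidth_gbs(160, 12))  # 240.0 GB/s -- hypothetical cost-cut Lockhart
print(bandwidth_gbs(192, 12))  # 288.0 GB/s -- 1660 Ti, which Lockhart would trail
```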
 
I dunno if I believe 16GB RAM in the top end. A 256-bit bus is not going to be enough bandwidth for an APU with 30% higher performance than the 5700 XT, and 512-bit is just nuts for a $500 console!

The only sane option is 384-bit (currently in use on the One X). So the only sane memory capacity is 12GB for the high end, and who knows for the low end?

For low-end Lockhart, 8GB (128-bit) or 10GB (160-bit) could work.

TweakTown is just guessing by doubling the RAM size of the PS4. They're completely ignoring how bandwidth-hungry doubling the performance of the Xbox One X would be!
 
Memory sizes don't tell us anything about the actual memory clock speed, just the bus width.

To feed something more powerful than an RX 5700 XT, you need 384-bit 14 Gbps GDDR6, so 24GB works.

20GB can work with either 320-bit or 160-bit. I would probably use 12 Gbps 160-bit GDDR6 to save costs (the same trick Nvidia pulled with the 1660 Ti versus the rest of Turing), which would explain why it's having trouble meeting the 1440p performance level (less bandwidth than a 1660 Ti)!

Also, these are likely development kits, and retail units usually cut that memory in half. 10 and 12GB sound just fine for the next generation of sub-$400 APUs.

160-bit GDDR6 would be rather crappy. My guess is 320-bit GDDR5. It would basically be 5/6ths of a One X with a much better CPU.
 
160-bit GDDR6 would be rather crappy. My guess is 320-bit GDDR5. It would basically be 5/6ths of a One X with a much better CPU.

Remember, there are two parts here: 12 Tflops (almost certainly 384-bit GDDR6), and a 4 Tflops cut version with either 10GB (320-bit or 160-bit), or possibly cut down to 8GB of 128-bit GDDR6.

Since Lockhart is either 4-5 Tflops (depending on the leak), it's well within range of 160-bit GDDR5. See the GTX 1060 5GB card:

https://www.techpowerup.com/gpu-specs/geforce-gtx-1060-5-gb.c3060

If you replaced it with 9 Gbps memory, the bandwidth would be nearly identical to the 192-bit version of the 1060. And from what we've seen, AMD's Navi has caught up with Pascal on memory efficiency.
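Running the same bus-width arithmetic on that comparison (a sketch; the 8 and 9 Gbps values are the GDDR5 speeds being discussed):

```python
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gbs(160, 8))  # 160.0 GB/s -- GTX 1060 5GB as shipped
print(bandwidth_gbs(160, 9))  # 180.0 GB/s -- same card with 9 Gbps chips
print(bandwidth_gbs(192, 8))  # 192.0 GB/s -- the 192-bit GTX 1060 6GB
```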
 
There aren't enough guffaws in the world for all the "next xbox 4k 120FPS!!111!!1122" article titles.
 
Remember, there are two parts here: 12 Tflops (almost certainly 384-bit GDDR6), and a 4 Tflops cut version with either 10GB (320-bit or 160-bit), or possibly cut down to 8GB of 128-bit GDDR6.

Since Lockhart is either 4-5 Tflops (depending on the leak), it's well within range of 160-bit GDDR5. See the GTX 1060 5GB card:

https://www.techpowerup.com/gpu-specs/geforce-gtx-1060-5-gb.c3060

If you replaced it with 9 Gbps memory, the bandwidth would be nearly identical to the 192-bit version of the 1060. And from what we've seen, AMD's Navi has caught up with Pascal on memory efficiency.

These numbers would make more sense if both had 8GB of system memory. That would leave Lockhart with 12GB of 384-bit GDDR5 (cost savings, and perhaps sharing the One X board layout). The 800 MHz could be the DDR4 speed (3200 MHz effective).

Anaconda would then have 16GB of 256-bit GDDR6, and at 1900 MHz (15.2 Gbps effective), it should have enough bandwidth.
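The effective-rate arithmetic behind that guess, for anyone following along (the x8 multiplier is the usual GDDR6 marketing convention; the x4 for DDR4 reproduces the post's own assumption):

```python
print(800 * 4)         # 3200 -- "DDR4-3200" read off an 800 MHz base figure
print(1900 * 8)        # 15200 -- i.e. 15.2 Gbps GDDR6 from a 1900 MHz figure
print(256 / 8 * 15.2)  # 486.4 GB/s on a 256-bit bus at that rate
```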
 
These numbers would make more sense if both had 8GB of system memory. That would leave Lockhart with 12GB of 384-bit GDDR5 (cost savings, and perhaps sharing the One X board layout). The 800 MHz could be the DDR4 speed (3200 MHz effective).

Anaconda would then have 16GB of 256-bit GDDR6, and at 1900 MHz (15.2 Gbps effective), it should have enough bandwidth.


Everyone is so fixated on it having unified GDDR6 memory. Why can't it have dedicated DDR4 for swapping out the OS, like the PS4 Pro? That's what allows games full access to the 8GB of GDDR5 RAM.

You don't have to worry about wasting parts of expensive GDDR6 if you have dedicated OS RAM. I imagine the PS5 will use the same trick.

16GB of GDDR6 means you're stuck with a 256-bit bus (not enough bandwidth for 12 Tflops of power) or a 512-bit bus (way more expensive than setting up two different types of RAM on the same board).

A 384-bit bus with 12GB of RAM dedicated to the running game is the best compromise. The current Xbox One X has 9GB available for games, so bumping that to 12 would be solid. 24GB of GDDR6 would be out of this world, but not really affordable in 2020.
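Here's the bandwidth math behind those three options (my own sketch, assuming 14 Gbps chips across the board):

```python
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 14))  # 448.0 GB/s -- 16GB on 256-bit: 5700 XT territory
print(bandwidth_gbs(512, 14))  # 896.0 GB/s -- 16GB on 512-bit: board-cost madness
print(bandwidth_gbs(384, 14))  # 672.0 GB/s -- 12GB on 384-bit: the compromise
```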
 
Everyone is so fixated on it having unified GDDR6 memory. Why can't it have dedicated DDR4 for swapping out the OS, like the PS4 Pro?

You don't have to worry about wasting parts of expensive GDDR6 if you have dedicated OS RAM. I imagine the PS5 will use the same trick.

Well, it might; remember, these are not necessarily full, detailed specs, so something like that could be left out. But if it doesn't, why not? I can see two good reasons:

1) Cost vs. gain. Adding more memory costs more, even when it is cheaper DDR4 memory. So does the additional complexity of the memory controller and system board. Not a ton of cost, but it is still money that could be spent elsewhere, or taken off the price. So unless they decide the gain is enough to justify said cost, it'll get left off.

2) Its SSD may be able to perform the same duty. If you look into it, the PS4 Pro uses the extra RAM more as swap space than as working RAM. It doesn't run the OS from it while you are using the OS; rather, it swaps out unused parts while you are playing a game. SSDs, particularly NVMe ones, are fast enough that they would work plenty well for a swap operation like that. So the console could very well do the same thing to the SSD, with no need for additional RAM.

Remember, with a device like a console it is all about tradeoffs. They can't just throw more hardware at a problem and not worry about it. They always want to offer the most for the least, which means evaluating everything on a cost/benefit basis. Really, the *best* way of doing things is non-unified memory, like a PC: give the CPU and GPU their own RAM so there's no contention for bandwidth. Problem is, that drastically increases the amount of RAM you need, and thus the cost.
 
16GB of GDDR6 means you're stuck with a 256-bit bus (not enough bandwidth for 12 Tflops of power) or a 512-bit bus (way more expensive than setting up two different types of RAM on the same board).

The 5700 XT runs ~10 Tflops using just 448 GB/s. Using faster, 2080 Super-like GDDR6 would give it about the same ratio with just a 256-bit bus.
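Roughly how that ratio works out (my arithmetic; 9.75 Tflops is the 5700 XT's rated figure, and 15.5 Gbps is the 2080 Super's memory speed):

```python
print(448 / 9.75)      # ~45.9 GB/s per Tflop on the 5700 XT
print(256 / 8 * 15.5)  # 496.0 GB/s from 15.5 Gbps GDDR6 on a 256-bit bus
print(496 / 12)        # ~41.3 GB/s per Tflop for a 12 Tflop part -- close
```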
 
Also, keep in mind that the cheapest 384-bit GDDR6 card is $1,000. Part of that is Nvidia being Nvidia, but it seems to be much more expensive to go wide with GDDR6 than with GDDR5. A 512-bit bus is basically impossible to do with GDDR6 for some reason, yet it was done with relative ease using GDDR5 on the R9 290.

With high enough clocks, 12 Tflops of Navi could be properly fed with a 256-bit GDDR6 setup. If 384-bit was too expensive for the 2080 Super, it is most likely too expensive for this console.
 
The 5700 XT runs ~10 Tflops using just 448 GB/s. Using faster, 2080 Super-like GDDR6 would give it about the same ratio with just a 256-bit bus.


That's powering a $750 graphics card.

Remember, we're trying for a system that costs $250 less than that 2080 Super, and that starts by using more mainstream 14 Gbps GDDR6. Even cards like the RX 5500/1660 Super are shipping with it, which tells you how much cheaper it is.

It will be cheaper to ship 12GB of GDDR6 at 14 Gbps on a 384-bit bus than to ship 16GB of ultra-expensive 15.5 Gbps RAM on a 256-bit bus. You also get ~35% more bandwidth from the change to 384-bit!
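Checking that claim (a sketch using peak theoretical numbers):

```python
wide_slow = 384 / 8 * 14      # 672.0 GB/s -- 12GB at 14 Gbps on 384-bit
narrow_fast = 256 / 8 * 15.5  # 496.0 GB/s -- 16GB at 15.5 Gbps on 256-bit
print(wide_slow / narrow_fast - 1)  # ~0.355, the ~35% advantage cited
```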

Remember how GDDR5X served an overpriced corner case? The 15.5 Gbps parts are going to serve the same purpose for GDDR6.

GDDR5X allowed Nvidia to sell the GTX 1080 at a premium, while raking in the profit on the incredibly cheap-to-build 1070. With such a huge difference in performance between the two new Xboxes, you can't pull the same corner-case trick on RAM speed.

Remember, the PS4 launched with GDDR5 at 5.5 Gbps, in an era when 7 Gbps was becoming common. They went with mainstream 256-bit over a higher-speed/slimmer bus, because you need multiple suppliers to make consoles in quantity, and as cheaply as possible. Wider and slower got the design win!

A 384-bit bus is already used successfully on the $500 Xbox One X, so I'm not seeing your argument that faster/slimmer is going to be cheaper.
 
I haven't had a console in a while, but I am seriously interested. People can rip on consoles all they want, but it is still a bigger market. Also, games like Mortal Kombat on PC have no chance against console.

Other than first-person shooters, console offers more playability without hardware limits. The Resident Evil 2 and 3 remakes, and possibly a new original Metal Gear, bring back memories of the PSX days.
 
Xbox Lockhart RAM performance seems suboptimal compared to Scarlett.


"View attachment 205915"



Hmm, every leak I've seen says Anaconda = 16GB total system RAM, that's OS + games, with games having access to 13GB for both the game and video.

These numbers do seem weak to me, but I guess checkerboard will be the name of the game for 4K.
 
By the time it comes out it's still going to be 2 generations behind PCs. Shame, I thought we might be getting something exciting in a console again like what Gen 7 brought us, but they're going to be just one step above a cell phone yet again.
 
Hmm, every leak I've seen says Anaconda = 16GB total system RAM, that's OS + games, with games having access to 13GB for both the game and video.

These numbers do seem weak to me, but I guess checkerboard will be the name of the game for 4K.

13GB of VRAM would be more than enough for the life of this console. Even 8GB seems to be enough for this amount of GPU power, as we have seen no issues with the RTX 2080 versus the GTX 1080 Ti. Demands on VRAM have not changed much in the last couple of years, and I doubt things will change fast going forward.
 
13GB of VRAM would be more than enough for the life of this console. Even 8GB seems to be enough for this amount of GPU power, as we have seen no issues with the RTX 2080 versus the GTX 1080 Ti. Demands on VRAM have not changed much in the last couple of years, and I doubt things will change fast going forward.
That 13GB is shared memory, not just video memory. And Microsoft says it will support 8K. Even reconstructed 8K is going to require more memory for the frame buffer. If you're using 8-10GB for video, that doesn't leave a whole lot for the game engine to work with outside of graphics. The result will be a continued lack of innovation in scale, with things like the number of active actors and the overall size of the game world. That is not a good sign. 24-32GB of total shared memory would make more sense for better future-proofing over your typical 5-7 year console cycle.
 
Also, games like Mortal Kombat on PC have no chance against console.

Bad example

I have MK9, MK10, and MK11 on PC. I host tournaments with them at LAN parties, and have for the last couple of years. I have the Xbox 360 Mortal Kombat arcade controller pads (which work perfectly on PC via USB), and a PC that gets a rock-solid 60 FPS at highest detail, which is more than any console can muster with its "we can try for mostly 60 FPS with lowered detail, or we can lock in at 30 FPS with medium-high detail" mentality.

Point is, on your MK game example, I certainly have a finer experience on my PC than any console can provide.

View attachment 206266 View attachment 206267 View attachment 206264 View attachment 206262 View attachment 206263
 
Hmm, every leak I've seen says Anaconda = 16GB total system RAM, that's OS + games, with games having access to 13GB for both the game and video.

These numbers do seem weak to me, but I guess checkerboard will be the name of the game for 4K.

And how do you keep a graphics card 30% faster than the RX 5700 XT fed on 256-bit RAM? Especially when you're adding higher bandwidth demands than traditional rendering (with RT acceleration).

See here, where the Radeon VII pulls away by 10% at 4K because it's got bandwidth to spare versus Navi. You're going to run into the same issue (only worse) on a mainstream console with the same 14 Gbps GDDR6 RAM at 256-bit, powering a 12 Tflops APU!

View attachment 206282

Someone want to tell us again how they could possibly make the 256-bit bus and 16GB of GDDR6 RAM work when you're feeding 30% more performance AND new RT AND the rest of the APU? 12GB of RAM on a 384-bit bus is a lot more feasible!

And no, I've already covered the exorbitant cost of corner-case 16 Gbps GDDR6 above; it's so expensive that it will never ship outside of a $750 8GB graphics card in 2020. For consoles you need mainstream, so it's the same 14 Gbps RAM as the RX 5700, or nothing!
 
And how do you keep a graphics card 30% faster than the RX 5700 XT fed on 256-bit RAM? Especially when you're adding higher bandwidth demands than traditional rendering (with RT acceleration).

See here, where the Radeon VII pulls away by 10% at 4K because it's got bandwidth to spare versus Navi. You're going to run into the same issue (only worse) on a mainstream console with the same 14 Gbps GDDR6 RAM at 256-bit, powering a 12 Tflops APU!

View attachment 206282

Someone want to tell us again how they could possibly make the 256-bit bus and 16GB of GDDR6 RAM work when you're feeding 30% more performance AND new RT AND the rest of the APU? 12GB of RAM on a 384-bit bus is a lot more feasible!

And no, I've already covered the exorbitant cost of corner-case 16 Gbps GDDR6 above; it's so expensive that it will never ship outside of a $750 8GB graphics card in 2020. For consoles you need mainstream, so it's the same 14 Gbps RAM as the RX 5700, or nothing!
You're way too hung up on memory bandwidth. The reason the Radeon VII is faster is that it has more CUs and a higher clock speed than the 5700 XT. Look at the 2070 Super: it has 8GB of GDDR6 on a 256-bit bus, the same as the 5700 XT, producing the same 448 GB/s of bandwidth. Same with the 2080. The 1080 Ti uses 11 Gbps GDDR5X on a 352-bit bus, producing 484 GB/s of bandwidth, and there were no indications that it was bandwidth-starved. The only video card I've ever had that was bandwidth-starved was the GTX Titan X, which demonstrated double-digit performance gains just from overclocking the memory.
 
Bad example

I have MK9, MK10, and MK11 on PC. I host tournaments with them at LAN parties, and have for the last couple of years. I have the Xbox 360 Mortal Kombat arcade controller pads (which work perfectly on PC via USB), and a PC that gets a rock-solid 60 FPS at highest detail, which is more than any console can muster with its "we can try for mostly 60 FPS with lowered detail, or we can lock in at 30 FPS with medium-high detail" mentality.

Point is, on your MK game example, I certainly have a finer experience on my PC than any console can provide.

View attachment 206266 View attachment 206267 View attachment 206264 View attachment 206262 View attachment 206263


1) You spent a ridiculous amount of money on a setup that is hardly practical. In fact, it's a counterpoint: you need thousands of dollars to achieve what a couple hundred does.
2) The Xbox 360 is now old tech, but it is still plug-and-play on any television setup you have.
3) FPS counting is a PC thing; most console games are seamless anyway, given the games are optimized for them. The only way you can really compare is by porting, and porting delivers poor performance.
 
1) You spent a ridiculous amount of money on a setup that is hardly practical. In fact, it's a counterpoint: you need thousands of dollars to achieve what a couple hundred does.
2) The Xbox 360 is now old tech, but it is still plug-and-play on any television setup you have.
3) FPS counting is a PC thing; most console games are seamless anyway, given the games are optimized for them. The only way you can really compare is by porting, and porting delivers poor performance.

1) I use this setup for console games and PC games. Therefore, price and practicality are immaterial to this discussion.

2) My PC is plug-and-play, just like my Xbox 360.

3) I still have the Xbox 360 console I had when I bought the two MK arcade controllers, and I can and have compared it to my PC. Trust me when I say Mortal Kombat loads faster, runs smoother, and looks better on PC at max settings with a 60 FPS lock than on console with a 30 FPS lock and lesser detail.

MK is the game you used as an example of a game that PC couldn't touch. Clearly your example was bad.

——

Oh, and twisting the blade a bit: I can still use the MK9 arcade controllers years and two MK games later on PC (9, 10, 11).

Do these Xbox 360 MK9 arcade controllers work on the subsequent Xbox One and Xbox One X for MK10 and 11?
No. (They're artificially limited to NOT work. The MK9 arcade sticks are USB controllers; there is no REAL reason why they couldn't work, except that the console makers want you to buy new hardware every gen.)

Will they work on Xbox 2020 Scarlett?
No. See above.

Will they still work on MK12's release in the next year or two on PC?
Yes, almost beyond a doubt.
 
And how do you keep a graphics card 30% faster than the RX 5700 XT fed on 256-bit RAM? Especially when you're adding higher bandwidth demands than traditional rendering (with RT acceleration).

See here, where the Radeon VII pulls away by 10% at 4K because it's got bandwidth to spare versus Navi. You're going to run into the same issue (only worse) on a mainstream console with the same 14 Gbps GDDR6 RAM at 256-bit, powering a 12 Tflops APU!

View attachment 206282

Someone want to tell us again how they could possibly make the 256-bit bus and 16GB of GDDR6 RAM work when you're feeding 30% more performance AND new RT AND the rest of the APU? 12GB of RAM on a 384-bit bus is a lot more feasible!

And no, I've already covered the exorbitant cost of corner-case 16 Gbps GDDR6 above; it's so expensive that it will never ship outside of a $750 8GB graphics card in 2020. For consoles you need mainstream, so it's the same 14 Gbps RAM as the RX 5700, or nothing!

The 2080 Super is definitely bandwidth-starved even with the expensive memory. Which raises the question: why didn't Nvidia use slower 384-bit (or 352-bit) memory instead of 256-bit on that card?
 
defaultluser
Well, that's also part of the reason why I said checkerboard would most likely be the name of the game, if true.

With checkerboard 4K, you are actually looking at roughly 1500p-equivalent requirements (for 2160p CB), since you only calculate half of the pixels every frame and do a smart temporal interpolation for the half not calculated.

The end effect can be pretty impressive, and realistic for the budget of these consoles. If you want to see an amazing checkerboard rendering example, look at Horizon Zero Dawn on the PS4 Pro at 4K; in that case they are actually using 1800p-equivalent, if I recall correctly. There's a Digital Foundry interview about the way Guerrilla Games achieved this.

Since most people are used to streamed 4K, I would venture that proper CB can surpass their expectations.
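The pixel math behind the "roughly 1500p" figure (a sketch; assumes a 16:9 frame and exactly half the pixels shaded per frame):

```python
full = 3840 * 2160   # 8,294,400 pixels in a native 4K frame
shaded = full / 2    # 4,147,200 pixels shaded per frame with 2160p CB
height = (shaded / (16 / 9)) ** 0.5  # height of a 16:9 frame with that many pixels
print(round(height))  # ~1527 -- hence "roughly 1500p" of real work per frame
```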
 
You're way too hung up on memory bandwidth. The reason the Radeon VII is faster is that it has more CUs and a higher clock speed than the 5700 XT. Look at the 2070 Super: it has 8GB of GDDR6 on a 256-bit bus, the same as the 5700 XT, producing the same 448 GB/s of bandwidth. Same with the 2080. The 1080 Ti uses 11 Gbps GDDR5X on a 352-bit bus, producing 484 GB/s of bandwidth, and there were no indications that it was bandwidth-starved. The only video card I've ever had that was bandwidth-starved was the GTX Titan X, which demonstrated double-digit performance gains just from overclocking the memory.

This article illustrates how bandwidth-starved the 2080 Super is:
https://babeltechreviews.com/the-rt...wdown-highlighting-the-architectural-changes/

Basically, the extra cores of the 2080 Super are useless over the 2080. There is good indication that the 2080 would be every bit as fast as the 2080 Super if it had the faster memory.
 
That 13GB is shared memory, not just video memory. And Microsoft says it will support 8K. Even reconstructed 8K is going to require more memory for the frame buffer. If you're using 8-10GB for video, that doesn't leave a whole lot for the game engine to work with outside of graphics. The result will be a continued lack of innovation in scale, with things like the number of active actors and the overall size of the game world. That is not a good sign. 24-32GB of total shared memory would make more sense for better future-proofing over your typical 5-7 year console cycle.

13GB would be dedicated to video, with 3GB to the system, and that is the worst-case scenario. It is more than enough even after 5+ years.

8K support doesn't mean 8K gaming, and most assumed as much. It is saying it has support for 8K video.
 
13GB would be dedicated to video, with 3GB to the system, and that is the worst-case scenario. It is more than enough even after 5+ years.

8K support doesn't mean 8K gaming, and most assumed as much. It is saying it has support for 8K video.

No, the 13GB are for the game; again, this isn't video only. Haven't you seen in your task manager how any program requires system RAM?

3GB are set aside for the OS alone.
If it were 24GB, sure, you could say 13GB for VRAM, but we are talking 16GB TOTAL.

Edit:
Let's say the game code occupies 5GB of RAM; then your VRAM will be limited to 8GB in that case. This limits everything you can do.
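That worst case in numbers (the 5GB figure is the hypothetical from the edit above):

```python
total = 16           # GB, leaked Anaconda total
os_reserved = 3      # GB set aside for the OS
game_pool = total - os_reserved  # 13 GB left for the whole game
cpu_side = 5         # GB, hypothetical engine/game-code footprint
print(game_pool - cpu_side)      # 8 GB remaining for graphics
```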
 
No, the 13GB are for the game; again, this isn't video only. Haven't you seen in your task manager how any program requires system RAM?

3GB are set aside for the OS alone.
If it were 24GB, sure, you could say 13GB for VRAM, but we are talking 16GB TOTAL.

Edit:
Let's say the game code occupies 5GB of RAM; then your VRAM will be limited to 8GB in that case. This limits everything you can do.
This is true, and also don't forget that these consoles are using not just shared memory, but unified memory.
With unified memory, there are efficiencies that shared memory or conventional separate RAM and VRAM would not have.

You are correct, though, that the resources of the 16GB of GDDR6 are still shared between the CPU and GPU in the APU, with ~3GB reserved for the OS and the other 13GB for the CPU/software and GPU/graphics.
 
By the time it comes out it's still going to be 2 generations behind PCs. Shame, I thought we might be getting something exciting in a console again like what Gen 7 brought us, but they're going to be just one step above a cell phone yet again.

You make it sound as if PCs are moving at a blistering pace these days. The mainstream offerings in the $150-$200 range for 2020 are not going to really blow away what was offered 3 years prior with the RX 480.

The only thing stopping the One X from blowing away most PC builds was a halfway decent CPU. Now that Scarlett will have one, and an even more powerful GPU, I have to chuckle at your rather elitist comment.
 
In a blog post by Xbox head Phil Spencer, the company revealed a few more details. Most notably, Microsoft confirmed that the Series X will feature up to 12 teraflops of GPU performance.

The Xbox Series X will feature an NVMe SSD, and a new "quick resume" capability will let you instantly jump back into multiple titles from where you left off. That's something you could do with a single game on the Xbox One, but it didn't always work reliably and required using the "Instant On" power mode, which left the console in standby instead of completely powering it off. Additionally, Microsoft is implementing "Dynamic Latency Input" (DLI) in the Xbox wireless controller to reduce the delay between the buttons you're pressing and what appears on the screen.
 
It will definitely be a powerhouse, both hardware-wise and software-wise. Phil estimated that 10TF of RDNA was already double the performance of the One X's 6TF of GCN. Now you are looking at close to 2.5x the GPU power of the One X without any of the CPU bottlenecks. It looks like the new console will be using full RDNA 2.0 as well.

New software features such as VRS will help performance as well. 4K 60 Hz and HD 120 Hz seem very realistic at this point.
 
Bad example

I have MK9, MK10, and MK11 on PC. I host tournaments with them at LAN parties, and have for the last couple of years. I have the Xbox 360 Mortal Kombat arcade controller pads (which work perfectly on PC via USB), and a PC that gets a rock-solid 60 FPS at highest detail, which is more than any console can muster with its "we can try for mostly 60 FPS with lowered detail, or we can lock in at 30 FPS with medium-high detail" mentality.

Point is, on your MK game example, I certainly have a finer experience on my PC than any console can provide.

View attachment 206266 View attachment 206267 View attachment 206264 View attachment 206262 View attachment 206263

Um do you need a roommate? lol
 
You're way too hung up on memory bandwidth. The reason the Radeon VII is faster is that it has more CUs and a higher clock speed than the 5700 XT. Look at the 2070 Super: it has 8GB of GDDR6 on a 256-bit bus, the same as the 5700 XT, producing the same 448 GB/s of bandwidth. Same with the 2080. The 1080 Ti uses 11 Gbps GDDR5X on a 352-bit bus, producing 484 GB/s of bandwidth, and there were no indications that it was bandwidth-starved. The only video card I've ever had that was bandwidth-starved was the GTX Titan X, which demonstrated double-digit performance gains just from overclocking the memory.

Navi is bandwidth-starved. At 1080p it is equal to the 1080 Ti and VII. The VII and 1080 Ti pull away as resolution increases. Navi also gains the most performance from memory overclocks; it's just super unfortunate the memory tends to top out at ~925 MHz.
 
With 14 Gbps memory on a 256-bit bus, it would be very close to 1.5x a 5600 XT with 12 Gbps memory, in both Tflops and bandwidth. Using the leaked 15.2 Gbps 256-bit memory, it would be close to 50% greater than the 14 Gbps version of the 5600 XT.

Still slightly bandwidth-starved, but not terribly so. Performance on paper is again around that of the 2080 Super. This of course is ignoring console optimization, RDNA 2.0 enhancements, and any software tricks like VRS.

https://www.techpowerup.com/264183/...na-2-h-w-accelerated-raytracing?cp=2#comments
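The scaling behind that comparison (my sketch; the 5600 XT is 192-bit, and the 1.5x factor is the assumed Tflops gap):

```python
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gbs(192, 12) * 1.5, bandwidth_gbs(256, 14))    # 432.0 vs 448.0 GB/s
print(bandwidth_gbs(192, 14) * 1.5, bandwidth_gbs(256, 15.2))  # 504.0 vs 486.4 GB/s
```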
 
By the time it comes out it's still going to be 2 generations behind PCs. Shame, I thought we might be getting something exciting in a console again like what Gen 7 brought us, but they're going to be just one step above a cell phone yet again.

With checkerboard 4K, you are actually looking at roughly 1500p-equivalent requirements (for 2160p CB), since you only calculate half of the pixels every frame and do a smart temporal interpolation for the half not calculated.

With VRS in play, it looks like the Xbox Series X is targeting 2080 Ti territory.
 
With VRS in play, it looks like the Xbox Series X is targeting 2080 Ti territory.


2080 Super is the developers' quote; you forgot that the 2080s have VRS too.

I've never been gladder to be wrong. I thought they meant 2x Xbox One X performance, but they actually meant flops, and the PS5 is within 10% of this. The actual image quality will most likely be a good generational jump, if only due to the much-increased efficiency of RDNA + VRS versus the GCN 1.0 architecture.



Edit to add: in a current 3DMark test for VRS, even Intel saw a 40% uplift from the technique with a simple three-tiered approach. The easiest way to think of VRS is "mipmapping for shaders": you divide the image depending on the Z axis. The example given was that the nearest plane had shaders calculated at 1x1, the middle plane at 2x2, and the background at 4x4 blocks. There's a small IQ drop for a sizeable performance boost, working smarter instead of harder.
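A toy version of that three-tier example (the equal-thirds split of the frame is my assumption, just to show the shape of the savings):

```python
# Fraction of full-rate shader invocations left after coarse shading per tier.
tiers = {
    (1, 1): 1 / 3,  # near plane: one invocation per pixel
    (2, 2): 1 / 3,  # middle plane: one invocation per 2x2 block
    (4, 4): 1 / 3,  # background: one invocation per 4x4 block
}
work = sum(frac / (w * h) for (w, h), frac in tiers.items())
print(round(work, 4))      # 0.4375 -- well under half the shading work
print(round(1 / work, 2))  # 2.29x ceiling if fully shader-bound; real
                           # uplifts (like the 40% cited) are smaller
```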
 