New Avatar PC Game Has a Hidden 'Unobtainium' Graphics Setting Targeting Future GPUs

Same, but just to stop morons from complaining about optimization, they should have obvious names, with a description that clearly states: "These settings are not intended to be playable at launch; they are included for the benefit of future players."

Call them Future+, Future++ and Future+++ or something.

Of course, one might argue they could just launch without them present and patch the game when the time comes, but we all know major patches are unlikely more than a year or two after launch.

Also, presets are just presets. One could just leave the custom settings for those who really want to crank things up down the line, and not mess with presets at all.
Morons still complain when a game comes out requiring AVX instructions and clearly states it in the requirements, despite both Intel and AMD having supported them for at least 12 years at this point. Some people absolutely refuse to upgrade their CPU.
 
Morons still complain when a game comes out requiring AVX instructions and clearly states it in the requirements, despite both Intel and AMD having supported them for at least 12 years at this point. Some people absolutely refuse to upgrade their CPU.

The industry needs to stop enabling this shit. If you need a PC for just email and a web browser, that is fine, use whatever old thing you have around. It can even be a good experience. But if you want to play games, you should have to have a mid-range or higher machine from the last 5 or so years.

The cutoff should be something like a Ryzen 5 2600X with a Radeon RX 580, or maybe a Core i5-9600 with a GTX 1060.

These should be the requirements for "very low" settings in 2023. Nothing lower than this should work at all in any game. Games should just refuse to start and pop up a system dialog box telling you to upgrade.

It's frustrating that people who refuse to upgrade are holding the rest of us back. I blame the "we need every last player to make ends meet on our shitty free to play games with loot boxes and microtransactions" business model for enabling this crap.

Upgrade to at least mid grade hardware once every 5 years, or no soup for you. That should be the industry wide policy.
 
The industry needs to stop enabling this shit. If you need a PC for just email and a web browser, that is fine, use whatever old thing you have around. It can even be a good experience. But if you want to play games, you should have to have a mid-range or higher machine from the last 5 or so years.

The cutoff should be something like a Ryzen 5 2600X with a Radeon RX 580, or maybe a Core i5-9600 with a GTX 1060.

These should be the requirements for "very low" settings in 2023. Nothing lower than this should work at all in any game. Games should just refuse to start and pop up a system dialog box telling you to upgrade.

It's frustrating that people who refuse to upgrade are holding the rest of us back. I blame the "we need every last player to make ends meet on our shitty free to play games with loot boxes and microtransactions" business model for enabling this crap.

Upgrade to at least mid grade hardware once every 5 years, or no soup for you. That should be the industry wide policy.
I don't think it should be that extreme. I think developers just need to ignore the people who complain while admitting they don't meet the published minimum system requirements, on the grounds that "other games play fine" on their PC.
 
What Armenius said, 100%. Plus, if you have to ask "how will this play on my PC?"... just don't bother asking, because it probably won't do well. You either KNOW you have the power or you just don't have it.
 
Given the length of development for new titles, a game that started development on 1/1/24 should probably have a 4070 Ti as its minimum requirement. Unfortunately, what really happens is that almost no one writes their own engine anymore, so they pull in a 5-year-old engine to start with and the software just stagnates.
 
EuroGamer interview with the developer

Digital Foundry's Alex Battaglia had a chance to interview two key figures in its technical development: Nikolay Stefanov, the game's technical director, and Oleksandr Koshlo, the render architect of the Snowdrop engine.

https://www.eurogamer.net/digitalfo...and-snowdrop-the-big-developer-tech-interview

Another thing I am proud of is the PC benchmark. It has very, very detailed graphs that I think you'll find interesting. We have profiling tags in our game that tell us how much time the ray tracing pass takes on the GPU, how long the G-buffer pass took, how long the post-processing pass took, etc. And there is a detail page where you'll be able to see all these things individually as part of the benchmark. We also support automation of the benchmark, so you can launch it through the command line and then it will give you all of these details in a CSV file. The benchmark will also go into CPU usage. So it will tell you how much time it took us to process agents, collision detection, etc, etc. So if you'd like stats and graphs, I think this one is going to be for you.
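If the automated run really does dump per-pass timings to a CSV, post-processing it should be trivial. Rough sketch below of averaging each column across a run; the layout it assumes (a header row, a frame index in the first column, per-pass milliseconds after that) is just my guess, not the game's documented export format:

Code:
// bench_summary.cpp -- average each numeric column of a benchmark CSV.
// Assumption (hypothetical, not documented by the game): first row is a header,
// first column is a frame index, remaining columns are per-pass timings in ms.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: bench_summary <benchmark.csv>\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::string line;
    std::getline(in, line);                    // skip the header row

    std::vector<double> sums;                  // running sum per column
    std::size_t frames = 0;
    while (std::getline(in, line)) {
        std::stringstream row(line);
        std::string cell;
        std::size_t col = 0;
        while (std::getline(row, cell, ',')) {
            if (col >= sums.size()) sums.push_back(0.0);
            sums[col] += std::stod(cell);      // assumes every data cell is numeric
            ++col;
        }
        ++frames;
    }
    for (std::size_t c = 1; c < sums.size() && frames > 0; ++c)
        std::cout << "column " << c << " average: " << sums[c] / frames << " ms\n";
    return 0;
}

From there it's one step in a spreadsheet or a plotting tool to turn the per-frame columns into the same kind of graphs the in-game benchmark shows.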
 
These should be the requirements for "very low" settings in 2023. Nothing lower than this should work at all in any game. Games should just refuse to start and pop up a system dialog box telling you to upgrade.
I imagine that's not what you mean as a general rule.

I think if your budget and the market you target make it possible, and if you have big ambitions, say Alan Wake 2, it is OK to have a high minimum; a lot of PS5/Xbox Series X-only games will have those.

If you want to target Malaysia, India and worldwide markets and do Fortnite/PUBG/Among Us type numbers, with game ambitions that match that target hardware, that's fine as well.

Given the length of development for new titles, a game that started development on 1/1/24 should probably have a 4070 Ti as its minimum requirement. Unfortunately, what really happens is that almost no one writes their own engine anymore, so they pull in a 5-year-old engine to start with and the software just stagnates.
If you want the game to run on a PS5 (i.e. you think you will release it in time for a PS5 release to make sense), you may as well make a game that can run on a PS5 (a 2070 Super or 6700 minimum, maybe a touch higher, especially if you do not have a Series S target to run on). But if it runs on a PS5, it is hard to imagine it not running perfectly fine (i.e. at least console fine, 35 fps at 1080p high details) on a 6700 XT/3060 class of card.
 
The industry needs to stop enabling this shit. If you need a PC for just email and a web browser, that is fine, use whatever old thing you have around. It can even be a good experience. But if you want to play games, you should have to have a mid-range or higher machine from the last 5 or so years.
Agreed, and even the now-old AMD Jaguar CPU, a low-power/embedded CPU from 2013 used in the PS4, thin clients, etc., has AVX instructions.
While Sandy Bridge and older CPUs can carry most software and other games just fine, missing CPU instructions that modern software and operating systems require for essential functions cannot be ignored.

Even modern iterations of pfSense require AES-NI instructions.
Many individuals complained about this during the changeover a few years ago, and every x86-64 CPU since 2011 has had these instructions, so if one doesn't have them then it is certainly time to upgrade.
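For anyone wondering how that kind of gate even works: a launcher can query the CPU's feature flags before loading anything heavy. Rough sketch below, assuming a GCC/Clang toolchain on x86; this is purely illustrative, not how pfSense or any particular game actually implements it:

Code:
// Startup check for the instruction sets discussed above (AVX, AES-NI).
// __builtin_cpu_supports() is a GCC/Clang builtin; MSVC would use __cpuid instead.
#include <cstdio>
#include <cstdlib>

int main() {
    __builtin_cpu_init();  // initialize CPU feature detection (harmless to call here)

    if (!__builtin_cpu_supports("avx") || !__builtin_cpu_supports("aes")) {
        // A real launcher would show a native message box here rather than
        // printing to a console nobody will see.
        std::fprintf(stderr,
                     "This software requires a CPU with AVX and AES-NI support "
                     "(roughly 2011 or newer). Please upgrade to continue.\n");
        return EXIT_FAILURE;
    }

    std::puts("CPU feature check passed, continuing to load...");
    return EXIT_SUCCESS;
}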
 
Agreed, and even the now-old AMD Jaguar CPU, a low-power/embedded CPU from 2013 used in the PS4, thin clients, etc., has AVX instructions.
While Sandy Bridge and older CPUs can carry most software and other games just fine, missing CPU instructions that modern software and operating systems require for essential functions cannot be ignored.

Even modern iterations of pfSense require AES-NI instructions.
Many individuals complained about this during the changeover a few years ago, and every x86-64 CPU since 2011 has had these instructions, so if one doesn't have them then it is certainly time to upgrade.

To be fair, I'm all in favor of staying current, but I also question the value of AVX, especially considering how much it tends to drop the clocks (or blast us with extra heat).

For 30 years we have been hearing that it is all about reduced instruction sets; heck, all modern x86 CPUs do is decode instructions into RISC-like micro-ops for processing, yet they keep adding additional instructions.
 
To be fair, I'm all in favor of staying current, but I also question the value of AVX, especially considering how much it tends to drop the clocks (or blast us with extra heat).

For 30 years we have been hearing that it is all about reduced instruction sets; heck, all modern x86 CPUs do is decode instructions into RISC-like micro-ops for processing, yet they keep adding additional instructions.

Meanwhile ARM be like FJCVTZS
 
yet they keep adding additional instructions.
Apparently SIMD can be so much faster that it's felt to be worth it. I haven't looked in years, but I recall seeing some articles doing timing analysis years ago that suggested the gains could be pretty big.
 
Apparently SIMD can be so much faster that it's felt to be worth it. I haven't looked in years, but I recall seeing some articles doing timing analysis years ago that suggested the gains could be pretty big.

Just imagine how big the improvements could be by getting rid of the instruction decode overhead instead, and moving the work done by these special purpose instructions to a competent compiler? :p
 
Just imagine how big the improvements could be by getting rid of the instruction decode overhead instead, and moving the work done by these special purpose instructions to a competent compiler? :p

Is this not how we got VLIW like Itanium?

Just leave it to the compilers they said. It'll be good they said.

It was not good
 
Just imagine how big the improvements could be by getting rid of the instruction decode overhead instead, and moving the work done by these special purpose instructions to a competent compiler? :p
We've tried that, for a long time; it doesn't work. You can just build specialized silicon to be faster than more general-purpose silicon, which is why we keep doing it. If you think you know how, then by all means please do; it would be amazingly useful. But people have been trying ever since RISC became a thing CS professors couldn't shut up about, and to this day we build specialized silicon to make shit faster. Some of it, like AVX, is still general-use but specialized in how it works, like doing large vectors. Some is completely specialized, like H.264 or AES hardware that does only that one algorithm but does it real fast with a small amount of silicon.
 
Just imagine how big the improvements could be by getting rid of the instruction decode overhead instead, and moving the work done by these special purpose instructions to a competent compiler? :p
Even the ARM guys seem to disagree. I dunno.
 
Apparently SIMD can be so much faster that it's felt to be worth it. I haven't looked in years, but I recall seeing some articles doing timing analysis years ago that suggested the gains could be pretty big.

Literally a performance multiplier in the most ideal scenarios. I wouldn't be surprised if certain things are 5-6x the speed.

You're really contending with whether the data is in a friendly format: contiguous data like pixels in an image is great; very scattered data is not. You'll waste time shuffling it around, and you're trashing your cache while doing it.

So you can't just double down and vectorize absolutely everything, like "wow, I'm going to make everything X times faster." It just doesn't work or make sense for a lot of cases.

Modern compilers try to do auto-vectorization, but they're not spectacular at it, and you're almost certainly hand-writing the routine if you're chasing performance.
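To make the "contiguous data is the easy case" point concrete, here's the kind of loop that vectorizes well: the same multiply-add done one float at a time versus eight at a time with AVX intrinsics. Just a sketch (needs an AVX-capable CPU and something like -mavx), not code from any engine:

Code:
// Scalar vs. AVX versions of out[i] = a[i] * 2 + b[i] over contiguous arrays.
#include <immintrin.h>   // AVX intrinsics
#include <cstddef>

// Scalar reference: one element per iteration.
void scale_add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] * 2.0f + b[i];
}

// AVX version: eight contiguous floats per iteration.
void scale_add_avx(const float* a, const float* b, float* out, std::size_t n) {
    const __m256 two = _mm256_set1_ps(2.0f);
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);   // contiguous loads: the friendly case
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(_mm256_mul_ps(va, two), vb));
    }
    for (; i < n; ++i)                        // scalar tail for the leftovers
        out[i] = a[i] * 2.0f + b[i];
}

If the floats were instead scattered across big structs, you'd spend the win on gathers and shuffles and trash the cache doing it, which is exactly the point above.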
 
To be fair, I'm all in favor of staying current, but I also question the value of AVX, especially considering how much it tends to drop the clocks (or blast us with extra heat).
AVX-512 does this, but I haven't heard of AVX or AVX2 doing so.
These are getting to be older instructions now, and certain functions in software do need them, just like MMX, SSE, SSE2, etc., which software required in the 1990s and 2000s, as I remember well.

For 30 years we have been hearing that it is all about reduced instruction sets; heck, all modern x86 CPUs do is decode instructions into RISC-like micro-ops for processing, yet they keep adding additional instructions.
Every RISC CPU in the last 20 years has been adding more and more instructions as well, as needed for each platform and use case.
Today's RISC CPUs have far more instructions than yesteryear's CISC CPUs, and there is more to RISC than just "reduced instructions".
 
Let's move on to what these unobtanium settings actually do in practice. The biggest visual difference comes primarily in resolution increases. For example, when fog volumetric lighting is set to max, we see far greater detail in the lighting and shadowing on intermediate fog volumes in near to mid-field of the camera, greatly enhancing realism. It's a similar story with cloud quality; at max, the amount of noise that can potentially occur in the clouds themselves is greatly reduced, although admittedly it already looks nice at high.

With shadow maps from the sun, the increased resolution is keenly seen with it set to max, where I thought the previous highest setting really didn't hit the heights it ought to have done. You can see the same with indoor spotlight shadows as well, with max reducing the aliasing that is seen on the lower settings.

The least important Unobtanium setting is the one for transparency, which subtly adds a few more objects into the game's cubemap reflections which partially update in real-time. These differences are scarcely visible even in side-by-side comparisons, and once again, I'd love to see transparency RT reflections instead of cubemaps in the future here.

For the RT settings, the max setting for diffuse lighting primarily upgrades the resolution of the effect. On the medium setting, for example, it looks like we are seeing both axes of the RTGI (ray-traced global illumination) effect being halved in resolution, which leads to fuzzy edges and weirdness occurring in the GI itself on top of a greyer, less defined look. At high, it almost looks like only one of the axes is half resolution, which leads to a lot of aliasing on vertical edges on top of a less defined look. The max setting looks to use the native input resolution here, making for pristine GI.

For specular GI or reflections, the max setting does not seem to increase resolution beyond that which is already offered by the ultra mode - in my testing, the amount of specular aliasing as we see here seems to be roughly the same as the ultra setting. When looking at very mirror-like reflections out of screen space, they resolve with identical levels of clarity, while very high is quite obviously lower resolution by comparison. However, the max setting for reflections does add skinned objects to reflections, which means things like soldiers, mechs, animals, Navi and more actually show up in reflections when not in screen space. This makes for fewer screen space errors in general and is a neat bonus for higher-end machines.

https://www.eurogamer.net/digitalfoundry-2023-avatar-frontiers-of-pandora-optimised-settings
 
A question: will the 7600 XT 16 GB have better textures than the 6700 XT 12 GB? 🤔



Our performance results show that there is no significant performance difference between RTX 4060 Ti 8 GB and 16 GB, which means that 8 GB of VRAM is perfectly fine, even at 4K. I've tested several cards with 8 GB and there is no stuttering or similar, just some objects coming in from a distance will have a little bit more texture pop-in, which is an acceptable compromise in my opinion.

https://www.techpowerup.com/review/...we measured over 15,once that is getting full.


My recommendation:

Grab the 6700xt/6750xt/6800 before they run out of stock !!
 