Crytek Releases Hardware-Agnostic Raytracing Benchmark "Neon Noir"

3440 x 1440 gets me 3,844..... so just under half of what I get at 1920 x 1080.

It's a nice looking benchmark if nothing else. Got some nice screen grabs out of it for wallpapers. ;)

Since 1080p is about 2 megapixels and 3440x1440 is about 5, that sounds about right. Also sounds like it scaled nicely.
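For anyone who wants to check that back-of-the-envelope math, here's a quick sketch. It assumes the score roughly tracks pixel throughput, which is only approximately true, so treat the predicted ratio as a ballpark figure rather than an exact expectation:

```python
# Quick sanity check on the "2 megapixels vs 5 megapixels" comparison above.
px_1080p     = 1920 * 1080    # ~2.07 MP
px_ultrawide = 3440 * 1440    # ~4.95 MP

ratio = px_ultrawide / px_1080p
print(f"{px_1080p / 1e6:.2f} MP vs {px_ultrawide / 1e6:.2f} MP ({ratio:.2f}x the pixels)")
# If the score scaled purely with pixel rate, the ultrawide score would be
# about 42% of the 1080p score; the reported "just under half" is in that ballpark.
print(f"Pixel-rate prediction: ultrawide score ~{1 / ratio:.0%} of the 1080p score")
```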

I have to try this benchmark.

Someone earlier said the reflections looked blurry. Personally, I think the RTX reflections in games so far are too perfect. I'd rather have more FPS and blurrier reflections (probably closer to real life...).
 
I mentioned blurriness... but if you look at my card, I'm running a GTX 760, lol
 
Holy carp, it's 1GB for the engine and then the freaking bench is another 4.4GB?

Anyways...

1080p
GameLauncher_2019_11_16_19_21_54_060.jpg
5760x1080p
GameLauncher_2019_11_16_19_27_33_006.jpg
 
I finally decided to run the benchmark and damn, this thing looks great. Too lazy to post screenshots, but I ran this with a 2080 Super and a 9900K.

1080p Ultra: 12474
1440p Ultra: 7744
 
Very impressive for a software-only solution.

The part I find interesting is that Crytek may be able to use RT hardware to better 'inform' their software / shader solution, improving visuals without as heavy a performance hit as using RT hardware directly.
 
1920x1200 (CPU is currently at 4.8GHz)
Edit: My bad, that 7221 I posted must not have been on Ultra, but something is amiss with this benchmark:

CPU @ 4.8GHz:
NeonNoir1920x1200 4.8Ghz.png

CPU @ 5GHz (and I'm 100% sure it's not throttling):
NeonNoir1920x1200 5Ghz.png

I tried this 3 times and 4.8GHz was higher every time. WTF? :confused:
 
RTX 2070, 1440p. I realized I had Sharpening and G-Sync on, so turning them off netted a small gain.

If it were a game, these results would be totally fine. FPS isn't very consistent though; the lowest was 47 and the highest touched the 90s.

wi Screenshot 2019.11.18 - 09.31.48.67.png
 

It's a cool demo and all, but like all benchmarks I need to see it in an actual title for it to have context. Just looking through the posts, I see a score of 6000 that yields 60fps and another just shy of 10,000 that again seems to yield 60fps at the same resolution. So I look forward to somebody releasing a game using this, because it would probably be beautiful. Sadly, I don't know of any titles on the horizon that are using CryEngine.
 
The 60fps you see on that screen has nothing to do with the score.
 
For the hell of it I re-ran this at 1080p for comparison's sake (last time was 1200p).

My results were very inconsistent before, so I cooked my GPU with FurMark until the temp stabilized at 76C and the GPU clocks stabilized at 2037MHz core, 5454MHz memory (not too shabby for an Armor OC with the stock cooler).

I then opened the benchmark (which, oddly, bumped the core clock up to 2050 ???), set it to 1080p, Ultra, fullscreen, then alt-tabbed to FurMark and closed it, and immediately went back and clicked start benchmark. It was a lot more consistent this way.

nior run 3.png

That was on the 3rd run.

1st: 7256
2nd: 7054 (dunno what happened there)
4th: 7263

So... yay? lol

P.S. I'm running 1/2 the memory now too in single channel - RMA happening soon.
 
Just checking in: Is there a version which doesn't require the installation of a wholly unrelated app in order to launch the app that launches the app? Or should I continue to ignore this?
 
You can launch it without the launcher, but you need the launcher to download it.
 
Finally remembered to run this.

9277 on the default settings. While running, it was capped at 100 FPS, my LCD's overclocked refresh rate. Clicking settings showed this to be 1600x900, Raytracing Ultra, fullscreen no. My CPU is OC'd to 4.0GHz (it wasn't the best overclocker).
upload_2019-12-29_13-57-23.png

upload_2019-12-29_13-59-13.png


1920x1080, Raytracing Ultra: FPS still at 100 throughout. 9292
upload_2019-12-29_14-4-48.png


3440x1440, Raytracing Ultra. FPS was ~60 to highs in the upper 90s: 7475

upload_2019-12-29_14-9-10.png


Lastly, I did 3440x1440, Raytracing Ultra, Fullscreen: Yes, and got 7849. The lowest FPS I noticed was when the shell casings were being scanned, where it dropped to 66 FPS. On a second run to grab a screenshot of that point in the bench it only dropped to 67, and the score was slightly higher at 7856:
upload_2019-12-29_14-18-10.png


Pretty good, I think, for my older CPU, which is technically a 5th-gen part (Broadwell) even though it's branded i7-6xxx. I was getting about 250 lower than a Ryzen 3950X with the same GPU, or 2.8% less. No need to upgrade anytime soon (good in that I don't need to spend the money, bad in that I'll be waiting to do a new build, because that's the fun part).
 
Very impressive for a software-only solution.

The part I find interesting is that Crytek may be able to use RT hardware to better 'inform' their software / shader solution, improving visuals without as heavy a performance hit as using RT hardware directly.
After finally trying out Control, I was thinking this exact sort of thing. The reflections in Control seem fairly cheap to do on RT hardware. The real big hit is the RT-based ambient occlusion, which does look waaaay better than any other ambient occlusion I have seen. But indeed, it's a big hit on the hardware.

So yeah, there's probably a near future where game engines split the RT load between RT hardware and traditional shaders to find a balance.
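Purely as a toy illustration of that split-the-load idea (the effect names, costs, and budget below are made up, and this has nothing to do with how CryEngine actually schedules its RT work), a scheduler could keep hardware RT for whatever fits a per-frame budget and fall back to traditional shader techniques for the rest:

```python
# Toy sketch of splitting RT effects between RT hardware and shader fallbacks.
from dataclasses import dataclass

@dataclass
class RtEffect:
    name: str
    hw_cost_ms: float   # rough per-frame cost if traced on RT hardware (made-up)
    fallback: str       # traditional shader technique to use instead

def plan_frame(effects, budget_ms):
    """Greedy split: cheapest hardware effects first, the rest use shader fallbacks."""
    plan, spent = {}, 0.0
    for fx in sorted(effects, key=lambda e: e.hw_cost_ms):
        if spent + fx.hw_cost_ms <= budget_ms:
            plan[fx.name] = "hardware RT"
            spent += fx.hw_cost_ms
        else:
            plan[fx.name] = f"shader fallback ({fx.fallback})"
    return plan, spent

effects = [
    RtEffect("reflections", 1.5, "SSR + cubemaps"),   # cheap, per the Control anecdote
    RtEffect("ambient occlusion", 4.0, "GTAO/SSAO"),  # the big hit mentioned above
    RtEffect("shadows", 2.5, "shadow maps"),
]
plan, spent = plan_frame(effects, budget_ms=4.0)
print(plan, f"- {spent:.1f} ms of RT hardware time used")
```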

Also, DX12 needs a huge overhaul, as that alone is a big performance issue.
 
Also, DX12 needs a huge overhaul, as that alone is a big performance issue.

Game engines using DX12 today mostly need overhauls. Nothing wrong with DX12 so much as most of the games using it are running on engines that were designed with DX10 / DX11 in mind from the ground up.

DX12 is similar only in name. This new round of consoles presents an opportunity for the needed paradigm shift in development effort to really take place, as DX12 and Vulkan both arrived after the major engine houses had already 'ported' their engines over to the last console generation.

At least with respect to the Xbox, more parity should be kept between the console and other codebases, including mobile and Linux, both of which should be fairly straightforward given how similar DX12 and Vulkan are.

So yeah, work does need to be done, but a lot of it will be done as a 'matter of course', I think ;)
 
Ran it on my system today for those who care.

Details:

Ryzen 3700X, PBO enabled, stock cooler
16GB DDR4, 3600MHz CAS 16
GeForce GTX 1070 Ti @ 1974 MHz core/8750 MHz VRAM

Ultra ray tracing set.

1080p run scored 6583
1440p run scored 3961
 
Strange, RDNA seems to be seriously underperforming on this bench. The 5700 XT should be up near the 2070 Super, not battling the OG 2060.

Vega and Pascal are struggling too. Long live Turing?

Long live Ampere 😜
 
Not looking great for AMD there. The 5700 XT falls pretty far behind the 1080 Ti even, and the Radeon VII is way down. What happened?
 
Does the bench take advantage of hardware RT if available? Would make more sense if so.
 
The whole point of this software is that it can run an assed-up version of software RT to approximate the effect for the poors.

Seems AMD is behind in RT approximation as well... ;)
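For anyone unsure what "software RT" means in practice: it's just ray intersection done in ordinary shader/compute code instead of on dedicated RT cores. A toy example of the idea (a single reflection ray against one sphere, nothing like CryEngine's actual tracing):

```python
# Minimal "software" ray tracing: reflect a ray off a floor and intersect a sphere.
import math

def reflect(d, n):
    """Reflect direction d about unit surface normal n (both 3-tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a normalized ray, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

view_dir = (0.0, -0.70710678, 0.70710678)    # camera ray hitting the floor at 45 degrees
bounce = reflect(view_dir, (0.0, 1.0, 0.0))  # reflected off the floor (normal +Y)
hit = ray_sphere((0, 0, 0), bounce, center=(0, 3, 3), radius=1.0)
print("reflection hit distance:", hit)       # ~3.24 units to the sphere
```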
 
Damn, the 1080 is getting smoked, not cool.
Their 1080 is either broken or a mislabeled 1060. Here's what I get:

1080p - Ultra:
1080-final.png

1440p - Ultra:
1440-final.png


... which puts a 1080 somewhere between a 5700 and a 2060 vanilla - roughly where one would expect the 1080 to be.

Edit: Ran it with G-Sync disabled and got slightly better results.
 
Updated results on my new CPU/mobo/GPU. The CPU isn't OC'd at all yet, but I doubt it has much impact; I'm not sure the new CPU even matters vs the old i7-6850K. Still using the same LCD that maxes out at 100Hz, but while running the bench this time the reported FPS mostly hovered around 120. Ah, right: I have the max FPS setting in the NVIDIA control panel turned off at the moment.

This is the default-settings run: 37% faster than my 2080Ti run at these same settings. The 2080Ti was a factory-OC'd FTW3 with +100 on the GPU and +750 on the VRAM.
1631984291833.png

1631984318905.png


And a fullscreen 3440x1440 run, Raytracing still Ultra. I had to turn fullscreen off after the run to be able to use the snipping tool, but it was fullscreen during the run. This is 22% faster than my 2080Ti run.
1631984724976.png


Neon Noir had to do an update, but the build number stayed the same, so no idea what the update did.
I think if I had a 3080Ti, I might have been able to hit even higher numbers. 24GB of VRAM uses a guesstimated 100W, so a 3080Ti might have 50W more headroom, depending on the power supply circuitry. But between the PCIe slot power and the 12-pin power there is an upper limit, so that 50W being available to the GPU would be about 21% more juice than the GPU currently uses.
These numbers are from GPU-Z, with the sensors set to record the max power draw for the board total and for the GPU specifically. The board total peaks around 364W, and the GPU alone peaks at 240W. That leaves 124W for the board itself + 24GB of GDDR6X. No idea what the board itself drains off... probably 25-ish watts (total guess) lost as heat in the VRMs. That's where the guesstimate of 100W total VRAM draw for 24GB comes from, and the 50W on 12GB cards, i.e. a 3080Ti.
GDDR6X power draw goes to an estimated 68W per 12GB when overclocked +750. Note that the RAM is underclocked -750MHz (or was it 1000?) and runs by default at 9750MHz, simply to keep heat and power drain lower. On a 3090 you can do +1000 pretty easily, which is a 10% OC, if you have done a thermal pad replacement.
Be sure to check benchmark results at the RAM speed baseline (no OC), then at +500, +750, and +1000 to make sure you are actually getting performance gains. Myself, I see a performance drop at +1000 compared to +750, so be sure to dial it in and not assume the highest stable speed also means the highest performance.
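Spelling out the power budget above (every figure is just the rough number quoted in this post, not a measurement of an actual 3080Ti):

```python
# Back-of-the-envelope GPU power budget from the GPU-Z peaks quoted above.
board_total_w = 364   # peak board power draw
gpu_core_w    = 240   # peak GPU-only power draw
vrm_loss_w    = 25    # total guess for VRM/board losses

vram_24gb_w  = board_total_w - gpu_core_w - vrm_loss_w   # ~100 W for 24 GB of GDDR6X
vram_12gb_w  = vram_24gb_w / 2                           # ~50 W for a 12 GB card
extra_core_w = vram_24gb_w - vram_12gb_w                 # headroom a 3080Ti might free up

print(f"Estimated VRAM draw, 24 GB: {vram_24gb_w:.0f} W")
print(f"Estimated VRAM draw, 12 GB: {vram_12gb_w:.0f} W")
print(f"Potential extra core power: {extra_core_w:.0f} W "
      f"(~{100 * extra_core_w / gpu_core_w:.0f}% of the {gpu_core_w} W core draw)")
```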

Edit: I re-ran it with the frame limiter turned back on, set to 98, and the 1600x900 score dropped from 12750 to 9100-ish. So be sure to turn frame rate limiters off.
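A rough illustration of why a frame limiter hurts the score this much, assuming the score loosely tracks the average frame rate over the run (the per-scene FPS samples below are made up, not captured from the benchmark): the cap clips exactly the fast scenes that pull the average up.

```python
# Capping at 98 FPS only affects frames that were faster than 98, but those are
# the ones that lift the average, so the mean drops by roughly a quarter here.
uncapped_fps = [150, 140, 130, 125, 120, 110, 95, 80, 70, 66]  # hypothetical samples
capped_fps   = [min(f, 98) for f in uncapped_fps]

def avg(xs):
    return sum(xs) / len(xs)

drop = 1 - avg(capped_fps) / avg(uncapped_fps)
print(f"uncapped avg: {avg(uncapped_fps):.0f} FPS, "
      f"capped avg: {avg(capped_fps):.0f} FPS ({drop:.0%} lower)")
```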
 
Something doesn't seem right with your 1600x900 results, because my 3060 Ti is beating that... Could something be forcing VSync on your system?


3700X (stock-ish) || 3060 Ti (core clock +210MHz; mem clock +1000MHz; undervolted) || 32GB RAM (DDR4-3733 CL16)

2560x1440 Ultra Fullscreen: 7354
1600x900 Ultra Fullscreen: 13840
 
Have a 3970X and a Kingpin 3090.
Set the benchmark to my screen resolution, which is 5120x1440.
Score was 6949
 
Something doesn't seem right with your 1600x900 results, because my 3060 Ti is beating that... Could something be forcing VSync on your system?


3700X (stock-ish) || 3060 Ti (core clock +210MHz; mem clock +1000MHz; undervolted) || 32GB RAM (DDR4-3733 CL16)

2560x1440 Ultra Fullscreen: 7354
1600x900 Ultra Fullscreen: 13840
Hmm, possibly. Could also be something else. When I OC the VRAM to +1000, my score drops. I was also seeing some fluctuations between runs.
I have a 10900X, not OC'd at all. The 3090 is +100 on the GPU and +750 on the VRAM. It might be that certain settings in the GPU control panel require a reboot. I turned the FPS cap off in the NVIDIA control panel, but in a run I just did it stayed capped at 97-98.
Could be one of these other settings:
1632113161110.png

Trying to get it to un-cap the fps...

Edit: Might have found a bug in the drivers. If you turn off the fps cap, you gotta reboot for it to actually un-cap.
Rebooted, re-ran, got:
1632114454969.png

And this was after also turning the Afterburner settings down to get a baseline; it was like +10 on the GPU and VRAM. Then I re-applied the +100 on the GPU and +750 on the RAM, re-ran, and it only went up to 13080.

But your result could be the CPU. This isn't an RTX- or DXR-based benchmark; it's CryEngine's own ray tracing, which they wanted to show could be done without the special hardware.
 