AMD Ryzen Blender Benchmark Scores Thread

My computer got 1:31 for 200 samples; not bad for a six-year-old rig, but now I'm ready for an upgrade. This is a stock W3680 (a tired ES sample) with 12 GB of RAM, running Windows 10.
Edit: just ran it again at 100 samples and got 46.37.
 
By playing with the tile size I managed to get it down to 21.22 sec with two R9 290s @ 200 samples.
 
Reran with the renderer at 100 and got 32.41. Not bad, I guess? Should we be noting our RAM speeds too? It seems to matter.

zem_100.png
 
I don't think it does. I get 36 sec with dual E5-2670s @ 200 samples on 1333 MHz RAM, and around 18 sec @ 100.
 
Bump the tile size to 400x400. I got my single GTX 780 Ti down to 16.87 seconds at 200 samples. Big GPUs generally benefit from around 256x256, but the final image is only 800x800, and the whole scene uses less than 300 MB of RAM. Please make sure it is actually using the second GPU, since Blender is not exactly good at keeping settings when it comes to OpenCL rendering (in my limited experience).
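
If you'd rather check from the Python console than dig through the menus, something like this should work; the property names are from the 2.78-era API as I remember it (they moved in later versions), so treat it as a sketch:

    import bpy

    scene = bpy.context.scene
    system = bpy.context.user_preferences.system

    # What Cycles is actually set to use:
    print(system.compute_device_type)  # 'NONE', 'CUDA' or 'OPENCL'
    print(system.compute_device)       # the specific device (or multi-device combo)
    print(scene.cycles.device)         # 'CPU' or 'GPU' -- the Render-tab setting

    # Bump the tiles to 400x400 (four tiles for the 800x800 frame):
    scene.render.tile_x = 400
    scene.render.tile_y = 400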
 
The fastest I could get was 18.52 @ 400x800 tiles (it is using both GPUs). From my understanding, Blender is significantly better on NVIDIA GPUs. I'm still tempted to throw it on my six-290 mining rig and give it a try, but I think I will hold back.
 
Oh, okay. I know the OpenCL rendering path is less mature, but I remember it wasn't too far behind the CUDA pathway as little as one year ago (when I last made use of Blender). Last I remember, it could also make use of the CPU and GPU together in OpenCL mode (which might not be beneficial for a tile of that size). I remember it had trouble rendering with multiple GPUs unless they were set up in a certain way, but it has been a while since I had my last multi-GPU setup.

My GPU was not as friendly towards the 400x800 tile arrangement, netting 17.58 vs 16.87.

EDIT: I am wrong, it was another program that had issues utilizing two GPUs if SLI/CF was enabled.
 
CPU scores:

E5-1607 v4 (oddball OEM Broadwell-E: quad core, no HT, 3.1 GHz, no turbo)
200 - 2:43.46
100 - 1:19.87

5820K, stock speeds
200 - 1:04.26
150 - 48.59
100 - 32.75


Both CPUs were tested on a Gigabyte X99-SOC Champion with two dead RAM slots, so only dual-channel RAM:
2x 8 GB Crucial Ballistix Sport LT DDR4-2400 with XMP enabled


Edit: added the 150 result because someone at AMD finally stated that 150 was the sample count used for the live-stream demo.
The same person said they used 2.78a x64, which is what I used for all my runs.
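
For anyone who wants runs that are easy to repeat, the render can also be kicked off headless with the sample count pinned on the command line. This is just a sketch: the .blend filename is a placeholder for whatever you saved AMD's file as, and --python-expr is the flag I believe 2.7x ships with:

    blender -b RyzenGraphic_27.blend --python-expr "import bpy; bpy.context.scene.cycles.samples = 150" -f 1

Blender prints the render time in the console when the frame finishes, so there's no squinting at screenshot timestamps.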


 
I just ran with one card at 800x800 tile size and got a 35, so it seems to scale fairly well. The only reason I believe mine likes 400x800 is that each GPU takes half.
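A quick sanity check on that: perfect two-card scaling from the 35.0 s single-card run would be 35.0 / 2 = 17.5 s, and the measured 18.52 s is within about 6% of that, so the second 290 is pulling nearly its full weight.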
 
Sample count changed to 150.
AMD 960T unlocked to six cores at 3.0 GHz, DDR3-1333.
Time: 2:24.16
Time to upgrade.
 
Intel Xeon X5650 6c/12t
It was originally at a very conservative overclock since I didn't really need the power, lol, but I ramped the OC up just for this (a very sloppy overclock at that; I was in a rush).

Overclocked to 3.65 GHz, with all cores turboing to 4.01 GHz simultaneously.
Samples - Time
200 - 1:15.73
150 - 57.74
100 - 38.23

Not too bad for a processor I got for $60 and a motherboard I got for $120, cooled by my cheap Hyper 212 EVO.
Totally destroyed the haters on Tom's Hardware who told me it was pointless and I was basically stupid for considering it, especially when I matched the stock benchmark scores of the ridiculously overpriced 6700K. Of course you miss out on some features, and a 6700K on a decent board can surpass mine. But hey, at least I didn't drop $400 on this processor.
BTW, I came from an i5-3570K OC'd to 4.4 GHz. The upgrade wasn't all that necessary, but it was technically almost free after selling the old CPU, and I still have the old motherboard to list on eBay. Really, I just wanted to do it: the joy of bringing back this old hardware, pushing it to its limits, and getting some pretty damn good results, and here's the kicker, all while being broke as hell. It was extremely enjoyable. A lot more interesting than clicking a preset for an effortless overclock on my Ivy Bridge. (Useful, yes, but where's the fun in that?)
 
6700K @ 4.7 GHz

200 - 1:05.57
150 - 48.90
100 - 33.15



 
I posted this yesterday; it was running 200 samples, before the 150-sample standard was revealed:

FWIW, it took me 00:37.42 to run the Ryzen file on my 5960X @ 4.6. I'd post a screen grab, but with a 4K screen I doubt you could see the little timestamp. And that's without touching anything but "Render".

Ran it again, 00:37.47, so pretty repeatable here.

I ran it at 100, just FS&G, and got 20 seconds. I'll run 150 at lunch. Also, the first runs were on 2.77; I loaded up 2.78a after work and got identical times. I don't think the Blender version is all that important; I think they were just trying to point users toward the most recent build.
 
i5-2500K @ 4.3 GHz, 1866 MHz DDR3, default Blender settings using the AMD-provided preset file, fresh reboot.
Back-to-back runs showed 1:46.xx.
The image file below shows a second slower, probably because I had Chrome and the forum open.
RyzenBlenderBench.jpg
 
Hi, Blender pro user here.
I made an account when I found this thread, and it really was interesting to read.
Some of you were wondering about your results.

Here's a list of stuff that can make a big difference in render time.
Everything below is off the top of my head, so you might want to check whether my information is still valid.
Can't post links because "please do not post any links until you have 3 posts. If you do post links, your post will be rejected."

1. OS. Windows 10 is the slowest of them all. Then come Windows 7/8 at about 5-15% faster than 10, and Linux (Ubuntu) around 5-15% faster than 8 and 7.
2. Blender version. This can have a huge impact on render speed. Since 2.49 there has been over 20% variation.
3. The 980 Ti is really slow on Windows 10 for some reason, and to my knowledge has been since Blender 2.75. It also affects the old Titan X.
4. Tile size. Basic rule of thumb: if you render on the CPU, a smaller tile size = faster render time, and on the GPU a bigger tile size = faster render time. CPU: 16x16, 32x32, 64x64. GPU: 128x128, 256x256, 512x512. (There's a quick script for this below the list.)
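
To save anyone hunting through the UI, the rule in point 4 is easy to script. A minimal sketch against the 2.7x Python API (tile_x/tile_y and friends; later versions renamed these):

    import bpy

    def set_tiles(device):
        # Rule of thumb from point 4: small tiles for CPU, big tiles for GPU.
        scene = bpy.context.scene
        scene.cycles.device = device          # 'CPU' or 'GPU'
        size = 32 if device == 'CPU' else 256
        scene.render.tile_x = size
        scene.render.tile_y = size

    set_tiles('GPU')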


Here are my results; my rig was built with Blender in mind.
All watercooled in a single loop with dual D5s and 2x 480 EK rads.

CPU: i7-5930K clocked at 3.99 GHz
GPUs: 4x ASUS Strix GTX 970
OS: Windows 10, Blender 2.78

New Blend File in use
CPU 200 samples = 01:02.51
CPU 150 samples = 00:47.81
CPU 100 samples = 00:32.17

GPU 200 samples = 00:38.47
GPU 150 samples = 00:29.39
GPU 100 samples = 00:20.18

Then I changed the tile size to the GPU-optimal 256x256:
GPU 200 samples = 00:07.46
GPU 150 samples = 00:06.11
GPU 100 samples = 00:04.49

If you have any questions about Blender, I'll answer if I can :)
 
5960X @ 4.6. This browser window was minimized during the runs.

100 Samples 19.30

150 Samples 28.52

200 Samples 37.82

I tried the CUDA and OpenCL select in the User Preferences > System settings, but at 200 samples my score was still 37.60 ~ 37.81, so either it didn't actually change over to the GPU, or Titan X (Maxwell) SLI runs about the same as the 5960X.

ETA: Looks like this was 800 x 800; that's the default of the file, and I didn't want to change anything but the sample count.
 
Here is mine: dual Xeon E5-2695 v2 at 2.4 GHz (12C/24T x 2)
150 Samples 19.88

With video.

[Edited to show the benchmark with the correct 150 samples]
 

Did you also change your render device under the Render tab?

upload_2016-12-15_20-38-41.png
 
No, I'm not really very well versed in Blender, TBH. I'll try it later; lunch is about over.

And I don't appear to have that option. I have "Full Screen" and "Supported", but nothing under that.
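
If I remember the 2.7x UI right, that's expected: the Device dropdown only appears in the Render tab once a compute device has been picked under User Preferences > System. A sketch with 2.78-era property names (assumption on my part, and the assignment will throw if no CUDA device is present):

    import bpy

    system = bpy.context.user_preferences.system

    # The Render-tab "Device" option stays hidden while this is 'NONE'.
    system.compute_device_type = 'CUDA'   # errors out if no CUDA device exists

    # With a compute device set, the scene can be switched to GPU rendering:
    bpy.context.scene.cycles.device = 'GPU'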
 
i5-4460, stock 3.2-3.4 GHz
ASRock H97M-ITX/ac
16 GB DDR3-1866, motherboard locked to 1600 at 8-9-9-24 1T
Blender v2.78a
150 samples @ 1:58.44
 

Attachments: RyzenGraphic_27.png
4790K @ 4.7 DDR3-2400 10-10-12-1T, latest *.blend file, blender 2.78a x64

150 samples: 54.9 seconds
100 samples: 36.2 seconds
200 samples: 73.8 seconds

Using the 1080FE: 9.53 seconds (!)
 
Agreed. I know you guys did a lot more with overclocking the 6950X than I did, but from everything I've read and from my own experience, the 6950X seems to go off a cliff past 4.3, which I don't think gets appreciated. At least for highly threaded stuff, it's an incredible CPU that would have been far better regarded, but it's clear Intel was gouging because of the lack of competition. No one is really a fanboy unless they want to pay more for stuff. I want Ryzen to be great; that's good for everyone.
When I say a stable 6950X overclock, I am referring to a fully threaded workload that runs for at least two hours. In the past year or so, I have moved away from synthetics and generally use Handbrake. The latest versions of ASUS RealBench are actually very good as well and push CPUs beyond Handbrake workloads. But yeah, I have never gotten the 6950X past 4.3 GHz with a real-world workload.
 
The problem is that we all want AMD to do well so that Intel is forced to either stop gouging or actually show worthwhile improvements with each release, but then we are all going to go out and buy that Intel part, because that's what we really want. We want AMD to make Intel honest again, but unless we get another K7/K8, we are still going to buy whatever has the performance, which will likely still be Intel.
 
If AMD is even close to Intel, and I'd go so far as to say within 20% per core of current offerings, I will change to it no problem, just because I prefer AMD. My only issue right now is that most of my games are heavily single-threaded, and performance suffers accordingly if I use an FX chip. I'm really excited for the totally reworked cache and memory subsystems, though; that is way too long overdue.
 
FX-9590 system (found the cause of the crashing earlier: a startup program in the background)

150 samples... cringeworthy for being at 5.07 GHz: 1:41.79
A Zen with no OC and no turbo is 3x+ faster! That is very encouraging, and it will be time to upgrade when that option opens up.
 
Xeon X5660 owner here, more details in sig. My result is 55.15 with the new Blender file.

The old boy still has some life left in it :D. The only reason I think about the possibility of upgrading to Ryzen, really, is the M.2/SATA 3 option for fast loading times. If my mobo lives to its 10th birthday (it's a ver. 1.0; I bought the system in December 2008), I'll probably skip the current/next gen and go for whatever comes out in 2019. I don't really feel the need for anything faster than my current Xeon, but it does make life a little boring.
 

Attachments: render good.jpg
I have run the benchmark (100 samples) on a 15" MBPr (4750HQ, 2.0 GHz base, 3.2 GHz turbo) and I get 43.84 seconds. That seems a bit "high" for a laptop, doesn't it? Especially a three-year-old entry-level machine.
 
The "official" sample count is 150; AMD made it known that 150 samples were used for the demo on stage.
 
48-core quad-Opteron 61xx machine (Supermicro H8QGi-F, 48 K10 cores overclocked to 3.0 GHz, memory at DDR3-1333)

24.61 seconds... samples set to 200

18.72 seconds... samples set to 150

16.23 seconds... samples set to 128

12.86 seconds... samples set to 100
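
Those numbers are a nice illustration of how close to linearly Cycles scales with sample count: from the 100-sample run, 12.86 s x 1.5 = 19.29 s predicted vs. 18.72 s measured at 150, and 12.86 s x 2 = 25.72 s vs. 24.61 s at 200, so the fixed per-run overhead here is small.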
 