Post your Ryzen memory speeds!

Well I had no luck with 3466 tonight. Tried magictoaster's timings and noko's tip, neither resulted in anything different unfortunately.

I've also figured out the ratio at which the tRFCs are determined. Possibly useful for Titanium folks, since on Auto it uses the same value for all entries.
tRFC2 = tRFC ÷ 1.34
tRFC4 = tRFC2 ÷ 1.625

Might be useful to those of us whose boards just plop tRFC's value into all of them. (Granted, it doesn't seem to be hurting performance, and seems to require less voltage on the IMC, but... there it is at least :) )
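If you want to plug in your own tRFC, here's a quick sketch of that math in Python (the 1.34 and 1.625 divisors are just what I observed on Auto, not an official formula, and the 560-clock example value is purely illustrative):

def trfc_family(trfc_clocks):
    # Observed-on-Auto ratios, not a JEDEC/AMD formula; rounded to whole clocks.
    trfc2 = round(trfc_clocks / 1.34)
    trfc4 = round(trfc2 / 1.625)
    return trfc_clocks, trfc2, trfc4

print(trfc_family(560))  # e.g. tRFC of 560 clocks -> (560, 418, 257)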

Over at overclock.net someone suggested the following adjustments to get a stable system @3466:

- SOC: 1.15 (not 1.5, sorry!)
- PROC_ODT: 60
- DRAM: 1.4 (same for boot)

Using those settings I'm able to get everything stable at 3466 @ 14-14-14-14-34 (T1). Might be worth a try. (Actually, looks like I can keep SOC and PROC_ODT on Auto and only set DRAM to 1.4 and get everything stable).

On a side note: I'm not in any way a pro at this. I'm mostly just looking at what is working for other users with a similar configuration and trying things, so YMMV. ;)
 
Over at overclock.net someone suggested the following adjustments to get a stable system @3466:

- SOC: 1.5
- PROC_ODT: 60
- DRAM: 1.4 (same for boot)

Using those settings I'm able to get everything stable at 3466 @ 14-14-14-14-34 (T1). Might be worth a try. (Actually, looks like I can keep SOC and PROC_ODT on Auto and only set DRAM to 1.4 and get everything stable).

On a side note: I'm not in any way a pro at this. I'm mostly just looking at what is working for other users with a similar configuration and trying things, so YMMV. ;)
Yeah, I've found DRAM voltage actually helps a good bit (to a point), but SoC is only marginally helpful, if at all (though I only have offset, and no LLC setting, on my board for now). SoC is mostly useful if you change BCLK, though I'm not sure which specific areas it helps stabilize.
 
One of the more important settings that few people mention: Bank Group Swap should be disabled if you have only 2 DIMMs and they are single rank. It helped me reach 3333 MHz on DDR4-3200 memory; before, I could only reach 3200 MHz. My timings at 3333 MHz are 14-14-14-14-35 CR2. My SOC is 1.075 volts, DRAM voltage 1.39 volts.
 
Over at overclock.net someone suggested the following adjustments to get a stable system @3466:

- SOC: 1.5
- PROC_ODT: 60
- DRAM: 1.4 (same for boot)

Using those settings I'm able to get everything stable at 3466 @ 14-14-14-14-34 (T1). Might be worth a try. (Actually, looks like I can keep SOC and PROC_ODT on Auto and only set DRAM to 1.4 and get everything stable).

On a side note: I'm not in any way a pro at this. I'm mostly just looking at what is working for other users with a similar configuration and trying things, so YMMV. ;)
SOC 1.5v is way too high; ASUS recommends no more than 1.2v, AMD recommends 1.1v.
 
One of the more important settings that few people mention: Bank Group Swap should be disabled if you have only 2 DIMMs and they are single rank. It helped me reach 3333 MHz on DDR4-3200 memory; before, I could only reach 3200 MHz. My timings at 3333 MHz are 14-14-14-14-35 CR2. My SOC is 1.075 volts, DRAM voltage 1.39 volts.
Seeing as you have the same board and, basically, same RAM as I do.... I'm surprised you need that much voltage, or CR2. If your kit is a 14-14-14 kit then yours should technically be binned better than mine as well, as mine are 15-15-15.
Either way... for 3333 (on 1.72), 15-15-15-35 1T everything Auto (or even with ODT at 48Ohm), 1.36V or 1.35V. CPU NB with at least 1V.

Now, granted, in my limited testing so far, I've not really seen ANY change in performance between timings of 14s, 15s or 16s; nor by using CR 1T or 2T. BUT my testing has been quite limited to AIDA64, so it's not exactly 'real world'. I do feel like CR 1T might be of more benefit for cache/cross-CCX performance, and might offset any loss incurred through upping timings to 15s if that's what it takes to get things functional at 3333.

Based on my 3333 timings screenshots, is your Titanium applying the same Auto settings to your FlareX? If anything is vastly different, could you post what they are (or screenshots)? OH hey, do the A-XMP profiles still work for you? Mine have ceased to, but I assumed that it's because of subtimings being unlocked and allowed to be changed. If they don't still work.... I'd be interested in an SPD Tables dump (TyphoonBurner works) if it wouldn't be too much trouble? :shame:

EDIT: OK so... scratch that part about SPD. Today it seems to feel like working >_> Though, oddly, back on like BIOS v1.5, Profile 1 was 2933 and Profile 2 was 3200. Now they're just both 3200. I'd still be interested to see what yours are. My Timings page is no different using the A-XMP profile than it is with me simply setting 3200 and leaving auto timings :|
 
Tried the new AGESA 1.0.0.6 for the Prime X370. Unfortunately it seems 3333 C14 1T is the best I can do with my 3600 RAM and that required pumping 1.4V into the RAM and 1.1V into the SOC. I haven't had time to run a proper multi-hour memtest yet so I don't even know if that's going to be 100% stable.

3466 fails memtest after 2 - 5 seconds, and 3600 won't boot (Windows starts complaining about corrupted files etc.). Went as high as 1.15V on the SOC and tried CAS16 and 18, but it still wouldn't work.
 
lol, yeah I figured it might have been a mistype; didn't want anyone to plug that in and watch their board go poof.

Hopefully nothing was destroyed! :D

Some people over at overclock.net go way past 1.5 (up to 1.8+). Granted, they do not appear to do this for long periods of time, but still. I'm not sure this will end well for them...
 
I thought AGESA 1.0.0.6 was promising up to 4000 MHz on the memory. All these memory issues are why I've waited this long for Skylake-X.
 
I thought AGESA 1.0.0.6 was promising up to 4000 MHz on the memory. All these memory issues are why I've waited this long for Skylake-X.

Highest I've seen tried and successful so far was 3800; I don't think I've seen any posts with people even using DDR4-4000 to actually try it.
 
I thought AGESA 1.0.0.6 was promising up to 4000 MHz on the memory. All these memory issues are why I've waited this long for Skylake-X.
I doubt anyone was promising anything :p I mean, I wouldn't have. AMD has only 'promised' what they outlined: 2x single-rank DIMMs at 2667, and quad-DIMM or dual-rank is a crap shoot. Now, what I do know was said is that dividers up to DDR4-4000 would be available, and that promise has been delivered on, and then some. In addition to speeds >3200, we also were given 3033 and 2800 for speeds, which I think was just as awesome. Means that people won't HAVE to BCLK to get the most out of their memory if they've purchased lower binned/rated kits.

Seems that 3333 has been quite easy for anyone already having no problem with 3200, and 3466 is a bit harder, 3600 is a blessed situation, and beyond that is "ignore my results, I seriously just know what the hell I'm doing!" lol
 
I thought AGESA 1.0.0.6 was promising up to 4000 MHz on the memory. All these memory issues are why I've waited this long for Skylake-X.

Well, the dividers are there for 4000 MHz. Doesn't mean it will actually work :p Even on Broadwell-E, with quad-channel, getting 4000 MHz out of the memory is a challenge.

Good, low latency 3200 MHz memory is still the sweet spot for price/performance as higher speeds require more expensive memory for just a small performance gain. Above 3200, the performance gains are smaller per-MHz and you have to work harder for them. The new AGESA makes it much easier to achieve 2933 - 3200, and if you have really good RAM and you're willing to tweak a bit, speeds above 3200 are possible. The new intermediate dividers also help. For example, if you can't quite achieve 3200, you now have 3067 as an option, instead of having to step it down all the way to 2933.
 
I tried to get 4000 MHz. I could boot to BIOS, but not into Windows.
At 3600 I could get into Windows and run the AIDA memory bench, but couldn't stress it. I went back to 3200 MHz. Oh well, maybe some other day!
 
I could run 3200+ with my 2800 Hynix M-die chips if I waited around 30 minutes for it to do memory training (voltage set to 1.4 manually), and it was stable in Memtest86, but my board has no LLC adjustment and I don't see the startup voltage setting, so (unless I want to wait that long every time and forget about suspend/sleep) it's not a realistic setting. :/
 
I could run 3200+ with my 2800 Hynix M-die chips if I waited around 30 minutes for it to do memory training (voltage set to 1.4 manually), and it was stable in Memtest86, but my board has no LLC adjustment and I don't see the startup voltage setting, so (unless I want to wait that long every time and forget about suspend/sleep) it's not a realistic setting. :/
Just found the "training voltage" setting. It's CLDO_VDDP on the Gigabyte Gaming K5.
Boots fine at 3200 MT/s now! :D (just need to get it stable...)
Edit: CPU-Z Validation, screenshot (AIDA64 2hr Stable):
[screenshot: CPUz375_3200cl18.jpg]
 
Highest I've seen tried and successful so far was 3800; I don't think I've seen any posts with people even using DDR4-4000 to actually try it.

I have DDR4-4000 on hand and haven't had any luck getting 3600 or above; I probably need to get schooled more on what else to tweak. They're F4-4000C18D-16GTZR sticks, which I don't think I've seen anyone over at the overclock forums use. Managed to get it to detect 3466 just fine. Running on the 9945 BIOS at the moment.
 
Tried the new AGESA 1.0.0.6 for the Prime X370. Unfortunately it seems 3333 C14 1T is the best I can do with my 3600 RAM and that required pumping 1.4V into the RAM and 1.1V into the SOC. I haven't had time to run a proper multi-hour memtest yet so I don't even know if that's going to be 100% stable.

3466 fails memtest after 2 - 5 seconds, and 3600 won't boot (Windows starts complaining about corrupted files etc.). Went as high as 1.15V on the SOC and tried CAS16 and 18, but it still wouldn't work.

Well I'm back to 3200 C14. It's really odd how it behaves at anything above this speed.
For example, at 3333 C14, it can run Memtest86+ for at least a couple of hours (until I got bored and canceled it), but it would fail AIDA64 "Stress system memory" after anything from 8 minutes to 1 hour.
So I loosened it to 3333 C16, and this seemed stable. Ran AIDA64 for 2 hours yesterday. But today, it failed after 8 minutes. It never actually crashes; it just displays "Hardware error detected" or something.

So why is 3333 C14 stable in Memtest86+, yet not even 3333 C16 works reliably in AIDA64? Either AIDA64 is just way more demanding on the memory than Memtest86+, or there's some bug or glitch with the new dividers that causes it to think there's a hardware failure when there isn't. I'd guess it's simply more demanding and uses a more stressful algorithm.

It's also weird that 3200 C14 only requires 1.0V SOC and 1.35V RAM, whereas 3333 C16 apparently isn't even stable with 1.15V SOC and 1.4V RAM. Why such a huge difference for 133 MHz and loosened timings?

I'd suggest anyone who wants to make sure they have stable memory run AIDA64 memory test for over 2 hours, maybe as long as 12 - 24 hours... Memtest86+ doesn't seem to be enough.
 
Well I'm back to 3200 C14. It's really odd how it behaves at anything above this speed.
For example, at 3333 C14, it can run Memtest86+ for at least a couple of hours (until I got bored and canceled it), but it would fail AIDA64 "Stress system memory" after anything from 8 minutes to 1 hour.
So I loosened it to 3333 C16, and this seemed stable. Ran AIDA64 for 2 hours yesterday. But today, it failed after 8 minutes. It never actually crashes; it just displays "Hardware error detected" or something.

So why is 3333 C14 stable in Memtest86+, yet not even 3333 C16 works reliably in AIDA64? Either AIDA64 is just way more demanding on the memory than Memtest86+, or there's some bug or glitch with the new dividers that causes it to think there's a hardware failure when there isn't. I'd guess it's simply more demanding and uses a more stressful algorithm.

It's also weird that 3200 C14 only requires 1.0V SOC and 1.35V RAM, whereas 3333 C16 apparently isn't even stable with 1.15V SOC and 1.4V RAM. Why such a huge difference for 133 MHz and loosened timings?

I'd suggest anyone who wants to make sure they have stable memory run AIDA64 memory test for over 2 hours, maybe as long as 12 - 24 hours... Memtest86+ doesn't seem to be enough.
Memtest86 shifts the test pattern for each pass, so the more passes you run the better. AIDA64 may be stressing (other parts of) the SoC in addition to the memory controller, so if it's unstable that could cause a fail, too. You might try bumping the SoC a few millivolts and see if it still fails.
 
Memtest86 shifts the test pattern for each pass, so the more passes you run the better. AIDA64 may be stressing (other parts of) the SoC in addition to the memory controller, so if it's unstable that could cause a fail, too. You might try bumping the SoC a few millivolts and see if it still fails.

Well I already went to 1.15V which is 0.05V over the max recommended for 24/7 use. I could try 2T command rate I guess, but that would just increase the latency even more compared to 3200 C14 1T. Anyway I'm guessing 3200 C14 is more or less just as fast as 3333 C16 because of the lower latency.
Just a bit surprised that 3200 C14 is so effortless while another 133 makes the system totally crap itself no matter how much voltage I apply.
 
Well I already went to 1.15V which is 0.05V over the max recommended for 24/7 use. I could try 2T command rate I guess, but that would just increase the latency even more compared to 3200 C14 1T. Anyway I'm guessing 3200 C14 is more or less just as fast as 3333 C16 because of the lower latency.
Just a bit surprised that 3200 C14 is so effortless while another 133 makes the system totally crap itself no matter how much voltage I apply.
Yeah,
3200c14 = ~8.75ns
3333c16 = ~9.60ns
Difference of 0.85ns. You might see a small improvement from 3333 in games (Ryzen likes speed!), but that's only if the instability doesn't negatively affect performance.
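If anyone wants to run that same first-word latency math for other speed/CL combos, here's a quick sketch (just the generic CL-cycles-at-the-memory-clock formula, nothing Ryzen-specific; the helper name is made up):

def cas_latency_ns(transfer_rate_mts, cl):
    # CL cycles at the memory clock; the memory clock is half the DDR transfer
    # rate, so one cycle takes 2000 / transfer_rate nanoseconds.
    return cl * 2000.0 / transfer_rate_mts

print(round(cas_latency_ns(3200, 14), 2))  # ~8.75 ns
print(round(cas_latency_ns(3333, 16), 2))  # ~9.6 ns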

I managed to get a really high Cinebench score with a 3.5GHz core and RAM at ~3500 (close to my 3.7GHz score with RAM at 3200), but my CPU wasn't stable with BCLK that high, and my RAM won't easily run higher than the 32 multiplier (I was using 29.33 at the time).
 
ALRIGHT! Time for an update from me finally. Not been able to get 3466 to POST ever again, though my reply to Nobu at the end of this post may change that, as I wasn't aware that's what the setting was for.
Nevertheless, while 3333 was working just fine, I had been using all Auto settings. Since the only tighter-timings reference I did come across was for 3200, I went about trying to see where I could land in terms of that site's "moderate" sub-timings. Theirs didn't quite work, so instead of just trying to nail down which one was causing instability, I loosened a couple just a tiny bit and that did the trick. Or at least enough so to keep Minecraft stable for ~10hrs of playing, whereas prior to my adjustments it would crash randomly.

ANYWAYS, here are those settings (unless mentioned, all other settings are on Auto).
CPU P0 @ 38x
DDR Speed @ 3333
CPU NB Voltage 1.05V
CPU NB Power Duty Control "Current Balance" (favors amp output over thermal control)
DRAM Voltage 1.36V (1.376V actual)
DRAM Voltage(Training) 1.5V.
Below, an image of the Primary and SubTimings; everything else is on Auto except ProcODT which is at 48 Ohms.
(NOTE: I don't have A-XMP turned on, ignore that, as well as the DDR Speed, as I just used my 3200's timings)
[timings screenshot attached]


Performance looks something like this:
HAD TO REMOVE BENCHMARK DUE TO WINDOWS RTC ISSUE INFLATING PERFORMANCE

Ran AIDA's Stress Test on Cache and System Memory, and it went exactly 30 minutes before failing. As such, I suspect that'll end up being "Game Stable". I just need to get FO4 on this system to actually find out :|

Just found the "training voltage" setting. It's CLDO_VDDP on the Gigabyte Gaming K5.
I'm not 100% sure that's correct. My Titanium has (hidden, but I changed that heh) a setting called DRAM Voltage(Training), as well as CLDO_VDDR. The "(Training)" voltage, at the higher RAM divisors, applies 1.4V or 1.5V, which is way more than what CLDO uses.

According to google searching, CLDO_VDDR is the "DDR PHY" voltage.
 
I'm not 100% sure that's correct. My Titanium has (hidden, but I changed that heh) a setting called DRAM Voltage(Training), as well as CLDO_VDDR. The "(Training)" voltage, at the higher RAM divisors, applies 1.4V or 1.5V, which is way more than what CLDO uses.

According to google searching, CLDO_VDDR is the "DDR PHY" voltage.
Ah, that'd make more sense. So, then, if I had access to the training setting you have, I could probably boot more reliably. I have got it to the point that it boots every second or third attempt now, and it's rock solid at 3200c18-18-18. Sleep functions as expected (it doesn't choke like during boot).

Thinking about returning this board and getting an asrock fatal1ty k4, though. Have until June 16th to decide...
 
ALRIGHT! Time for an update from me finally. Not been able to get 3466 to POST ever again, though my reply to Nobu at the end of this post may change that, as I wasn't aware that's what the setting was for.
Nevertheless, while 3333 was working just fine, I had been using all Auto settings. Since the only tighter-timings reference I did come across was for 3200, I went about trying to see where I could land in terms of that site's "moderate" sub-timings. Theirs didn't quite work, so instead of just trying to nail down which one was causing instability, I loosened a couple just a tiny bit and that did the trick. Or at least enough so to keep Minecraft stable for ~10hrs of playing, whereas prior to my adjustments it would crash randomly.

ANYWAYS, here are those settings (unless mentioned, all other settings are on Auto).
CPU P0 @ 38x
DDR Speed @ 3333
CPU NB Voltage 1.05V
CPU NB Power Duty Control "Current Balance" (favors amp output over thermal control)
DRAM Voltage 1.36V (1.376V actual)
DRAM Voltage(Training) 1.5V.
Below, an image of the Primary and SubTimings; everything else is on Auto except ProcODT which is at 48 Ohms.
(NOTE: I don't have A-XMP turned on, ignore that, as well as the DDR Speed, as I just used my 3200's timings)
[timings screenshot attached]


Performance looks something like this:
[AIDA64 Cache & Memory benchmark screenshot]

Ran AIDA's Stress Test on Cache and System Memory, and it went exactly 30 minutes before failing. As such, I suspect that'll end up being "Game Stable". I just need to get FO4 on this system to actually find out :|


I'm not 100% sure that's correct. My Titanium has (hidden, but I changed that heh) a setting called DRAM Voltage(Training), as well as CLDO_VDDR. The "(Training)" voltage, at the higher RAM divisors, applies 1.4V or 1.5V, which is way more than what CLDO uses.

According to google searching, CLDO_VDDR is the "DDR PHY" voltage.
Those are some rather good numbers on the AIDA64 Mem and Cache bench!

For those who want to play around with Sub-Timings, The Stilt said this which I've found to be very useful:

Originally Posted by The Stilt

tRC, tWR, tRDRDSCL, tWRWRSCL and tRFC are basically the only critical subtimings (for the time being).

Setting the SCL values to 2 CLKs basically makes no difference to the stability, but results in a nice performance boost.
Minimum tRC, tWR and tRFC depend on ICs and their quality.

tCWL adjustment is broken in AGESA 1.0.0.6 beta, but it makes pretty much no difference either.

http://www.overclock.net/t/1624603/rog-crosshair-vi-overclocking-thread/18350#post_26138948

Setting the two SCL values to 2 made a good difference in Copy speeds and overall RAM speed. The first image is with Auto tRDRDSCL and tWRWRSCL, which were at 6 each, and the second is with those set to 2:

[screenshots: cachemem.png, cachememSCL2.png]
tRC going from 75 to 65 helped as well; tCWL on Auto was 24 but was not stable going any lower, so I left that alone. Anyways, found that to be useful information.

Current mem settings and bench with stable ram:

[screenshots: CurrentMemSettings.png, cachememSCL2_TRC65.png]
 
Raise your hand if it blew your mind when you realized there is a "Nobu" and a "noko" on the forum, but you had been thinking they were the same person and wondering why their name kept changing back and forth between having a capital and lower-case "N"
:oops: *raises hand* :shame:

Ah, that'd make more sense. So, then, if I had access to the training setting you have, I could probably boot more reliably. I have got it to the point that it boots every second or third attempt now, and it's rock solid at 3200c18-18-18. Sleep functions as expected (it doesn't choke like during boot).

Thinking about returning this board and getting an asrock fatal1ty k4, though. Have until June 16th to decide...
What board do you have?
I don't know if ASRock is like ASUS with being able to flash modded BIOSes; on ASUS apparently it's either not possible or requires a few steps.
Gigabyte, as of late, seems to have added CRC checks, and thus it hasn't been possible. However, a user who goes by Ket on a few forums managed to mod his Gigabyte K7 BIOS and flash it (I'm not sure if he manually fixed the CRC hash or not), so it may be possible.
MSI I can vouch that they work, as I've been modding all mine.
BIOSTAR I dunno, but assume they don't care either

That being said, depending on what you have now, if you want I could look to see if there's a Training setting that's there but hidden, then mod it so you have access.



Those are some rather good numbers on the AIDA64 Mem and Cache bench!

For those who want to play around with Sub-Timings, The Stilt said this which I've found to be very useful:
The_Stilt said:
tRC, tWR, tRDRDSCL, tWRWRSCL and tRFC are basically the only critical subtimings (for the time being).

Setting the SCL values to 2 CLKs basically makes no difference to the stability, but results in a nice performance boost.
Minimum tRC, tWR and tRFC depend on ICs and their quality.

tCWL adjustment is broken in AGESA 1.0.0.6 beta, but it makes pretty much no difference either.

Setting the two SCL values to 2 made a good difference in Copy speeds and overall RAM speed. The first image is with Auto tRDRDSCL and tWRWRSCL, which were at 6 each, and the second is with those set to 2:
tRC going from 75 to 65 helped as well; tCWL on Auto was 24 but was not stable going any lower, so I left that alone. Anyways, found that to be useful information.
I haven't paid close attention to what others have been getting on their AIDA CacheMem, but I'm quite surprised to see I'm beating yours, given you're running just as much RAM as I am, with low timings at 1T, but faster everything. heh Have you tried running 16-16-16-36, just to see if that 14 is dragging things down by being a bit too quick (causing some sort of stall)?

And to address the last sentence while I'm at it... Man, that tCWL is way higher than I'd have expected O_O On my Titanium it is always 1 or 2T below tCL. For example, if everything is on Auto, it'll set primary timings to 16-15-15-36 and tCWL will be 14. Though I just realized I had set it to 14, so it's in parity with tCL. I know he says it's 'broken', but I think we can both agree that there's an impact on POSTing from changing it. I know there has been for me. However, I can't say for certain that MSI has added the true 1.0.0.6 AGESA or if it's a cobbled one based off 1.0.0.4a. Software that reads the string says .4a, but the hex value is 8001127 (or 24? regardless....) and that's a fair bit higher than prior. So it wouldn't be too surprising if, at least for MSI, the string doesn't get updated even when the AGESA has been. heh

Those recommendations by him are very intriguing though! Thank you :D I'll definitely be giving them a shot right now and seeing what happens.

Now.... all we need is for someone to resurrect A64Info to work for, at the very least, Ryzen!! (Though, I'd love for things to support Carrizo, too, since my laptop runs it and needs some serious memory attention, since HP nerfed it so horribly)

Also thanks for that screenshot of RTC! I hadn't known about it so it'll make sharing stuff easier, as I won't need to enter the BIOS and take a screenshot every time LOL If only Stilt would release ALL the tools he makes :cry:
 
Formula.350 I have a Gigabyte X370 K5. It's CRC protected, but (after a lot of searching) I found the tools I needed to mod/flash the BIOS (sort of--a lot of the features on the tools didn't work or were disabled, depending on the version I got).

I was just trying to replace the AGESA, so I never looked at any of the hidden features, and then they finally released a beta with the new AGESA, so I don't know if the modded DOS flashing utility I downloaded works or not.
 
Formula.350 I have a Gigabyte X370 K5. It's CRC protected, but (after a lot of searching) I found the tools I needed to mod/flash the BIOS (sort of--a lot of the features on the tools didn't work or were disabled, depending on the version I got).

I was just trying to replace the AGESA, so I never looked at any of the hidden features, and then they finally released a beta with the new AGESA, so I don't know if the modded DOS flashing utility I downloaded works or not.
Yea, damn Gigabyte :( They started doing that a number of years ago, as far back as my Llano FM1 system. Prior to that I had an 890GX which I was able to mod quite easily (both are Award), but while the same tool opened the FM1 board's BIOS and could mod it just fine, the BIOS wouldn't flash due to a CRC mismatch :\ It was a huge bummer. And I don't think there was an "Engineering Version" of the flasher, like with the AMI, so I don't think I can force it to flash. Oh well, I made do lol

Well, from what I read regarding nabbing the AGESA from one BIOS and adding it to another, it's not as easy as it might seem, particularly on AMI systems (definitely not as easy as with Award-based systems, or as with how the Intel microcode is packaged). The post I read, while it was referring to Bulldozer systems, suggests the AGESA is still packaged the same way, spanning multiple modules. It was said that these BIOSes aren't exactly compiled separately as it might look, with the modules being individual. Granted, they are still modules, but the BIOS as a whole is coded around them in a sense. So pulling them out and mix-matching won't yield the results we'd anticipate. Even still, I can't imagine that the memory timing or other features would become available unless you were able to hex edit them in. Hex editing seemed to be the most successful way of modding in AGESA changes, based on their findings. I had the exact same idea you had... lol


EDIT: Just finished testing The Stilt's "-SCL @ 2" tweak, and... well I also changed tCWL to 12 (it was set to 13 but wouldn't apply, so ran at 14)... I have a feeling one of them has impacted my L3 performance. >_>
Memory speed has increased nicely, and the latency dropped 2ns :pompous:
HAD TO REMOVE BENCHMARK DUE TO WINDOWS RTC ISSUE INFLATING PERFORMANCE
 
Conspiracy theory time! lol

Found an old AIDA on my computer, one I had initially been using, which lacked the ability to recognize Ryzen as anything beyond a 16-core processor at its clockspeed. It didn't even know it was HT; it just thought they were all native cores. The L2 and L3 cache speeds were also horribly inaccurate, but there is one curiosity of the older benchmark code that I think is worth bringing up...

Here are my Read/Write/Copy scores when run by a recent version (I don't have the latest):
[screenshot: AIDA MemBench-Newer.png]


And when run on the older version...
[screenshot: AIDA MemBench-Older.png]


Sure, it's a give and take... take from the older result's Read, give to the newer result's Write, but still. You'd expect results as non-complex as memory speeds to only increase. IMO at least. :p
[/conspiracy]
 
I think this would be a good reference, as I'm reading stuff all over the map, so post your config and RAM speed so we can see what the reality is.

After some experiments, the best I can get out of my Corsair Vengeance LPX 16GB (2x8GB) 3000 MHz CAS15 1.35V kit is:

Ryzen 1700
ASUS B350M-A
BIOS 0502
2400 MHz, CAS15, 1.35V
Edit: memory sticks are in the A2/B2 slots (so channel 2) per the ASUS manual recommendation for 2 sticks. I haven't tried A1/B1.

The kit name is
CMK16GX4M2B3000C15B

I couldn't get it to POST over 2133 with the BIOS it shipped with. Interestingly, the original BIOS had a ton more RAM parameters to tweak, but that didn't help. I tried up to CAS18 and 1.375V but it wouldn't POST over 2400 MHz. According to the ASUS site, this kit is double-sided and rated at 2133 on their board, so at least I got it a little higher.

Edit 4/17:

BIOS 0604
2666 MHz, CAS16, 1.35V (the BIOS is set to CAS15 but CPU-Z is showing 16, not sure why).
Getting there.


I have an MSI X370 Gaming Carbon Pro
Ryzen 1800X
Corsair CMK32GX4M2B3000C15

running memory at 2667 with 16-17-17-35 @ 1T timings

If anyone has the same mobo and same memory kit at 2933+ MHz,
please can you share your timings with me,
even if it's a different mobo - I would like to try the timings

JD
 
Conspiracy theory time! lol

Found an old AIDA on my computer, one I had initially been using, which lacked the ability to recognize Ryzen as anything beyond a 16-core processor at its clockspeed. It didn't even know it was HT; it just thought they were all native cores. The L2 and L3 cache speeds were also horribly inaccurate, but there is one curiosity of the older benchmark code that I think is worth bringing up...

Here are my Read/Write/Copy scores when run by a recent version (I don't have the latest):
[screenshot attached]

And when run on the older version...
[screenshot attached]

Sure, it's a give and take... take from the older result's Read, give to the newer result's Write, but still. You'd expect results as non-complex as memory speeds to only increase. IMO at least. :p
[/conspiracy]
Well, besides code and algorithm changes, compiler and build-system changes can make a difference. Pretty sure it says somewhere (either on the website or in the program) that results aren't comparable between versions.
 
Just used the memory feature in OC in the BIOS. There is a little line that says "Try it!" that loads the XMP profile for your RAM. Use even-numbered timings for Ryzen.
 
Well, besides code and algorithm changes, compiler and build-system changes can make a difference. Pretty sure it says somewhere (either on the website or in the program) that results aren't comparable between versions.
Right right, and that's totally understandable. I made sure to draw comparisons between the two as well, I just didn't want to bloat the screen with them.
All the memory benchmark reference results in each version are 100% identical, which means if they did change the code and it resulted in changes, they'd have to run the tests on all their systems again in order for the comparison results to be accurate. And that's exactly what happened in one of the benchmarks (I think it was AES): they are all different between the two versions, as expected. That's why I found this all so peculiar in the first place. :p
 
[screenshots attached]


Haven't even played hard on it yet, just changed the divider to 3333... loving this AX370 K3 board
 
Yea, damn Gigabyte :( They started doing that a number of years ago, as far back as my Llano FM1 system. Prior to that I had an 890GX which I was able to mod quite easily (both are Award), but while the same tool opened the FM1 board's BIOS and could mod it just fine, the BIOS wouldn't flash due to a CRC mismatch :\ It was a huge bummer. And I don't think there was an "Engineering Version" of the flasher, like with the AMI, so I don't think I can force it to flash. Oh well, I made do lol

Well, from what I read regarding nabbing the AGESA from one BIOS and adding it to another, it's not as easy as it might seem, particularly on AMI systems (definitely not as easy as with Award-based systems, or as with how the Intel microcode is packaged). The post I read, while it was referring to Bulldozer systems, suggests the AGESA is still packaged the same way, spanning multiple modules. It was said that these BIOSes aren't exactly compiled separately as it might look, with the modules being individual. Granted, they are still modules, but the BIOS as a whole is coded around them in a sense. So pulling them out and mix-matching won't yield the results we'd anticipate. Even still, I can't imagine that the memory timing or other features would become available unless you were able to hex edit them in. Hex editing seemed to be the most successful way of modding in AGESA changes, based on their findings. I had the exact same idea you had... lol


EDIT: Just finished testing The Stilt's "-SCL @ 2" tweak, and... well I also changed tCWL to 12 (it was set to 13 but wouldn't apply, so ran at 14)... I have a feeling one of them has impacted my L3 performance. >_>
Memory speed has increased nicely, and the latency dropped 2ns :pompous:
[AIDA64 benchmark screenshot]
(y)(y)(y)
Never got those numbers. I tried 3466 and some BCLK, and it looks like I corrupted Windows. My memory hole is right around 3333 MHz, so that strap will not work unless I adjust the CPU_SOC (not sure of the exact name), which for the CH6 defaults to 950mv; it is just a bear setting it (having to turn off the machine, and if memory fails it goes back to default, etc.). Anyways, thanks for sharing. AIDA64, if accurate, would be showing MSI is kicking ASUS's ass in memory speeds. Well, at least mine, but I can't recall anyone else getting those types of results from ASUS.
 
(y)(y)(y)
Never got those numbers. I tried 3466 and some BCLK, and it looks like I corrupted Windows. My memory hole is right around 3333 MHz, so that strap will not work unless I adjust the CPU_SOC (not sure of the exact name), which for the CH6 defaults to 950mv; it is just a bear setting it (having to turn off the machine, and if memory fails it goes back to default, etc.). Anyways, thanks for sharing. AIDA64, if accurate, would be showing MSI is kicking ASUS's ass in memory speeds. Well, at least mine, but I can't recall anyone else getting those types of results from ASUS.
I'm 98.738% sure that CPU_SoC is the alternate name for CPU NB, as that's the correct voltage for it by default (actually I believe 0.900V is AMD's default). To be fair, I don't even know if I am needing mine set to 1.05V like I have it. It's just that previously when trying out speeds and timings, I seemed to have needed a smidge more to get it to POST. My using 1.05V was not really based on anything but a gut reaction to the Titanium (due to AGESA .0.6?) applying a voltage of 1.15V when on Auto. I had been just fine with .900V at 3200 so I couldn't imagine why it'd need to be higher, so I just picked a voltage in the middle of those lol

THAT being said, yes, my results are curiously higher than most people's that I'm coming across. I may have to go about updating it to see if it's a quirk of my version or what... Then again, looking at your screenshots, we do in fact have the same BenchDLL, v4.3.741. Regardless, the other thing I find very curious is that you're running 3.9GHz, I'm running 3.8GHz, and yet my L1 speeds are quite a lot faster than yours as well; I broke 1TB/s, and it's reliably that result between runs and reboots.

IF you feel up for it, I'm curious whether it has anything to do with either my approach for running 3.8GHz, or something to do with the 38x multiplier. One thing I've seen mentioned once or twice on other sites is that W10 doesn't like the 1/4 multipliers. Whether or not that's true, or if that's somehow at play here, no idea. I presume that running at 36x isn't stable for you? If it's stable enough to get a run of benchmarks in, maybe see if that has anythi........ you know what..... o_0 This just smacked me out of nowhere... OK, so the "my approach for 3.8GHz" is that I use the K17TK program (I had made a thread about it, prolly on page 2 of this subforum), as I've not taken to BIOS clock adjustment yet. That being said, my early testing was at the 36.25 and 36.5 multis. There have been a couple times where I've adjusted it to those and suddenly my scores have plummeted in AIDA. As in my CPU Queen, for example, lower than even my default runs. I had been chalking it up to me screwing around with other results, but now I'm wondering if that's not it at all and that Ryzen has some sort of hiccup with them? So we know that the caches run full speed, but what if they are somehow unable to run at the same 1/4 multi steps that the CPU can, and it's causing cache misses? I'm going to check right now and see what happens if I use 37.75x instead of 38x. After that, I'll see what setting 38x in the BIOS does, instead of using the utility from Windows.


[Side Note: Computer failed to wake up from sleep with the SCL set to 2, but unfortunately I had changed tCWL to 12 (from 14) as well, so it could be that. I'll put it back where I had it and leave SCL where it is. Was having restart issues, too... Figured what the hell, dropped the Training voltage from 1.5V to 1.45V and PRESTO! Booted no problem. I might have to try that for 3466 as well... Speaking of which, why aren't you using 3466 instead of 3200+109BClk??]
 
I have a bug where, if I unplug it or cut power with the outlet strip, my memory defaults away from the XMP profile. Will try this as I am going out.
Edit: It went back to 2133 upon returning with the power strip off. MSI B350 Tomahawk, 1.62 beta BIOS.
 
I'm 98.738% sure that CPU_SoC is the alternate name for CPU NB, as that's the correct voltage for it by default (actually I believe 0.900V is AMD's default). To be fair, I don't even know if I am needing mine set to 1.05V like I have it. It's just that previously when trying out speeds and timings, I seemed to have needed a smidge more to get it to POST. My using 1.05V was not really based on anything but a gut reaction to the Titanium (due to AGESA .0.6?) applying a voltage of 1.15V when on Auto. I had been just fine with .900V at 3200 so I couldn't imagine why it'd need to be higher, so I just picked a voltage in the middle of those lol

THAT being said, yes, my results are curiously higher than most people's that I'm coming across. I may have to go about updating it to see if it's a quirk of my version or what... Then again, looking at your screenshots, we do in fact have the same BenchDLL, v4.3.741. Regardless, the other thing I find very curious is that you're running 3.9GHz, I'm running 3.8GHz, and yet my L1 speeds are quite a lot faster than yours as well; I broke 1TB/s, and it's reliably that result between runs and reboots.

IF you feel up for it, I'm curious whether it has anything to do with either my approach for running 3.8GHz, or something to do with the 38x multiplier. One thing I've seen mentioned once or twice on other sites is that W10 doesn't like the 1/4 multipliers. Whether or not that's true, or if that's somehow at play here, no idea. I presume that running at 36x isn't stable for you? If it's stable enough to get a run of benchmarks in, maybe see if that has anythi........ you know what..... o_0 This just smacked me out of nowhere... OK, so the "my approach for 3.8GHz" is that I use the K17TK program (I had made a thread about it, prolly on page 2 of this subforum), as I've not taken to BIOS clock adjustment yet. That being said, my early testing was at the 36.25 and 36.5 multis. There have been a couple times where I've adjusted it to those and suddenly my scores have plummeted in AIDA. As in my CPU Queen, for example, lower than even my default runs. I had been chalking it up to me screwing around with other results, but now I'm wondering if that's not it at all and that Ryzen has some sort of hiccup with them? So we know that the caches run full speed, but what if they are somehow unable to run at the same 1/4 multi steps that the CPU can, and it's causing cache misses? I'm going to check right now and see what happens if I use 37.75x instead of 38x. After that, I'll see what setting 38x in the BIOS does, instead of using the utility from Windows.


[Side Note: Computer failed to wake up from sleep with the SCL set to 2, but unfortunately I had changed tCWL to 12 (from 14) as well, so it could be that. I'll put it back where I had it and leave SCL where it is. Was having restart issues, too... Figured what the hell, dropped the Training voltage from 1.5V to 1.45V and PRESTO! Booted no problem. I might have to try that for 3466 as well... Speaking of which, why aren't you using 3466 instead of 3200+109BClk??]

The reason I am using the BCLK rather than just 3466 is that I get better results, but more importantly it is stable. Messing around with CPU_SOC to move the memory hole so I could use 3333 was tedious to say the least, and the results were not good. 3466 results are similar to 3500, maybe a little bit slower, but stability sucks, boot-up issues, etc. BCLK so far has given me a stable configuration. Now, if I go above 110 MHz on the BCLK my NVMe M.2 drive will not boot Windows, so I am limited to that. I wanted to use the 3333 strap and get to 3600 maybe, but 3600 has been elusive to achieve, and in the scheme of things it is more just a number than anything meaningful.
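For anyone following the strap-plus-BCLK approach, the effective data rate is just the strap scaled by the reference clock (nominally 100 MHz); here's a quick sketch, with example values picked to match the 3200 strap + ~109 BCLK discussed above:

def effective_data_rate(strap_mts, bclk_mhz):
    # Memory straps assume a 100 MHz reference clock, so the resulting
    # data rate scales linearly with BCLK.
    return strap_mts * bclk_mhz / 100.0

print(effective_data_rate(3200, 109))  # 3488.0 MT/s -> lands near the 3466 strap
print(effective_data_rate(3200, 110))  # 3520.0 MT/s -> at the stated NVMe boot limit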

I've done a clean install of Windows Creators Edition, fresh from Microsoft and going with what I've found as stable. Next is to install my custom water loop etc. and go for CPU speed which from 3.9ghz would probably not be much more and more wasting of time. But I will do that anyways :D.

I am using the most recent AIDA64 beta, which gives more information on the BIOS and AGESA in the Mem and Cache tests.
 