Don't listen to the FUD in this thread.
Do the upgrade for the 3080, then re-evaluate. Even then your best bet for max gaming perf will be to wait until after Zen 3 is launched.
A 4790K v a 9900K could not be a 40fps to 100fps jump. I don't doubt someone experienced that, but I would guess...
I booted into safe mode to run a few of the benchmarks and got "normal" results, so it must have been something with my Windows install.
Blanked the drive and reinstalled and it's working well now. Something must have been running in the background.
Yes I have; the initial results from booting with defaults were also below expected values. I didn't try running AIDA64 at stock, but Cinebench R15 was giving me a single-threaded score of like 189. Everywhere I look says Cinebench R15 should be in the 210-220 range even at stock speed.
So some of those are blanked out since I have a trial version (above image is my machine), and my mem latency and bandwidth seem really low compared to what this CPU is supposed to do. Performance is low too; I'm only getting 206 single-threaded in R15. I've looked at the review over at guru3d...
TR uses a much larger package. TR3 will likely be able to support 4 chiplets.
It will be interesting to see if Zen 3x00 will support 2 chiplets for up to 16 cores. My gut says yes, but we'll have to wait and see.
News flash: the 9900K is an 8-core. But AMD being within 1% is crazy good news. Now the long wait for the actual product. Also the short time it will be available before Intel has their 10nm desktop chips out.
Such a bummer it's not Q1/Q2 availability but more like early Q3. I doubt you'll...
What a bummer, 2080 performance at 2080 prices without the 2080 feature set.
The only people who would really be interested in such a card are the ones who need the 16GB of RAM.
People are angry because they are angry, who cares. Rage is the new normal, I couldn't care less.
The 2060 outperforms the 1070ti for $100 less, so there is that. The 2070/80/ti don't represent a good value, and if you were in the market for an upgrade you should've bought a 10x0 card when they were...
I've been burned by assuming the quoted specs are the real thing before. Hell, I've been burned by believing the on-stage demo of the product is realistic (5GHz on air for Haswell?).
But yes: release dates, MSRP, official clocks, and core counts. Very exciting stuff.
AMD keynote is tomorrow at 9am, that's when we get the business.
While AMD's moments on top have been few and far between, the situation here could not be more ripe for them to pull ahead. If they deliver I may seriously be looking at my first AMD build since 2004 (Opteron 165 FTW).
Seems a little hard on nVidia. They are trying to push forward and sell something besides resolution and framerate. Ray tracing is the first real image quality improvement technology we've seen since shaders debuted. I know running Destiny 2 at 165fps locked on my 1440p monitor wouldn't do...
Sure, but they'll be faster than my 1080ti by a really significant margin, as opposed to the little bump in raster performance the 20x0 cards have. There will also be actual ray tracing games to play. I don't think this first round of games is enough content to really push me to spend money on these cards.
When you look at the core count v clock speed for the 20x0 v 10x0, these cards look like 10-30% bumps + the new RTX stuff (quick math below).
I think I'll wait for the 30x0 gen cards with actual games in the wild I'm trying to run.
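If you want to put rough numbers on that, here's some napkin math. Core counts are the published CUDA core counts, the boost clocks are approximate reference figures, and this deliberately ignores any per-core architectural gains, so treat the output as a ballpark only:

```python
# Back-of-the-envelope raster throughput: CUDA cores x boost clock.
# Core counts are published specs; clocks are approximate reference
# boosts. Ignores per-core architectural improvements entirely.
cards = {
    "GTX 1080":    (2560, 1.73),  # (CUDA cores, approx. boost GHz)
    "RTX 2080":    (2944, 1.71),
    "GTX 1080 Ti": (3584, 1.58),
    "RTX 2080 Ti": (4352, 1.55),
}

def raw_throughput(name: str) -> float:
    cores, clock_ghz = cards[name]
    return cores * clock_ghz

for old, new in [("GTX 1080", "RTX 2080"), ("GTX 1080 Ti", "RTX 2080 Ti")]:
    gain = raw_throughput(new) / raw_throughput(old) - 1
    print(f"{old} -> {new}: ~{gain:.0%} raw bump")
```

That lands both generational jumps in the mid-to-high teens before you count anything Turing does smarter per core, which is right in that 10-30% window.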
Anybody actually grab one of these?
Compared to other reviews, the [H] seemed to get better than expected results.
I think I'm ditching my h100i GTX. I'm 99% sure it just died and when I get the RMA I think I'm going to just craigslist it. IMHO it seems that water cooling is only worth it if...
OCCT would drive load temps up into the high 70s, low 80s. This is dead stock though. 40c/85c would be fine if I had it really overclocked.
I've got both Noctua NT-H1 and Arctic MX-4 at my disposal for compounds; they seem to perform similarly.
CPU is a 5820K, and I actually have 2 to test with and it seems to happen with both.
I'm running stock BIOS settings; the only changes are bumping the CPU fan to full speed and turning RAID on.
I'm getting a 40c+ idle temp even when the voltage drops to 0.725v.
I've got a Corsair H100i GTX and I've...
So first off, the new patch doesn't enable piracy. The guy who wrote the Revive patch decided the only way to get the Revive hack to work was full DRM circumvention, so he did.
1) Revive still works, no change.
2) Games still get cracked, not really news.
The market will sort out whether or...
I went 5820K which I managed to get for $280 and I picked up a mobo for right around $200.
Not bad considering I get 2 more cores, 4 more threads, and a giant L3 cache.
The X99 is pretty comparable to the Z170 platform actually.
DDR4 check.
PCIe 3.0 check.
Bootable NVMe check.
USB 3.1 check.
The only real difference is the setup of the PCIe lanes.
A 5930K or better provides the best PCIe setup: 40 PCIe 3.0 lanes straight to the CPU.
The 5820K only has...
It also has the 40 lanes of PCIe 3.0 hanging off the CPU.
So if you are planning on doing >2 cards for SLI, this will be the much better choice.
For standard 2-card SLI, the 5820K should have plenty of PCIe lanes.
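Napkin math on the lane budgets, assuming the usual x16/x8 slot splits (actual slot wiring varies by motherboard):

```python
# CPU PCIe 3.0 lane budgets on Haswell-E, checked against common
# GPU slot configurations. The slot splits are typical assumptions;
# actual wiring depends on the motherboard.
CPU_LANES = {"5820K": 28, "5930K": 40}

CONFIGS = {
    "2-way SLI (x16/x8)":     [16, 8],
    "2-way SLI (x16/x16)":    [16, 16],
    "3-way SLI (x16/x16/x8)": [16, 16, 8],
}

for cpu, budget in CPU_LANES.items():
    for name, slots in CONFIGS.items():
        verdict = "fits" if sum(slots) <= budget else "does NOT fit"
        print(f"{cpu} ({budget} lanes): {name} needs {sum(slots)} -> {verdict}")
```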
Uh, thanks for your pessimism I guess. Some of us are more [H] than others.
1.4v totally boots; it's certainly not an instakill.
1.35v seems to be stable with stress test temps around 85c.
So what's the consensus on the Processor Integrated VR Faults and Processor Integrated VR Efficiency Mode?
I seem to get the idea that I should turn these off, as both the BIOS and the XTU app say that it increases headroom, but what's the downside?
Yeah that's a bummer.
Using Cinebench as a reference, the new proc is 50% faster single-threaded than my old i7-920 @ 4.0GHz and 100% faster multithreaded.
I can live with that.
Not quite as good as my last upgrade, where I went from an Opteron 180 @ 2.5GHz to the i7-920, but beggars can't be choosers.
I'm hoping, since 4.5 was so easy (it might be stable at less voltage too), that I get 4.6 with some tweaking.
Temps were mid 70s with 1.35v @ 4.6 until it locked up.
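For anyone wondering where those percentages come from, it's just the ratio of the Cinebench scores. The scores below are placeholders to show the math, not my actual runs:

```python
# Percent speedup between two benchmark scores: (new / old - 1) * 100.
# Placeholder scores for illustration, not actual Cinebench results.
def speedup_pct(old_score: float, new_score: float) -> float:
    return (new_score / old_score - 1) * 100

print(f"single-threaded: {speedup_pct(100, 150):.0f}% faster")   # -> 50% faster
print(f"multithreaded:   {speedup_pct(500, 1000):.0f}% faster")  # -> 100% faster
```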
I've got a 5820K in a box on my desk, and Newegg is dropping off an ASRock Extreme4 and 16GB of G.Skill 3000 today.
I plan on going 1.3v and letting the cards fall where they may. 4.4 and I'll be a little disappointed, 4.5 would be a nice hit, 4.6 is what I am realistically hoping for. If I get 4.7 or...
I keep seeing the initial reviews saying 1.4v is maybe too much.
Now that the chip's been out for almost a year, anybody torch one of these bad boys using 1.4v?
This absolutely makes sense.
4K is GPU limited.
To get 1080p-ish performance you might need 3 cards.
The only platform that does well with 3+ cards is X99.
But if you're going to max out at 2 cards, then either platform should work.
Not even a little bit.
DX12 is going to reduce CPU usage.
Perhaps DX12 games will soak up the freed-up CPU headroom to make more draw calls, but all that is parallel to UHD/4K resolution (side note: 4K = 4096x2160, UHD = 3840x2160; pixel math below).
So I think trying to push 4K is still going to be drastically...
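Just to put numbers on that side note, here's the pixel arithmetic. This is the whole reason pushing 4K/UHD is a graphics card problem rather than a CPU problem:

```python
# Pixel counts relative to 1080p; UHD is exactly 4x the pixels of
# 1080p, and DCI 4K is slightly more than that.
resolutions = {
    "DCI 4K": (4096, 2160),
    "UHD":    (3840, 2160),
    "1440p":  (2560, 1440),
    "1080p":  (1920, 1080),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1080p)")
```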
Because he heard it somewhere too.
4K is your graphics card pure and simple.
8x/8x SLI performance is indistinguishable from 16x/16x SLI.
Only if you are doing 3+ video cards would you need X99 to get better performance from your GPUs for 4K.
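The napkin math backs that up, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane (8 GT/s with 128b/130b encoding; treat it as an approximation):

```python
# Usable PCIe 3.0 bandwidth per link width. ~985 MB/s per lane is an
# approximation (8 GT/s, 128b/130b encoding, before protocol overhead);
# games rarely saturate even the x8 figure, which is why x8 vs x16
# doesn't show up in benchmarks.
MB_PER_LANE = 985

for lanes in (8, 16):
    print(f"x{lanes}: ~{lanes * MB_PER_LANE / 1000:.1f} GB/s per card")
```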
http://www.intc.com/priceList.cfm
5820K - $389
6700K - $339
The 5820K is supposed to be $50 more than the 6700K, according to Intel. Once it hits the channel there is not much Intel can do.