The 10700K is the best bang for the buck, with the 10900K being the best no-holds-barred option.

Would concur on this, because the next-gen consoles are 8 cores/16 threads.
BTW, have you seen the latest DDC-fitting CPU block/pump/res setups from Barrow? Perfect for ITX/SFF.
https://m.aliexpress.com/item/10000...tedetail&spm=a2g0s.9042311.0.0.19ce4c4dF6G4ed
Ok, going for the 10900K unless AMD has a better option?

10700K for bang for buck.
10900K for all out.
> Ok, going for the 10900K unless AMD has a better option?

I would say wait if you can, because PCIe 4.0 is a thing and Comet Lake doesn't have it.
> I would say wait if you can, because PCIe 4.0 is a thing and Comet Lake doesn't have it.

Ok, thank you, I will. Any idea how long of a wait?
The 2080 Ti already maxes out PCIe 3.0 x16.
> Ok, going for the 10900K unless AMD has a better option?

AMD does not currently have a better option if gaming is all you care about.

But AMD supports PCIe 4.0 if I am not mistaken, and Intel 10th-gen CPUs do not. I could be wrong here, but this could matter greatly.
> The 2080 Ti already maxes out PCIe 3.0 x16.

> But AMD supports PCIe 4.0 if I am not mistaken, and Intel 10th-gen CPUs do not. I could be wrong here, but this could matter greatly.

Please back up this statement with figures:
https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/3.html

Yes, but you are saying it is saturating 3.0 x16. How can you know that when the card is only capable of 3.0 x16?

With PCIe 4.0, the 3070 (2080 Ti level, if marketing is to be believed) might be able to leverage more bandwidth.
As for the performance impact from PCIe 4.0, we’re not expecting much of a difference at this time, as there’s been very little evidence that Turing cards have been limited by PCIe 3.0 speeds – even PCIe 3.0 x8 has proven to be sufficient in most cases. Ampere’s higher performance will undoubtedly drive up the need for more bandwidth, but not by much. Which is likely why even NVIDIA isn’t promoting PCIe 4.0 support terribly hard (though being second to AMD here could very well be a factor).
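For context on the bandwidth debate above, the raw link rates are easy to work out: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, and PCIe 4.0 doubles the transfer rate. A quick sketch (the function name and rounding are mine, not from any post here):

```python
# Approximate per-direction bandwidth of a PCIe link.
# PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s; both use 128b/130b encoding.

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for one direction of a link."""
    gt_per_s = {3: 8.0, 4: 16.0}[gen]   # transfers per second per lane (GT/s)
    efficiency = 128 / 130              # 128b/130b line encoding overhead
    gbit_per_lane = gt_per_s * efficiency
    return gbit_per_lane / 8 * lanes    # convert bits to bytes, scale by lanes

if __name__ == "__main__":
    print(f"PCIe 3.0 x16: {pcie_bandwidth_gbps(3, 16):.2f} GB/s")
    print(f"PCIe 4.0 x16: {pcie_bandwidth_gbps(4, 16):.2f} GB/s")
    print(f"PCIe 3.0 x8 : {pcie_bandwidth_gbps(3, 8):.2f} GB/s")
```

So 4.0 x16 is roughly 31.5 GB/s versus roughly 15.75 GB/s for 3.0 x16, which is why the scaling review linked above matters: if a card barely loses performance even at 3.0 x8 (about 7.9 GB/s), it is nowhere near saturating 3.0 x16.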
I’m going to differ from other opinions here:
Get a 3700X
Get a Gigabyte X570 Aorus Ultra
Get 32 GB of RAM in two sticks
2 of those Sabrent 1-2 TB NVMe drives
A Fractal Design Define R6
And search the used market for a 2080 Ti
The OP asked what the best gaming CPU is. He said nothing about value. He also asked about 4K and 8K gaming. The 10900K is his best option. It's not the cheapest, but it is the best.
Also, for 4K or higher, the 2080 Ti isn't going to be the best option. At that resolution, the 2080 Ti can barely do 60 FPS in some games. That means 8K is a no-go unless you like running games in potato mode.
The 3080 or 3090 will be better. We just aren't sure by how much yet.
Haven't checked CPU performance for some time now, but damn, AMD's best still isn't even at i5 level in gaming. Just going by the crazy hype alone, you would think that AMD is the one on top, lol!
Wait for Zen3.
I have decided to wait for Zen 3, along with a revision to the SFF cases to accommodate the 3090. I am hoping for an n1 rev7.

Like others said already, we're close enough now to Zen 3 that it would be senseless to go all out on what's available today when soon we will have faster options.
Something to keep in mind between Intel and AMD is USB endpoint limitations if you use quite a few USB ports, especially if you use VR. Unless Intel has changed something, their USB 3.x controllers are limited to a maximum of 96 endpoints per controller, while AMD's USB 3.x controllers are limited to a maximum of 254 endpoints per controller. This is one of the reasons I switched to AMD for my current build project (among a few other things). With up to 32 endpoints being consumed per device, those resources can be eaten up pretty quickly. It's something most people probably don't even consider when building out a new machine.
On my current Intel machine, a 6700K, I have frequently run into USB resource errors ever since I bought my Valve Index. Hubs don't help; they actually make it worse, since they consume at minimum one endpoint per port on the hub whether it is used or not. Even ports used just for RGB or charging will eat up some of those endpoints. One workaround is to disable XHCI, but then your USB ports drop down to 2.0. Another option is to invest in an additional USB controller, but that doesn't work if you are doing an SFF or compact build like I tend to do.
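The arithmetic behind those resource errors is simple: the controller has a fixed endpoint budget, and a handful of composite devices can burn through it. A toy illustration; the 96/254 controller limits are the figures quoted above, but the per-device endpoint counts below are invented examples, not measured values:

```python
# Toy endpoint-budget check. Controller limits are the figures discussed
# above (Intel ~96 vs AMD ~254 endpoints per xHCI controller); the
# per-device endpoint counts are made up purely for illustration.

INTEL_LIMIT = 96
AMD_LIMIT = 254

devices = {
    "VR headset (composite device)": 32,  # hypothetical worst case
    "gaming keyboard": 8,
    "wireless mouse dock": 6,
    "7-port hub (1 endpoint per port)": 7,
    "UPS": 2,
    "game controller": 4,
}

used = sum(devices.values())
print(f"endpoints used:    {used}")
print(f"headroom on Intel: {INTEL_LIMIT - used}")
print(f"headroom on AMD:   {AMD_LIMIT - used}")
```

With one worst-case composite device and a few peripherals, more than half of a 96-endpoint budget is already gone, while the larger budget barely notices; that is the gap being described here.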
I would like to think that I don't have too many devices, and have even unplugged some devices to free up resources:
- Razer BlackWidow Elite (2 USB ports)
- Razer Basilisk wireless mouse/dock (1 USB port)
- Razer Firefly V2 mousepad (1 USB port)
- XBox 360 controller (1 USB port)
- Epson V500 Photo Scanner (1 USB port)
- 7 port USB 3.0 hub, only using two ports to charge Valve Index controllers (1 USB port)
- Valve Index (1 USB port)
- APC UPS (1 USB port)
- USB Hard drive dock or camera card reader (1 USB port, only plugged in when using)
I have been trying to find a whitepaper on this, but all I have found through my searching is a couple of tech blogs that discuss USB endpoint usage with XHCI, and some forum/Reddit posts of users discussing the USB endpoint counts between Intel and AMD and the problems they cause. I can't find a direct confirmation of whether current Intel chipsets still exhibit the problem or not.

I have had similar issues on my rig. However, I can't find any data supporting the Intel vs. AMD thing you mentioned on this front.
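If you want a rough look at your own endpoint usage on Linux, one approach is to count `bEndpointAddress` entries in `lsusb -v` output, since each interface descriptor lists its endpoints there. A minimal sketch; it parses captured text rather than shelling out, and the sample descriptor fragment below is fabricated for illustration:

```python
import re

def count_endpoints(lsusb_verbose_output: str) -> int:
    """Count endpoint descriptors in `lsusb -v` text output."""
    return len(re.findall(r"bEndpointAddress", lsusb_verbose_output))

# Fabricated fragment of `lsusb -v` output, for illustration only:
sample = """
      Endpoint Descriptor:
        bLength                 7
        bEndpointAddress     0x81  EP 1 IN
        bmAttributes            3
      Endpoint Descriptor:
        bLength                 7
        bEndpointAddress     0x02  EP 2 OUT
        bmAttributes            2
"""

print(count_endpoints(sample))  # -> 2
```

On a real system you would feed it the output of `lsusb -v` (e.g. via `subprocess.run(["lsusb", "-v"], capture_output=True, text=True).stdout`). Note this only counts what devices advertise; as noted above, hub ports can reserve endpoints on the controller even when nothing is plugged in, so the true reserved count can be higher.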