AMD's Game Mode enables the Legacy Compatibility toggle; it's in their tweaking tool, and they provide info on what it does.
However, no reviewer worth their salt doing a fair, stock comparison would go and install a tweaking utility and tweak the CPUs to operate outside of spec, for one...
Tech enthusiasts who have followed this stuff for a long time have always talked about all the shady bullshit that Intel and NVIDIA do.
Remember The Way It's Meant To Be Played? Yeah, that's NVIDIA forcing other GPUs to run code that they optimize with their own engineers at the studios. It's...
Don't believe NV PR for one second.
They first denied there was branding exclusivity, yet in this latest blog defending themselves, they said their goal was to make GeForce branding clearly distinct from the others. Liars.
Now they say they scrapped the program? Really? Bullshit.
This is a public...
Anything beyond 1.4V vcore on these Ryzens needs water cooling. On typical air, I wouldn't go above 1.4V to be safe.
This refresh is decent: well priced, great stock cooler, hard to argue with that. It's not a huge leap though; I'm waiting for the 7nm stuff, 14nm is getting way old now.
It's obvious what is happening when you read the Computerbase article and the interview with Gigabyte. They asked why the gaming brand is no longer on the Radeon cards, and Gigabyte said it's "not focused for gaming/gamers"...
It's clear as day that NV's arm-twisting has taken effect...
Ryzen+ isn't gonna be game changing, since Intel has given consumers cheaper 6c/12t with Coffee Lake. That defeated the purpose of Ryzen being the affordable high-core-count option, and Intel still has an edge in low-thread workloads due to their clock speeds.
Ryzen+ is basically AMD being more competitive...
No shock here at all that NV is, yet again, being anti-consumer.
GPP is to hardware vendors what GameWorks is to software studios.
Work with NV to implement their middleware binary DLLs into games, to the exclusion of working with other hardware vendors, and receive benefits such as...
No mention of the data logging that runs in the background for GFE? Telemetry of gameplay data & habits, sent straight to NVIDIA.
Under their privacy EULA clause, they are allowed to share all of the collected data with their partners.
All of those points you raise have been discussed elsewhere in the tech sphere long before they materialized.
I mean stuff like cut-down Vega: wtf do you think has been happening for the past two decades of GPU binning? LC Vega we saw ages ago in leaked images, and after the Fury X, well, too easy...
Is this what it has come to?
A random troll posting made-up BS, with himself as the source, and we're supposed to take it seriously?
This is a tech forum. If you make bold claims without a source, you deserve to be BANNED.
The graphics quality is just stunning for the kind of performance you can get out of modest hardware; it's quite crazy how optimized the idTech 6 engine is.
The use case is that all 15W notebooks will have a lot more to gain from RR than from sticking with Intel + HD-whatever-#, because Zen at low power leverages its perf/W advantage for higher boost clocks. Without the clock speed advantage that Intel has on desktop, and with a single-CCX design, RR is superior to...
Timeframe is important. This is an early 2018 product. There's no rush to market like there was with RX Vega. HBM2 yields are maturing as we speak. The Radeon chip supplied looks to be ~200mm², so basically a small chip; yields will be a non-issue.
Neither company would commit to this just for a low-volume product...
I have to fully disagree with Semiaccurate here. Intel & AMD would not waste R&D budget on a low volume part.
He's always got an axe to grind against Intel, and it has clouded his judgement. This solution is going to replace all of the dGPU notebook designs in 2018, with only the very boutique...
Remember, Ryzen didn't beat Intel either. 7700K still supreme.
The 1080Ti will still be supreme, but once Vega is released, gamers will have more choices instead of auto-defaulting to the 1070 and 1080. Heck, they may even ponder going with a 1440p FreeSync monitor instead of a GTX +...
Any tech forum poster already knows that the market is still full of plebs on very old hardware who upgrade over time.
It's only when they're being purposefully misleading and hostile that they bring up such claims to belittle AMD for bringing competition.
Let's say Vega 56 trade...
When it trades blows, it wins some and loses some. It's not a hard concept to grasp.
For example, if Vega is suddenly much faster in GTA V or Gears of War 4 vs the 1070/1080, that would be a shock right there.
If it's that important and the gameplay experience is superior with FS/GS, why is it that reviewers never actually talk about that in the end? They always show FPS charts and say GPU X or Y is better because of factors like price, thermals, power, etc. No serious reviewer has ever placed an...
Doom was also showcased first by NVIDIA on their stage at the Pascal launch. Its Vulkan implementation was also first optimized for NVIDIA GPUs. Wasn't that long ago, don't say you guys forgot.
AMD optimizations came later.
This is not true.
You would be right if you said most G-Sync monitors are better than most FS monitors. That's because FS is an open standard; monitor makers can do whatever they want, so some of them put a shit FS Hz range into it. G-Sync is controlled by NV, so the standards are higher.
But there are FS...
If you analyze the numbers, it's obvious Cinebench is not optimized for a 2-socket configuration.
Compare the results for a single Intel Xeon with a 2-socket config; the scaling is pretty awful. And that's on a released part that's mature.
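To make that concrete, here's the sort of back-of-the-envelope check I mean (the scores below are hypothetical placeholders, not numbers from any actual review):

```python
# Rough multi-socket scaling check (scores are hypothetical, not real review data).
def scaling_efficiency(score_1s: float, score_2s: float) -> float:
    """Fraction of the ideal 2x achieved when going from 1 socket to 2."""
    return score_2s / (2 * score_1s)

one_socket = 3000   # hypothetical Cinebench nT score for a single Xeon
two_socket = 5100   # hypothetical score for the same Xeon in a 2S config

print(f"2S scaling efficiency: {scaling_efficiency(one_socket, two_socket):.0%}")
# -> 85% of ideal here; anything well short of ~2x points at the benchmark, not the CPUs.
```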
If you want to be negative, this is the more interesting...
The most impressive part for me is that one EPYC delivers 128 PCIe lanes and 8-channel DDR4. That's just nuts for a 1S setup, where enterprises have to pay fees based on socket count. Intel neuters its 1S parts, forcing companies to pay more for 2S and, on top of that, more software costs, while their workloads do...
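Just to illustrate the socket-count math (the license fee below is a made-up placeholder; real per-socket pricing varies by software vendor):

```python
# Illustrative per-socket licensing comparison (all figures hypothetical).
LICENSE_FEE_PER_SOCKET = 7000  # made-up per-socket software fee, USD

def license_cost(sockets: int) -> int:
    return sockets * LICENSE_FEE_PER_SOCKET

# If one EPYC socket already gives you the lanes and memory channels you need,
# you avoid paying for a second socket's license entirely.
print("1S licensing:", license_cost(1))   # 7000
print("2S licensing:", license_cost(2))   # 14000
```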
If you live in a cooler climate, the extra power draw of your PC is a benefit, since it helps heat the room. If your rig outputs considerable heat, you can even forgo turning on a heater, at which point it's a pure win-win situation.
If you live in a warmer climate, it's the reverse. That extra heat...
How many games were tested, and what were they?
These summaries can change significantly when sample sizes are small and games get switched in and out.
TPU's recent GPU reviews, for example, have the Fury X roughly matching the 980 Ti (at stock, though obviously it OCs better) at 1440p, and faster at 4K.
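As a toy illustration of how much the game list matters (the relative-performance numbers below are invented, not TPU's actual results):

```python
# Toy example: how a relative-performance summary moves when one game is swapped.
# All numbers are invented for illustration only.
baseline = {"Game A": 0.98, "Game B": 1.05, "Game C": 0.92, "Game D": 1.01}

def summary(results: dict) -> float:
    """Simple mean of per-game relative performance (GPU X vs GPU Y)."""
    return sum(results.values()) / len(results)

print(f"Original summary:        {summary(baseline):.3f}")   # 0.990

# Swap one title for a game that heavily favours the other GPU.
swapped = {**baseline}
del swapped["Game D"]
swapped["Game E"] = 0.80
print(f"After swapping one game: {summary(swapped):.3f}")    # 0.938
```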
Doesn't seem to be an iGPU on the CPU.
But an MCM. The GPU seems to be 1536 SP with 4GB (HBM @ 800MHz) plus Intel's HD 630, so the SP count is read as 1720.
For clarification, L2 on the Intel HD 630 is 512KB shared, while L2 on each GCN SP is 16KB; it adds up to 528KB.
A 1536 SP GCN would be 24 CU. HD 630...
I think you are vindicated, Kyle.
The engineering sample of this monstrosity has shown up on SiSoft and GFXBench. It's on the main page of VideoCardz.
Intel CPU + Radeon gfx9 (Vega architecture?)... some weird CU and 1720 SP counts (might not be reading right, or some rather strange...
This iMac Pro could have been $1K cheaper with 16c/32t Threadripper instead...
That's the REAL cost of Thunderbolt compatibility right there. $1,000. O_o
If you're talking about memory bandwidth efficiency, that's the tile-based rasterization technique NV has used since Maxwell, keeping things in the L2 cache instead of wasting VRAM bandwidth.
So if Vega uses that, along with its discard features, and the other L2/ROP change, it...
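For anyone wondering what "keep it in cache instead of going to VRAM" actually means, here's a very rough conceptual sketch of the binning idea; it's not NVIDIA's or AMD's actual implementation, just the general technique in plain Python:

```python
# Conceptual sketch of tile-based (binned) rasterization.
# Real hardware does this with fixed-function units and the L2 cache; the point is
# simply that each screen tile's pixels are touched while they stay on-chip.
TILE = 16  # tile size in pixels (hypothetical)

def bounding_tiles(tri, width, height):
    """Yield the (tx, ty) of every tile the triangle's bounding box overlaps."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
    y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
    for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
        for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
            yield (tx, ty)

def bin_triangles(triangles, width, height):
    """Pass 1: sort triangles into per-tile bins instead of rasterizing immediately."""
    bins = {}
    for tri in triangles:
        for tile in bounding_tiles(tri, width, height):
            bins.setdefault(tile, []).append(tri)
    return bins

def rasterize_binned(triangles, width, height):
    """Pass 2: process one tile at a time so its colour/depth data can live in a
    small on-chip buffer and only gets written out to memory once per tile."""
    framebuffer = {}
    for tile, tris in bin_triangles(triangles, width, height).items():
        tile_buffer = {}                        # stand-in for the on-chip tile cache
        for tri in tris:
            tile_buffer.setdefault("covered_by", []).append(tri)  # shading placeholder
        framebuffer[tile] = tile_buffer         # single flush to "VRAM" per tile
    return framebuffer

tris = [[(1, 1), (30, 4), (10, 28)], [(40, 8), (60, 12), (50, 40)]]
print(sorted(rasterize_binned(tris, width=64, height=64)))  # which 16x16 tiles got touched
```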
Why do you think that?
Weaker ROPs would show up as a relatively larger performance drop-off as resolution increases.
AMD GPUs have historically been the opposite. For example, since the 7970, 290X, etc., they suffer less performance drop-off as resolution increases.
Fury X was a prime example of this...
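To show what I mean by "relative drop-off" (the FPS numbers below are made up purely to illustrate the metric):

```python
# Relative performance loss going from 1440p to 4K (FPS figures are made up).
def dropoff(fps_1440p: float, fps_4k: float) -> float:
    """Fraction of performance lost when moving to the higher resolution."""
    return 1 - fps_4k / fps_1440p

gpu_a = dropoff(100, 55)  # hypothetical ROP/bandwidth-limited card
gpu_b = dropoff(95, 60)   # hypothetical card that scales better with resolution

print(f"GPU A loses {gpu_a:.0%} at 4K, GPU B loses {gpu_b:.0%}")
# GPU B loses the smaller share, i.e. less relative drop-off at high resolution.
```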
Quality = ??
G-Sync = You pay a big premium for ?? (There have been FS 30-144Hz monitors using the same display panel as their G-Sync counterparts for years now.) This isn't actually an advantage. It's a major disadvantage, as you pay more for what is essentially FREE because NV refuses to support the industry...
Probably; such an APU only makes sense for boutique and mITX builds. Still, they could create an entirely new niche of overpowered APUs, with HBM2 freeing them from system RAM bandwidth limitations.
Actually now that I think about it, it makes sense: notebooks, ultrabooks especially, and NUCs. Even Intel...
That would be nice if Intel had eDRAM on their regular APUs. But Intel figured it was too expensive to keep doing such a design. Their eDRAM-equipped Crystal Well APU was ridiculous; an entire second die was devoted to the eDRAM.
2GB of HBM2 should serve well with Vega's HBCC for any APU. Keep...
This one is pretty simple. A Vega APU should be much more efficient with bandwidth thanks to 3 changes:
1. Better primitive discard/culling steps in their new geometry engine.
2. A binning cache for the rasterizer: less traffic between the APU and system RAM, more on-chip cache traffic.
3. ROPs a...
$999 would be low by Intel's standards for such a CPU.
We're looking at possibly $1499 or so, and the 12-core part at $1999... unless Intel has learnt their lesson and decides to price it reasonably.
You should look at it again.
The P100 is HBM2; there's no GDDR + PCIe version. It's simply P100 + PCIe (Xeon) or P100 + NVLink (IBM P8).
The K40 cannot handle datasets bigger than its VRAM limit; this is why it failed the workload at 28.9GB.
What NV is demonstrating here is that the P100 can handle larger...
Why is a 58.6GB dataset presented by NVIDIA an unrealistic use case or the equivalent of a "power virus"? That's not much bigger than the GPU's VRAM when we put it into context against a 512 TB virtual address space.
Your other example doesn't list the actual dataset size. Because the K40 is actually...
Are you pretending to be stupid? Vega can't run that CUDA test. Nor would it need to, to demonstrate its capability to easily handle multi-terabyte datasets, for which its own demos show a major performance gain.
On the other hand, the P100's demo, from NVIDIA themselves, shows it tanking badly...