PBO doesn't really do a lot for the 7900x. There are no stepping changes as far as I know. The 7900 is just set up from the factory to be efficient at stock, with the ability to easily turn on PBO to have it perform on par with the 7900x. The 7900x could easily go the other way with an eco...
It would be much harder for Intel to cut power consumption in half since they are already leveraging the low frequency e cores.
The 12700 (non k) would likely need a 6+12 or 6+16 configuration instead of 8+8 with p core clocks reduced on top. This would likely take a significant performance...
Twice as efficient as a 7900x and 3x as efficient as a 13700k while still performing competitively.
The 7900 should easily dethrone the 5950x as the most efficient cpu at stock settings. I don't see Intel approaching anywhere near this even after they launch their non-k cpus.
Clockspeeds are higher, and it's 64-bit but gddr6, which uses more power.
It does use less power though, regardless of TDP.
https://www.techpowerup.com/review/gainward-geforce-gtx-1630-ghost/35.html
I didn't say they had to buy used. Had Nvidia just repackaged a 2060 as a 3050, it would have been a better card and still have been 'new'.
If we can get a repackaged and updated 3070 at xx60 prices, why not just release that instead of making the midrange wait for Nvidia to develop the...
That's a false equivalency, but I understand you are just trying to find an analogy.
If it's vehicles you want to use, it would be like Ford producing a 2018 F-150 Platinum with a 10" infotainment screen. Then in 2020, the Platinum gets a 12" screen while the Lariat owners are 'stuck' with...
Does it really make sense to even make a midrange, mainstream or budget card anymore? Just look at the last gen mainstream RTX 3050. The RTX 2060 midrange card from 2 generations ago is a better buy and likely cheaper to produce.
Would it make sense to design a budget 4030 when the 3050 will...
For the visual learners. Not only is Nvidia taking the lion's share, but discrete GPUs seem to be targeted at the elite only: unit numbers are going down across the board while these companies maintain their bottom line by targeting the high end.
It's easier and cheaper to build, package...
For the most part, USB-C is great. You youngsters don't know about the dark ages of mini-USB and micro-usb cables (still finding broken mini USB cables in my attic).
Still, I would much rather have USB-A on the charge side. Despite needing to flip it around 3 or 4 times to plug in...
If you search reddit and Twitter enough, you could find people saying the card is faster than the 4090, but of course, that's just noise, same as the 'only 30% faster than the 6900xt' crowd.
So why then bring it up? They are insignificant and people really only care about what is said on the...
Sheesh I almost forgot about Callisto Protocol.
"Hey guys, don't worry about the ungodly shader stuttering - these PC settings are WAY better than console!"
Yeah, yeah we get it - the console settings are not equivalent. It sounds like a broken record around here.
Fact of the matter is, someone playing on a $500 console then a $1500+ "gaming pc" will in fact have a better experience on the console.
It doesn't bode well for pc gaming is all...
It's only 37% over the 6950xt. Don't move the goalposts. AMD themselves said 50-70% over the 6950xt.
While it might get there eventually, nobody said just 30% over the 6900xt. If so, show links/screenshots.
Yeah, because no tech dorks have ever found issues with AMD software - no way those flawless AMD electrical engineers have ever released any software with any problems.
AMD has really struggled figuring out how to get sufficient bandwidth with their products. Case in point was Hawaii, which was 512-bit - something we've never seen since. Fury and Vega were attempts to get around this issue, as bus lanes are very pricey, but those cards had their own issues...
Right now the uplift of the 7900xtx over the 6950xt doesn't make a lot of sense given the raw specs: twice the gflops and far higher bandwidth - 960 GB/s vs 576 GB/s.
While we don't know the specs of the 7800xt, it's safe to assume that they should at least match the 6950xt. AMD should be...
Even with the "lack of spoiled milk" of Vega as fine wine triggers people, Pascal was still the better product. GCN 5 was a flop outdone only by Fury.
As the One X and PS4 pro had many years of game development ahead of them, AMD would have been better off refining GCN 4 until RDNA was ready...
Perhaps not from a purely gaming perspective, but the AMD cards seem to be more desirable years later for a myriad of reasons: 7970 vs 680, rx 580 vs gtx 1060, even rx 5700 vs rtx 2060
The 7900xtx is 35-40% faster than the 6950xt depending on the review. If you go balls out, you might get that +50%. Just hope electricity is cheap by you.
The 7900xtx vs the 4080 seems a lot like the HD 7970 vs GTX 680. AMD likely will have some fine wine improvements. Also, for those...
Not having encode ability is sort of a deal breaker for me as far as the 6400 goes, but if you need an AMD Low profile card, your options are very limited.
You are probably ok with just 8 GB of Ram for what you are doing.
Any money says the 4070 and 4070ti (both 504 GB/s) will be much closer together than the 4070ti to 4080 (717 GB/s), despite being a similar jump in gpu performance.
Bus lanes are expensive. Likely the reason why the 4080 uses extreme memory speeds, which are even faster than used on the 4090...
I think it would be if it wasn't for the large cache that the 40 series uses. Much like the RDNA 2/3. It also uses rather fast memory so overall bandwidth is close to the 3080.
Large cache only goes so far. A recent example of this is the 6950xt's performance. Gpu power is just 3% more than the...
Close on the 4060ti. The 4070 will have 35% more gpu compute and a whopping 75% more bandwidth, which is strange.
https://www.tomshardware.com/news/preliminary-geforce-rtx-4060-ti-specs-leak
Not sure why the 4060ti doesn't at least get gddr6x as bandwidth only matches a 1660ti.
The 7900xtx should be the 7900xt at $1100 and the 7900xt should be the 7800xt at $800. The product stack would have been better accepted while maintaining their bottom line.
TPU was closer to 40% but yeah. It is right at 50% faster than the 6900xt (and I know the slides said 50% faster than the 6950xt) so maybe we could see it hit that mark with some driver improvement. There are a few games where the 6950xt is just too close to make any sense.
One caveat for the Portal RTX review from TPU - their system was using just 16 GB of ram.
We have seen a review from Hardware Unboxed back in 2018 where going from 16 GB to 32 GB of ram would give a small performance bump. Now, that was in 4-year-old games and with a card that already had 11...
Good find, I guess that marries up well then. The 40 series cards don't seem to have such a big delta in tensor tflops over the 30 series cards as the 30 series cards had over the 20 series.
Perhaps the 4070 won't get 40 fps in Frogger with God rays while the 3090ti owners are sputtering...
Honestly though, this game should not be used as a gauge for upcoming vram requirements. This was simply an Nvidia jerk off session to show that their card is 100x faster than the current AMD competition.
The fact that the 2080ti can't even get 30 fps at 1440p with dlss says it all...
Yep just saw that. Holy smokes - full path tracing is expensive. I got an x value of 1.28 GB/mp and a y value of 5.5 GB. That's insanely high. Render resolutions for balanced dlss at 1080p, 1440p, and 4k are .70 mp, 1.24 mp, and 2.79 mp respectively. Using 1.28x + 5.5, we get an estimated 7...
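For anyone who wants to play with the numbers, here's the linear VRAM model from my post as a few lines of Python (the 1.28 GB/mp and 5.5 GB coefficients and the balanced dlss render resolutions are the ones quoted above; this is just a rough fit, not an official formula):

```python
# Simple linear VRAM model: usage = x * megapixels + y
# x = per-megapixel cost (GB/MP), y = fixed overhead (GB)
# Coefficients estimated for Portal RTX in the post above.
X_GB_PER_MP = 1.28
Y_GB = 5.5

def estimated_vram_gb(megapixels: float) -> float:
    """Estimate VRAM usage in GB for a given render resolution."""
    return X_GB_PER_MP * megapixels + Y_GB

# Balanced DLSS render resolutions (output 1080p / 1440p / 4K)
for label, mp in [("1080p", 0.70), ("1440p", 1.24), ("4K", 2.79)]:
    print(f"{label}: ~{estimated_vram_gb(mp):.1f} GB")  # ~6.4 / ~7.1 / ~9.1
```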
At least what we know for Nvidia, given the specs on the top tier models, mainstream 4000 series cards will likely be 128 bit or 8 GB, which is crazy considering how long the 580 has been around. The mainstream 4050 could be just 96 bit or 6 GB. Bandwidth demands of games have been rather high...
Continuation of this 4 year old thread...
https://www.techpowerup.com/review/the-callisto-protocol-benchmark-test-performance-analysis/5.html
Using the RTX 4090, calculated 'x' value of Callisto Protocol is .277 GB/Megapixel and 'y' value is 4.67 GB.
Adding RT bumps this to x=.340 GB/MP and y=...
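As a sanity check on how these x/y values are derived: with VRAM readings at two resolutions you can solve the slope and intercept directly. A quick Python sketch - the two "readings" below are just the stated non-RT coefficients applied at 1080p and 4k and rounded, not fresh measurements, so the fit should land back near x=.277 and y=4.67:

```python
def fit_vram_model(p1, p2):
    """Two-point linear fit: vram = x * megapixels + y.

    p1, p2 are (megapixels, vram_gb) readings at two resolutions.
    Returns (x, y): per-megapixel cost in GB/MP and fixed overhead in GB.
    """
    mp1, v1 = p1
    mp2, v2 = p2
    x = (v2 - v1) / (mp2 - mp1)   # slope: GB per megapixel
    y = v1 - x * mp1              # intercept: fixed overhead in GB
    return x, y

# 1080p = 2.07 MP, 4k = 8.29 MP; VRAM values consistent with the
# coefficients quoted above (rounded to two decimals).
x, y = fit_vram_model((2.07, 5.24), (8.29, 6.97))
print(f"x = {x:.3f} GB/MP, y = {y:.2f} GB")
```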