Arc A770 Beats RTX 4090 In Early 8K AV1 Decoding Benchmarks

Because 8K AV1 is everywhere and so commonplace!

And by the time it really is (but it already is!) there won't be better GPUs (lower power, more efficient, better playback)!

Talk about try-hard benchmarking.....

How does the RTX compare to Arc in the "which GPU is manufactured by Intel" benchmark? Eager to see the results!
 
You know, I've been operating workstations for decades now and I don't even know what a workstation GPU load looks like.

It's mostly CPU stuff, at least in my applications. Any old video card that can output a 2D desktop usually does the trick.

Totally dependent on what the workstation is for: code compiling/3D rendering (CAD or modeling)/video editing/AI/Simulations?

It doesn't really mean the beancounter's Microsoft Access and Excel workstation anymore.
 
These fail... nice that they're good at something.

What it tells me is... IF Intel managed to figure out what was actually wrong with the 700s before the Battlemage work was complete (I have a feeling their BM work was already done before they really had time to figure out what wasn't working with their architecture), then their next cards might be something Nvidia and AMD should actually worry about. Intel gen 2 might actually be decent. If they don't cancel consumer cards, gen 3 might even be real competition.
 
You know, I've been operating workstations for decades now and I don't even know what a workstation GPU load looks like.

It's mostly CPU stuff, at least in my applications. Any old video card that can output a 2D desktop usually does the trick.
I have lots of workstations that do CAD, Animation, various rendering tasks, somebody is always doing some sort of AV encoding on a few boxes. I have a class that is upscaling and restoring old VHS tapes or film reels from the 60s.
90% of those run Nvidia with a smattering of AMD here and there, but those boxes are problematic and destined for recycling. The Intel GPUs could be a contender for many of them.

I am waiting on info on the Arc Pro A series to see what my options there are. The Nvidia tax is getting heavy.

I don't like AMD's workstation offerings; I always need to worry about updates breaking things on those boxes (and they frequently do), and qualifying updates is a PITA. On the Nvidia ones we honestly have automatic updates enabled and it hasn't been an issue. But I would like an option for any case where CUDA isn't 100% required, and if the Intel Pro A's are a viable alternative then you'd better bet I'd put them in the running.
 
Totally dependent on what the workstation is for: code compiling/3D rendering (CAD or modeling)/video editing/AI/Simulations?

It doesn't really mean the beancounter's Microsoft Access and Excel workstation anymore.
I have lots of workstations that do CAD, Animation, various rendering tasks, somebody is always doing some sort of AV encoding on a few boxes. I have a class that is upscaling and restoring old VHS tapes or film reels from the 60s.
90% of those run Nvidia with a smattering of AMD here and there, but those boxes are problematic and destined for recycling. The Intel GPUs could be a contender for many of them.

I am waiting on info on the Arc Pro A series to see what my options there are. The Nvidia tax is getting heavy.

I don't like AMD's workstation offerings; I always need to worry about updates breaking things on those boxes (and they frequently do), and qualifying updates is a PITA. On the Nvidia ones we honestly have automatic updates enabled and it hasn't been an issue. But I would like an option for any case where CUDA isn't 100% required, and if the Intel Pro A's are a viable alternative then you'd better bet I'd put them in the running.

Mine is mostly 2D stuff. The occasional CAD work, but I have never found Solidworks or Pro/E to be particularly hard on the GPU. At least not for my relatively simple and small sterile single use disposable medical product designs. I can do that stuff on a laptop integrated GPU.

I do occasionally encode video, but I've always done that on the CPU, and haven't really seen the benefit of moving to a GPU encode.

Maybe my understanding is antiquated at this point, but I always got the impression that while GPU encodes can be fast, they have fewer codec options, aren't compatible with all applications, and can compromise on quality, and it's better to just brute force it with the CPU anyway.

Handbrake doesn't even fully load up the 24C/48T Threadripper with most codecs and resolutions. Not quite sure what the bottleneck there is. Might just be inefficient code. At higher resolutions, I get more CPU load out of it, but not at lower resolutions.
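
For anyone curious, here's a rough sketch of how I'd time the two approaches with ffmpeg driven from Python. This is just an illustration: the input clip, preset, and quality numbers are placeholders, and it assumes an ffmpeg build with both libx265 and Intel Quick Sync (QSV) support.

# Time a software (libx265) encode against a hardware (QSV) encode of the same clip.
# The source file and quality settings below are placeholders, not recommendations.
import subprocess, time

SOURCE = "sample_4k.mkv"  # hypothetical test clip

def run_encode(label, codec_args, output):
    cmd = ["ffmpeg", "-y", "-i", SOURCE, *codec_args, output]
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"{label}: {time.time() - start:.1f} s")

# CPU (software) encode: slower, but the most tunable and broadly compatible
run_encode("CPU x265", ["-c:v", "libx265", "-preset", "medium", "-crf", "20"], "out_cpu.mkv")

# GPU (hardware) encode: Intel Quick Sync HEVC, much faster, usually larger files
run_encode("QSV HEVC", ["-c:v", "hevc_qsv", "-global_quality", "22"], "out_qsv.mkv")

Comparing the resulting file sizes and encode times side by side (and eyeballing or VMAF-scoring the outputs) is what would actually settle the quality-vs-speed question.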
 
The Intel cards seem pretty bad at Solidworks. Most gaming cards are at least decent in this one.

https://techgage.com/article/intel-arc-a750-a770-workstation-review/2/

For those who don't like to click, the A770 gets 41.2 fps in Solidworks, an RTX 3060 gets 195.9, and a 6600 XT gets 192.3. Intel isn't even remotely close to having their act together on drivers.
I should probably clarify a bit: this was for the Solidworks portion of the SPECviewperf 2020 test.
 
The Intel cards seem pretty bad at Solidworks. Most gaming cards are at least decent in this one.

https://techgage.com/article/intel-arc-a750-a770-workstation-review/2/

For those who don't like to click, the A770 gets 41.2 fps in Solidworks, an RTX 3060 gets 195.9, and a 6600 XT gets 192.3. Intel isn't even remotely close to having their act together on drivers.
I should probably clarify a bit: this was for the Solidworks portion of the SPECviewperf 2020 test.


I understand it is built into SPECviewperf, but I have to wonder what kind of massive model they are using for the GPU to be a factor at all in Solidworks use. I have mostly used it on several-year-old work-issued laptops with integrated Intel GPUs and never had Solidworks feel slow.

I have never seen a machine that didn't run Solidworks well. Even really old borderline obsolete machines with no discrete video card to speak of. I'd even venture to argue that the video card is a complete non-factor when it comes to designing something in Solidworks.

Also, while there may be gaming benefits to going high FPS, that certainly is not the case with Solidworks.

You are not trying to 360 noscope your part design. At least I'm not. :p
 
Maybe my understanding is antiquated at this point, but I always got the impression that while GPU encodes can be fast, they have fewer codec options, aren't compatible with all applications, and can compromise on quality, and it's better to just brute force it with the CPU anyway.

Handbrake doesn't even fully load up the 24C/48T Threadripper with most codecs and resolutions. Not quite sure what the bottleneck there is. Might just be inefficient code. At higher resolutions, I get more CPU load out of it, but not at lower resolutions.

Yes, but for 4K/10-bit HEVC and newer, CPU encoding is still so demanding that GPU encoding is much, much faster and good enough for most use cases (with a larger file size via GPU vs. CPU as well), aside from sticklers like us (I do 4K HEVC Handbrake encoding on my 5950X for 'archival quality' on my Plex server - still ~12 hours per encode/UHD movie).

Depending on how 8K goes when it is actually commonplace, I'm considering switching to GPU encoding myself, based on how long it will take and how much power a CPU capable of doing it within a day would draw. We'll see.
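
Back-of-the-envelope, the energy side of that trade-off looks something like this; the wattages and the GPU encode time below are assumptions for illustration, not measurements.

# Rough energy comparison for one UHD encode. All figures are assumed
# ballpark numbers, not measurements.
cpu_watts, cpu_hours = 140, 12.0   # e.g. a loaded 5950X over a ~12 h encode
gpu_watts, gpu_hours = 60, 1.5     # e.g. a dGPU's encoder block over a much shorter run

cpu_kwh = cpu_watts * cpu_hours / 1000
gpu_kwh = gpu_watts * gpu_hours / 1000
print(f"CPU: {cpu_kwh:.2f} kWh vs GPU: {gpu_kwh:.2f} kWh (~{cpu_kwh / gpu_kwh:.0f}x more energy on the CPU)")

Even if the real numbers land somewhere else, the time difference dominates the energy math.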
 
I understand it is built into SPECviewperf, but I have to wonder what kind of massive model they are using for the GPU to be a factor at all in Solidworks use. I have mostly used it on several-year-old work-issued laptops with integrated Intel GPUs and never had Solidworks feel slow.

I have never seen a machine that didn't run Solidworks well. Even really old borderline obsolete machines with no discrete video card to speak of. I'd even venture to argue that the video card is a complete non-factor when it comes to designing something in Solidworks.

Also, while there may be gaming benefits to going high FPS, that certainly is not the case with Solidworks.

You are not trying to 360 noscope your part design. At least I'm not. :p
I agree, the CPU is the most important factor in Solidworks performance. Of course, performance does depend a bit on how people view parts or assemblies. Viewing in shaded-only mode is the fastest; shaded with edges may slow gaming cards down a lot. Hardware-accelerated lines, to use Nvidia as an example, are only available in the Quadro drivers; GeForce does not have this feature. I think AMD does the same, but not to the degree Nvidia does. The Siemens NX portion of SPECviewperf shows this deliberate product segmentation the most: GeForce cards get maybe 20 fps or something small, while Quadro cards get 200 to 300 fps or more.
For some proof that this is a driver issue: after AMD released the Vega Frontier, which had some professional features enabled, Nvidia released the 385.12 drivers, which enabled the Quadro-type features on the Titan Xp cards. Solidworks performance jumped 70% and Siemens NX performance jumped over 600%. Same card, just different drivers.
https://techgage.com/article/quick-...-performance-boosting-385-12-titan-xp-driver/
Also, this big difference between gaming and pro cards only seems to exist for the OpenGL CAD programs like Solidworks and Siemens NX. Autodesk stuff, for example, is I think all DirectX, and that side of things seems very different; there is almost no difference between gaming and pro cards there.
One more new and interesting thing: AMD just reworked the OpenGL part of their drivers, and CAD performance really improved. Regular Radeon cards seem to fly in Solidworks and Siemens NX now. For Siemens NX at 1080p, a 6600 XT went from 91 fps to 376, while an RTX 3090 sits at 28 fps and the older Titan RTX is at 497 fps. Product segmentation at its finest; Nvidia makes you pay dearly for CAD performance.
https://techgage.com/article/specviewperf-2020-v3-linux-windows/
The SPECviewperf test probably does have some large models, but it also has some tests that are shaded only, some shaded with edges, some with AO turned on, and maybe some that are wireframe only. They seem to really try to push graphics cards hard, and it's not the most realistic test; it doesn't really represent what most people are probably doing.
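Just to line up the numbers quoted above (Siemens NX viewset, SPECviewperf 2020 at 1080p), the ratios work out roughly like this:

# fps figures as quoted above for the Siemens NX viewset at 1080p
results = {
    "6600 XT (old OpenGL driver)": 91,
    "6600 XT (reworked OpenGL driver)": 376,
    "RTX 3090 (GeForce driver)": 28,
    "Titan RTX": 497,
}
baseline = results["RTX 3090 (GeForce driver)"]
for card, fps in results.items():
    print(f"{card}: {fps} fps ({fps / baseline:.1f}x the RTX 3090)")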
 
Maybe my understanding is antiquated at this point,
You really should've probably stopped at this point and realized that workloads have changed since the '90s. There are an absolute ton of workloads that benefit from a GPU, and they don't need a gaming GPU. Even in Solidworks, a GPU can make a rather substantial difference for folks that do more than "simple and small" things. And Solidworks isn't the be-all, end-all of workstation loads either.
 
I understand it is built into SPECviewperf, but I have to wonder what kind of massive model they are using for the GPU to be a factor at all in Solidworks use. I have mostly used it on several-year-old work-issued laptops with integrated Intel GPUs and never had Solidworks feel slow.

I have never seen a machine that didn't run Solidworks well. Even really old borderline obsolete machines with no discrete video card to speak of. I'd even venture to argue that the video card is a complete non-factor when it comes to designing something in Solidworks.

Also, while there may be gaming benefits to going high FPS, that certainly is not the case with Solidworks.

You are not trying to 360 noscope your part design. At least I'm not. :p
I've seen some cards choke on SW. Several years back I had a guy running an NVS 510, and it would become unusable when zoomed in and trying to rotate models.
 
You really should've probably stopped at this point and realized that workloads have changed since the '90s. There are an absolute ton of workloads that benefit from a GPU, and they don't need a gaming GPU. Even in Solidworks, a GPU can make a rather substantial difference for folks that do more than "simple and small" things. And Solidworks isn't the be-all, end-all of workstation loads either.

1.) I was speaking specifically about video encodes in the section you quote.

2.) My understanding in that regard does not date back to the '90s, but more to the Sandy/Ivy Bridge era. That was the last time I really read up on hardware video encoding.
 
Has Intel fixed that yellowish video playback? Their iGPUs are horrible overall, but especially in video decoding quality. I wonder whether that same lousy video decoding quality has carried over to Arc. IMHO, a card is only worth it when you can play back smoothly and also enjoy vibrant, faithful colors, which Intel never seems to have cared much about.
 
Has Intel fixed that yellowish video playback? Their iGPUs are horrible overall, but especially in video decoding quality. I wonder whether that same lousy video decoding quality has carried over to Arc. IMHO, a card is only worth it when you can play back smoothly and also enjoy vibrant, faithful colors, which Intel never seems to have cared much about.
I've only seen that with incorrect monitor or Windows color settings. Playback is fine if set up correctly.
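
If anyone wants to rule out the source material before blaming the GPU or the display settings, a quick sanity check is to dump the clip's color metadata with ffprobe. A minimal sketch (the filename is just a placeholder):

# Dump a clip's color metadata with ffprobe to rule out the source file
# before blaming the GPU or the monitor/Windows settings.
import json, subprocess

def color_info(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=color_range,color_space,color_primaries,color_transfer",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

print(color_info("clip.mkv"))  # e.g. {'color_range': 'tv', 'color_space': 'bt709', ...}

A limited-range ('tv') clip displayed as full range (or vice versa) is a classic cause of washed-out or off-tint playback, independent of which GPU is doing the decoding.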
 
I've only seen that with incorrect monitor or Windows color settings. Playback is fine if set up correctly.
Yeah, playback is fine, but the color quality is lousy compared to AMD cards, especially on big-screen TVs. I hope Arc does better than the old Intel iGPUs in this regard.
 