Valve Software's Source Engine Multi-Threaded Preview

qdemn7 said:
This is a serious question, not a flame. What's the big deal about encoding movies anyway?
:confused:

I think its an e-penis kinda thing. But seriously, I like how he says one to twenty hours, lol. I think most people are going for three or less hours. Twenty+ is crazy.
 
mjz_5 said:
it looks like valve got paid to do an intel commercial
Intel was the first to drop the hardware needed to demonstrate what Valve has been working on. Were they supposed to compare it to a non-existent AMD chip?
 
DragonNOA1 said:
I think its an e-penis kinda thing. But seriously, I like how he says one to twenty hours, lol. I think most people are going for three or less hours. Twenty+ is crazy.

There are hundreds of settings that determine video and audio quality, and those produce the longer/shorter encode times. If you want to encode a movie at full quality, it takes a lot longer than creating a 600MB file.

I used dual-CPU computers for years back in the day, when I'd run a Quake dedicated server on one CPU and the game on the other. It didn't yield any improved performance aside from the boost of having the dedicated server on its own CPU rather than having my game's CPU do both.

It looked to me as though all of Valve's benchmarks were just from the developers' standpoint and didn't translate into anything real-world. Maybe I missed something, but if it CAN be implemented in the real world then I can tell you it will absolutely improve gameplay and performance. In the same way the GPU benefits overall game performance by running the dedicated visual processes, dedicating a CPU core to physics or some other sort of calculation would help tremendously.
 
DragonNOA1 said:
I think its an e-penis kinda thing. But seriously, I like how he says one to twenty hours, lol. I think most people are going for three or less hours. Twenty+ is crazy.

I boggle at how you arrived at this e-penis conclusion.

Before dual-core PCs it sure as hell didn't take 3 hrs or less to encode a movie, unless you were one of those people going for shit quality so long as it was done ASAP. High-quality encodes take a lot longer than 3 hrs, but I digress.

The 50% comment was a very generous estimate for today's average gamer's rig. With the new competition Intel has brought back into the CPU market, both Intel and AMD are bringing dual-core CPUs to the consumer at extremely reasonable prices. With quad-core and even octa-core on the horizon, I'm sure we will be seeing a lot more software written to take advantage of multi-threading.

Take physics, for instance. SLI, Crossfire, and Ageia's PhysX are starting to make it more and more feasible for the average person to plop an add-on card into their gaming setup purely for physics processing in games. The more economically feasible it becomes for the average consumer to pay for this add-on feature, the more software you will see taking advantage of it. But until you get that strong base consumer market, game developers aren't going to want to waste development time writing code for it.

Remember, 3D cards were thought of as optional at best when they first came out. Now look at them. People spend 500-1000 dollars on video cards just to play something like WoW.
 
We have had dual-core CPUs and hyperthreaded CPUs for a long time, and games don't take advantage of them. The only difference I see between my 3.0GHz P4C and my Pentium D 830 (also 3GHz) is that programs install faster and the system doesn't slow to a crawl when running antivirus software in the background. Games may install faster, but I don't notice a difference in level load times or performance. The only performance upgrade I get is from my faster graphics card. I'm surprised we don't have dual-core GPUs on graphics cards, since they scale so well in parallel processing.

I can't wait until software developers start taking advantage of multi-core systems. Give a couple of threads to AI, a couple to physics, and a few more to loading game levels. But unless game developers actually take the extra time needed to thread their games, getting an Extreme Edition CPU is a waste of money. For half the price you get similar performance. What do you get, a 15% performance increase for double the money? Lame....
 
And, not to stab myself in the foot, the only current applications that see performance gains from multi-core CPUs are video editing and encoding programs and the like. I don't care about canned pseudo-benchmarks; I only care about real-world tests. But a quad-core CPU can make your home movies encode to MPEG-2 in half the time. Pretty nice. But I'm a gamer! If it doesn't make Battlefield 2 faster then it's crap! :D
 
J-Mag said:
I think you have nothing to worry about. Most people see multithreaded programming as something that will increase performance in games, which really is not the case beyond a few percent. Valve realizes this and is using extra cores for features, not speed. Hopefully we will be able to adjust the level of interactivity with in-game settings depending on the CPU we have, just as we adjust level of detail according to our GPU power.

So if they are using the extra cores for additional features, I wonder what they will be adding to the older games? It would be nice if the implementation eliminates/reduces the dips in framerate that seem common in the newer games. You see a lot of games maintain a near-steady 60fps when tested but have those quick nasty dips into the 20s; I'm not sure if it's a CPU or GPU issue, though.
 
The announcement by Valve that they're going to be supporting multi-threading in their current as well as future titles is definitely a classy move on their part. I also agree with some of the other posts about core utilization choice, as I sometimes rip CDs to MP3s while fragging a few in Far Cry or HL2, etc. The prospect of hosting and playing on different cores is also appealing, as I have a 4-PC LAN here at home and like to play against my sons or friends on occasion. Both are worthwhile alternatives but pose a new problem: how to properly detect and scale down performance in the game/app if one or more cores are dedicated to a different process? What will take the hit, AI? Physics? What if I have a PPU, or some physics are offloaded to spare GPU cycles as ATI and NVIDIA claim they can do? Will that mean I only have to take a small hit in AI? The recommended specs on the game packaging could get very convoluted...
 
Who cares if it takes more time to code their multithreaded games... maybe then game devs can start earning their $40-60 per title...

Kudos to Valve for not only releasing a game over a year ago that rocks in itself, but still dumping more and more updates and code changes into it for free (CS:S and the Source engine). Anybody who says Valve and Steam suck is ungrateful.

More episodes cost, but at least they aren't expensive, and plus, they just add content...
 
HighwayAssassins said:
This slide makes me sad for all of us who just upgraded to X2's recently. :(

I wish it had some single-core benchies on this graph; you're really only getting half the picture.
I completely agree - I kept looking at these benchmarks thinking "WHERE are the single cores for comparison??!!11" I know you can see a huge jump from the dual core to quad core, but I would love to see a similarly clocked single core cpu in the mix.

Great article though. It's nice to see that multi-core is finally being utilized, or at least the ball is now rolling. I'm excited.
 
Ok, bottom line, will this help only those with newer multi-core CPUs, or will it help those with Hyper-Threading virtual-core CPUs, like my Prescott P4 3.0GHz?
 
You might; HT just makes it easier to get work processed on the CPU. By itself, the NetBurst architecture is kind of slow at fully utilizing the CPU.
 
Comte said:
Ok, bottom line, will this help only those with newer multi-core CPUs, or will it help those with Hyper-Threading virtual-core CPUs, like my Prescott P4 3.0GHz?

It should help. Oblivion shows about 10% gains with HT on Click
and from the looks of things Valve's titles should have greater multithreading support than Oblivion.
 
harpoon said:
It should help. Oblivion shows about 10% gains with HT on Click
and from the looks of things Valve's titles should have greater multithreading support than Oblivion.
Yeah... I'm pretty sure my C2D stays at 50% CPU usage during Oblivion... so it's not very (or at all) multithreaded... which makes me surprised Anandtech found gains with HT.
 
I'm pretty sure it uses all of your CPU and only 50% when you minimize the game, leaving it paused and running in the background.

The game does not make that much use of multi-threading, but even 5 FPS helps a ton if it keeps you from dipping below 30.
 
Scyles said:
I'm pretty sure it uses all of your CPU and only 50% when you minimize the game, leaving it paused and running in the background.
I'll have to double check, but I'm pretty sure I stretched out the task manager real far (so it shows about 1-2 minutes worth of CPU usage), then minimised. The resulting graph showed only about 50% CPU usage iirc.
 
jebo_4jc said:
I'll have to double check, but I'm pretty sure I stretched out the task manager real far (so it shows about 1-2 minutes worth of CPU usage), then minimised. The resulting graph showed only about 50% CPU usage iirc.

Oblivion is definitely multithreaded, but it's only really noticeable when using CF/SLI setups, otherwise it's too GPU limited to show any significant advantage.
 
harpoon said:
Oblivion is definitely multithreaded, but it's only really noticeable when using CF/SLI setups, otherwise it's too GPU limited to show any significant advantage.

Yes, Oblivion is multithreaded, but going by the article's explanation of the various approaches to threading, I believe Oblivion used the "coarse threading" approach. That could explain why many noticed only a 20% improvement between similarly clocked single- and dual-core setups with that title.

In this case, Valve's decision to work much harder at the threading game will give them a much more powerful engine than Oblivion's developers have. This is important because, besides your primary game release, you hope to have other developers purchase your engine and use it for their efforts (think Ubisoft's Dark Messiah of Might & Magic's use of Source, albeit a modified version). I suspect that the developers of Oblivion may go back and tweak things, or develop a version 2 of their current engine, in order to compete in the long run.
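For anyone who hasn't read the article: the "coarse threading" approach mentioned above roughly means one thread per whole subsystem. A toy Python sketch of the idea (the subsystem names and workloads here are made up for illustration, not Bethesda's or Valve's actual code):

```python
import threading

# Coarse threading: each subsystem (AI, physics, sound, ...) gets its
# own thread for the frame. The frame finishes only when the slowest
# subsystem does, which is why gains cap out well below the core count.

results = {}

def run_subsystem(name, workload):
    # Stand-in for a real per-frame update; just burns some CPU.
    results[name] = sum(i * i for i in range(workload))

subsystems = {"ai": 100_000, "physics": 200_000, "sound": 50_000}

threads = [
    threading.Thread(target=run_subsystem, args=(name, load))
    for name, load in subsystems.items()
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the frame ends when the slowest subsystem finishes

print(sorted(results))  # ['ai', 'physics', 'sound']
```

Hybrid threading, as the article describes it, also splits the work *inside* a subsystem across N cores, which is what lets it scale past a handful of threads.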
 
harpoon said:
Anandtech has single core benchmarks:
http://www.anandtech.com/tradeshows/showdoc.aspx?i=2868&p=9

This is gonna hurt for single core owners!

Particle simulation
C2Q QX6700 @ stock: 86
A64 3200+ @ stock: 14

VRAD Map compilation (in minutes)
C2Q QX6700 @ stock: 2.53
A64 3200+ @ stock: 14.73

Wow! :eek:


Hurt, yes and no. Where it's a "no": I suspect single-core gamers are going to keep seeing the same results from Source that they see now. That means someone going back and running HL2: Episode One after the Source upgrade on a single core will see the same in-game performance they have now, while someone with an X2 or better running the same game will notice a performance boost.

Where it's a "yes" is when new titles are released, such as HL2: Episode Two, that utilize the additional cores for new features that are unavailable, or perform poorly, on single-core rigs.

Remember, Valve is trying to release this upgrade before HL2: EP2, so we'll be able to go back and bench older Source games to see the performance boosts. HL2: EP2 may have much different AI and physics results for single-core gamers, and this is where they'll "hurt".
 
HighTest said:
Hurt, yes and no. Where it's a "no": I suspect single-core gamers are going to keep seeing the same results from Source that they see now. That means someone going back and running HL2: Episode One after the Source upgrade on a single core will see the same in-game performance they have now, while someone with an X2 or better running the same game will notice a performance boost.

Where it's a "yes" is when new titles are released, such as HL2: Episode Two, that utilize the additional cores for new features that are unavailable, or perform poorly, on single-core rigs.

Remember, Valve is trying to release this upgrade before HL2: EP2, so we'll be able to go back and bench older Source games to see the performance boosts. HL2: EP2 may have much different AI and physics results for single-core gamers, and this is where they'll "hurt".

I'm sure Valve will release a "single core" version as well so that the vast majority of the gaming community isn't left out in the cold. It just means they miss out on all the fancy new physics stuff... hopefully the AI code remains the same between the two, otherwise you'll have different gameplay characteristics between SC and DC configs, which isn't a good thing.
 
I wonder if the Ep2 delay has something to do with adding more data to the game b/c it now supports multiple cores/threads?
 
Sounds cool. I made a good choice with the X2 a year ago; I'm not upgrading till processors show a 10x performance gain.

yup
 
Kyle_Bennet said:
However, hybrid threading is able to scale to the “N” amount of cores we talked about earlier, which will allow hybrid threading to not only take advantage of all four cores in the current Intel Core 2 Extreme Quad Core QX6700, but also in the 80-core CPU Intel announced for release in 2011, and everything in between.
Really? 80 cores less than 5 years from now?
 
Cintirich said:
Really? 80 cores less than 5 years from now?
80 cores is for kids. The real gaming platform of the future has infinicore technology.
 
larkin said:
80 cores is for kids. The real gaming platform of the future has infinicore technology.
That would be a non-deterministic CPU. Say goodbye to cryptography :D
 
WhoBeDaPlaya said:
That would be a non-deterministic CPU. Say goodbye to cryptography :D
Well, an infinicore processor would only benefit IPCs (infinitely parallelizable computations). I don't believe your SuperPI calculations could be made to run any faster on an infinicore vs. a single-core CPU. But yeah, goodbye cryptography; brute force would own all.
 
harpoon said:
Oblivion is definitely multithreaded, but it's only really noticeable when using CF/SLI setups, otherwise it's too GPU limited to show any significant advantage.
That's probably very true

/kicks x1900gt
 
Shark said:
I encoded a movie, on one processor, in 2.25 hours just last night. You have to be doing something horribly wrong for it to take 20+

Sure, if you can settle for okay quality and use some one-click program. Try doing 6+ passes using CinemaCraft Encoder.
 
qdemn7 said:
This is a serious question, not a flame. What's the big deal about encoding movies anyway?
:confused:


No, for me it is not some e-penis thing. Many people here obtain movies, often in AVI format. I don't have a compatible DVD player (had one, it died, didn't bother with a new one), and I'm too lazy to run TV-out from my comp.

Converting from AVI to MPEG (DVD format) takes some processing power and time.

Yes, a lot of people always mention "encoding videos" and probably never do it; they think it sounds cool to make it seem like one is some sort of multimedia guru.

My issue is I don't want to settle for the one-click programs, which give subpar results and no options; I am the one who uses 4-5 different programs at each step of the way to get the best quality I can out of a decode/encode.

So it is nice to be able to say I want to convert this AVI file I got, but wait, now I want to play some CS:S. Okay, no problem: set the affinity for each process to its own CPU and I don't worry about jack after that.
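For what it's worth, that affinity trick can also be scripted instead of done through Task Manager's "Set Affinity" menu. A minimal sketch using Python's `os.sched_setaffinity` (Linux-only; on Windows the equivalent is the Win32 `SetProcessAffinityMask` call, which Task Manager wraps):

```python
import os

# Pin the current process (e.g. the encoder) to CPU 0, leaving the
# other core free for the game. os.sched_setaffinity is Linux-only;
# pid 0 means "this process".
os.sched_setaffinity(0, {0})

print(os.sched_getaffinity(0))  # -> {0}
```

Launch the game without pinning (or pin it to the other core the same way) and the two workloads stop fighting over a single CPU.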

It is also nice when you're in Photoshop working with 50MB TIFF files and CR2 files, adding numerous filters and adjustments, and not having to wait 5+ minutes for a filter to process the 3504 x 2336 image.

If multi-core CPUs didn't exist, I would own a multi-socket system, or 2+ separate computers for the things I do. Sure, I may not do them every day, but when I do, I want the power, and I still want to be able to use my comp for other things while it's getting its ass kicked by some program on one of its cores.
 
DemonDiablo said:
This is the big kicker for me. One of the huge advantages of multi-core procs today is that you can play killer apps with little performance hit in comparison to a single-core proc. But with the multi-core proc you can also encode a movie/song without noticing a huge loss in smoothness while playing games.

Multi-threaded games sound like a good idea so long as I have the ability to dictate how many cores I want to allocate to the game and/or how much CPU % for each core. If I can't have that ability then I'm not going to be much of a happy camper. Sacrificing productivity for a few extra FPS and some new sparkly objects doesn't sound like all that great a tradeoff to me.

yeah, i mean who doesn't rip a movie, encode it, and play battlefield 2 all at the same time?

think practically, people.
 
Shark said:
I encoded a movie, on one processor, in 2.25 hours just last night. You have to be doing something horribly wrong for it to take 20+
Not really. Encoding DVDs to the H.264 codec brings back memories of encoding full-length movies into DivX back in the 500MHz Pentium 3 days...took almost two full days for a good-looking conversion.
qdemn7 said:
This is a serious question, not a flame. What's the big deal about encoding movies anyway?
:confused:
Encoding movies is a godsend for HTPC owners. I can take a DVD that is 7-9GB, and compress it to 2-3GB using H.264, preserving almost 100% of the original image quality. I have a 400GB hard drive in my HTPC; so I can either have ~60 uncompressed DVDs, or I can have 200 compressed DVDs. Tough choice. The only catch is I need a CPU that can compress all those movies in a reasonable time. Core 2 Quad provides just that solution, since most/all encoding apps are multithreaded for 4+ cores.
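The drive math above works out (rough figures, assuming ~7GB for an uncompressed dual-layer DVD rip and ~2GB for the H.264 re-encode):

```python
drive_gb = 400
uncompressed_gb = 7   # typical dual-layer DVD rip
compressed_gb = 2     # H.264 re-encode of the same movie

print(drive_gb // uncompressed_gb)  # 57 uncompressed DVDs
print(drive_gb // compressed_gb)    # 200 compressed DVDs
```

So compressing roughly triples what the HTPC can hold, at the cost of the CPU time to do the encodes.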
 
InorganicMatter said:
Not really. Encoding DVDs to the H.264 codec brings back memories of encoding full-length movies into DivX back in the 500MHz Pentium 3 days...took almost two full days for a good-looking conversion.

Encoding movies is a godsend for HTPC owners. I can take a DVD that is 7-9GB, and compress it to 2-3GB using H.264, preserving almost 100% of the original image quality. I have a 400GB hard drive in my HTPC; so I can either have ~60 uncompressed DVDs, or I can have 200 compressed DVDs. Tough choice. The only catch is I need a CPU that can compress all those movies in a reasonable time. Core 2 Quad provides just that solution, since most/all encoding apps are multithreaded for 4+ cores.

Valid point, 4 cores will rock for encoding/decoding...

But playing movies from a computer never sounds as good as on a standard DVD player... but that's just my opinion.
 
mjz_5 said:
But playing movies from a computer never sounds as good as on a standard DVD player... but that's just my opinion.
S/PDIF is your friend. ;) I get the exact same audio quality on my HTPC as I do on my DVD player, since both go out digitally to my receiver.
 