GPU x264 Transcoder


xdivenx

Hey guys, I need some help here finding some suitable software for my goal:

To transcode edited 1080i MPEG2 recorded TV files into 720p x264 .mkv files, using my 9800GTX. My 6000+ is going to be pretty painful to transcode with, so I was looking around at Badaboom, but it doesn't have the features I require. Are there any other viable options out there? Price is not an issue.
 
GPGPU support for x264 is months or even years away, if ever.
Since you probably weren't hoping for that answer, I'll fill you in on a little secret. ;)
x264 is a software encoder - it's a piece of software. It encodes H.264 files.

At the moment, the only available H.264 encoders with GPGPU that I can think of are Badaboom and ATI's AVIVO encoder. I think Microsoft's Expression Encoder 2 only does VC-1, so that leaves some obscure CyberLink (?) product. Maybe Adobe has something up their sleeves.

Now that that's out of the way... encoding those caps with x264 doesn't have to be slow at all. It really depends on your taste for quality. The better the quality, the slower your encoding is going to be. If you are willing to sacrifice some quality for speed, I'm sure I or others here can point you in the right direction.
 
Thanks. As far as quality goes, these will be "released" into a "high definition community", so I am trying to maintain a decent amount. Whatever config or profile I can get to fit 40mins into 1.1GB I will use.

As far as doing it on the processor, right now I have autoMKV in mind for its beginner friendliness and decent amount of features. Is there another you would recommend?
 
Your best bet is to find someone already in that "high definition community" and ask them how they create such awesome encodes. Snowknight26 kinda 'skewled' me a few weeks ago with some framegrabs from damned fine quality encodes when compared to the original content, so... maybe he (or she, sorry) can point you in the right direction.

I know "some people" in several "high definition communities" <hint, hint> and they don't mess with the GPU-assisted stuff at all: they prefer to use dual core/quad core boxes, or clusters of 'em to do their encoding and just get it done far far faster. Perhaps someday we'll see a big distributed computing service designed to do video encoding. I know there's a service out there in the 3D rendering community that lets you join in, donating CPU cycles to help other people finish their 3D renders much faster. Wouldn't be a bad idea to design a distributed computing effort for x264 encoding.

But then, perhaps... ;)

I had high hopes for GPU-assisted x264 encoding, but it's still pretty young in terms of the actual implementation. The encouraging part is that people are finally thinking, "Hey, wait a second, that GPU is designed to do pretty much one thing - crunch numbers - and do it ridiculously fast, even faster than the fastest CPUs on the planet, which have to do all sorts of things besides crunch numbers, sooo... maybe we can find a way to use that number-crunching awesomeness for <insert purpose here>..."

The Folding@Home stuff using GPUs for crunching is a big step... but it's just the beginning, it really is. You should be able to use the GPU to accelerate effectively anything that requires math done fast; it's all in how it gets implemented.
 
Thanks. As far as quality goes, these will be "released" into a "high definition community", so I am trying to maintain a decent amount. Whatever config or profile I can get to fit 40mins into 1.1GB I will use.
If that's the case, your best bet is to stick with software encoders, primarily x264. Most GPU assisted encoders are limited in a way such that the quality will never be on par with unrestricted software encoders.
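To put a rough number on that size target (a back-of-the-envelope sketch; the thread doesn't say, so I'm treating "GB" as GiB and assuming an audio track around 192 kbps):

```python
# Rough bitrate budget for fitting 40 minutes into 1.1 GB.
# Assumptions not stated in the thread: "GB" means GiB, audio is ~192 kbps,
# and container overhead is ignored.
target_bytes = 1.1 * 1024**3      # 1.1 GiB
duration_s   = 40 * 60            # 40 minutes
audio_kbps   = 192                # hypothetical audio track

total_kbps = target_bytes * 8 / duration_s / 1000
video_kbps = total_kbps - audio_kbps

print(f"total budget: {total_kbps:.0f} kbps")   # ~3900 kbps
print(f"video budget: {video_kbps:.0f} kbps")   # ~3700 kbps
```

So whatever encoder you end up with needs to average somewhere around 3700 kbps for the video, which is a pretty reasonable budget for 720p.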

As far as doing it on the processor, right now I have autoMKV in mind for its beginner friendliness and decent amount of features. Is there another you would recommend?
Hard to say. Most "experts" agree that using a GUI isn't going to be the crème de la crème in terms of quality or speed (or even quality:speed!). Command line is where it's at.

However, that doesn't mean there aren't good encoder GUIs/frontends. Some of the ones that come to mind for me are RipBot264, MeGUI, AutoMKV (oh hey look!), and probably Staxrip. You could look into the first two and see if they suit your needs, but like I said, the best results come from doing everything by hand.
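To show what "by hand" looks like in practice, here's a minimal sketch of a two-pass x264 encode driven from Python - assuming the x264 command-line tool is installed, with placeholder filenames and the ~3700 kbps figure from the budget above:

```python
# Minimal two-pass x264 encode driven from Python.
# Assumes the x264 CLI is on the PATH and that "episode.y4m" already holds
# the deinterlaced, resized 720p source (filenames/bitrate are placeholders).
import os
import subprocess

SOURCE  = "episode.y4m"
OUTPUT  = "episode.mkv"
BITRATE = "3700"               # kbps, from the size budget above
STATS   = "episode_2pass.log"

# First pass: analysis only - throw away the video, keep the stats file.
subprocess.run(["x264", "--pass", "1", "--bitrate", BITRATE,
                "--stats", STATS, "-o", os.devnull, SOURCE], check=True)

# Second pass: re-encode using the stats to hit the target size.
subprocess.run(["x264", "--pass", "2", "--bitrate", BITRATE,
                "--stats", STATS, "-o", OUTPUT, SOURCE], check=True)
```

Frontends like MeGUI or AutoMKV end up building essentially these same command lines for you; doing it by hand just gives you full control over every option.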

Your best bet is to find someone already in that "high definition community" and ask them how they create such awesome encodes.
For all you know they could be right here. ;)

Perhaps someday we'll see a big distributed computing service designed to do video encoding.
Plenty of those around actually. x264farm is one that comes to mind first.

I had high hopes for GPU-assisted x264 encoding, but it's still pretty young in terms of the actual implementation. The encouraging part is that people are finally thinking, "Hey, wait a second, that GPU is designed to do pretty much one thing - crunch numbers - and do it ridiculously fast, even faster than the fastest CPUs on the planet, which have to do all sorts of things besides crunch numbers, sooo... maybe we can find a way to use that number-crunching awesomeness for <insert purpose here>..."
Not so fast there. While the GPU may be fast at some things (floating point operations), the CPU still beats it in other areas, which, funny enough, happen to be the ones that matter in video encoding.
 
Not so fast there. While the GPU may be fast at some things (floating point operations), the CPU still beats it in other areas, which, funny enough, happen to be the ones that matter in video encoding.

Is video encoding really that much different on a mathematical level from video rendering?
 
Depends on the algorithm. Though I don't know much about video rendering, I'm positive it's mostly done using floating-point precision, which is, again, different from what software video encoders use (H.264 encoders at least).
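To make that concrete: the 4x4 core transform H.264 uses in place of a floating-point DCT is defined entirely with small integers (the real codec folds the leftover scaling into quantization), so an encoder's hot loops lean on the CPU's integer/SIMD units rather than floating-point hardware. A toy sketch with a made-up block of residual values:

```python
# H.264's 4x4 forward core transform: coeffs = Cf * X * Cf^T, integers only.
# (Scaling factors are folded into quantization in the real codec; omitted.)
Cf = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]

# Made-up 4x4 block of residual (prediction error) values.
block = [[ 5, 11,  8, 10],
         [ 9,  8,  4, 12],
         [ 1, 10, 11,  4],
         [19,  6, 15,  7]]

coeffs = matmul(matmul(Cf, block), transpose(Cf))
print(coeffs)   # every intermediate and result stays an integer
```

No floats anywhere - exactly the kind of work a CPU's integer and SIMD units chew through.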
 
Is video encoding really that much different on a mathematical level from video rendering?

Heh, yes.
When decoding, you know what methods were used and you simply apply the transforms.
That takes massively less CPU time per frame.
When encoding, many, many decisions need to be made about which algorithms to use, and with what parameters, to get the best quality with the least loss for the given bitrate/resolution.
Better quality is also obtained by doing multi-pass encodes, which are not possible in realtime.

That's just scratching the surface.
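To give a rough sense of scale for just one of those decisions, count the inter partition shapes H.264 allows for a single 16x16 macroblock (a back-of-the-envelope figure; motion search, reference frame choice, intra modes and quantizer decisions all multiply on top of this):

```python
# How many ways can one 16x16 inter macroblock be partitioned in H.264?
# Top level: 16x16, 16x8, 8x16, or a split into four 8x8 blocks.
# Each 8x8 block can then independently be 8x8, 8x4, 4x8, or 4x4.
sub_choices_per_8x8 = 4
p8x8_layouts = sub_choices_per_8x8 ** 4   # four independent 8x8 blocks
total_shapes = 3 + p8x8_layouts           # 16x16 + 16x8 + 8x16 + all 8x8 layouts

print(total_shapes)   # 259 partition layouts, before any motion search
```

The decoder just reads which layout was picked out of the bitstream; the encoder has to weigh candidates like these for every macroblock, which is where all the CPU time goes.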
 
Better quality is also obtained by doing multi-pass encodes, which are not possible in realtime.

Given that the settings are the same, a one-pass encode that ends up the same size will be just as good.
 
Given that the settings are the same, a one-pass encode that ends up the same size will be just as good.

http://www.afterdawn.com/glossary/terms/multipass.cfm
Multi-pass encoding, also known as 2-pass or 3-pass encoding, is a technique for encoding video into another format using multiple passes to achieve the best quality.

The video encoder analyzes the video many times from the beginning to the end before the actual encoding process. While scanning the file, the encoder writes information about the original video to its own log file and uses that log to determine the best possible way to fit the video within the bitrate limits the user has set for the encoding process -- this is why multi-pass encoding is only used in VBR encoding (CBR encoding doesn't offer any flexibility for the encoder to determine the bitrate for each frame).

The best way to understand why this is used is to think of a movie -- when there are shots that are totally, absolutely black, like scene changes, normal 1-pass CBR encoding uses the exact same amount of data for that part as it uses for a complex action scene. But by using VBR and multi-pass, the encoder "knows" that this piece is fine with a lower bitrate, and that bitrate can then be used for more complex scenes, thus creating better quality for those scenes that require more bitrate.
 
Where did I say CBR? VBR can be done in 1 pass.

Guess you've never heard of constant quality/constant rate factor mode.
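As a quick example (placeholder filenames, assuming the x264 command-line tool), a constant rate factor encode is a single pass where you pick a quality level and let the bitrate float:

```python
# Single-pass constant rate factor (CRF) encode with the x264 CLI.
# Quality stays roughly constant; the file ends up whatever size it ends up.
# Filenames and the CRF value are placeholders.
import subprocess

subprocess.run(["x264", "--crf", "19",
                "-o", "episode_crf.mkv", "episode.y4m"], check=True)
```

The main thing two-pass buys you over this is a predictable final file size; two-pass exists to hit a size target, not to squeeze out extra quality at the same size.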
 
Note that it explains why more than one pass is needed; it's not just VBR over CBR.
 
And I dismissed that by pointing out that constant quality/constant rate factor mode does just that, but in only one pass.
 