What happened to LucidLogix Virtu MVP and HyperFormance?

My old Z77 board I just upgraded from had that technology, and I'm kind of curious what happened to it, since I haven't heard anything about it since then. Specifically technologies like HyperFormance and Virtual V-Sync. I put this in the "Video Cards" section, even though you could make the argument it's about software and virtualization to some extent (I won't mind if the thread gets moved), because the goal is getting more performance out of graphics hardware.

The way it worked was... you had to plug your monitor into the integrated GPU, turn on the technology, and it would let you use the integrated GPU and the dedicated one together to get slightly higher performance in games. I used to occasionally enable HyperFormance on games to get maybe another 5-10 FPS, so I could get smoother framerates. It really only worked well at 1080p, but it was pretty good for that. It didn't always work well, but when it did it was a neat trick to have.

Did that technology pretty much die off? If so, it's kind of a shame because I would really like to see what would happen if I used an older card like the GTX 560 Ti or 670 with a UHD 750 taking some of the load off it. More out of curiosity than because it's practical, but still. It seems like technology that shares the load between a dedicated GPU and an integrated one would be pretty useful right about now. Or really any technology that can load up a CPU and/or an iGPU to get more performance out of old or low-end graphics hardware paired with newer computer hardware.

It seems like nothing really happened there and everything is back to how it was before... basically, everyone accepts that you only get the performance your chosen GPU gives you, plus maybe whatever you can achieve from overclocking, and not many people attempt quirky technical tricks like this to try and get more out of it.
 
It was total garbage in practice. I tried using it on my old motherboard and never found it acceptable. There was always a heavy trade-off or major compatibility issues. One of those pie-in-the-sky ideas that sadly never took off because it needed too much industry support, which it was never going to get.
 
> It was total garbage in practice. I tried using it on my old motherboard and never found it acceptable. There was always a heavy trade-off or major compatibility issues. One of those pie-in-the-sky ideas that sadly never took off because it needed too much industry support, which it was never going to get.
Well, that's really too bad. It worked well for me in several games I happened to play a lot, but you're right, most games weren't fully compatible.

That said, I really wish there was a way I could use the integrated graphics on my Rocket Lake CPU alongside an older GPU and divide the work between them based on what each one is better at. I have a game or two where I'm getting like 40 FPS with 25 FPS lows and really only need a little extra to push me over the top. Moving the PhysX processing to the CPU helped a lot, as did forcing Global Illumination to a lower setting in a lot of games that used that heavily. Meanwhile, my integrated GPU can get 25 FPS with 15 FPS lows in the same game. But together they would likely be able to get close to 60 FPS with the work divided properly.

On that note, it's kind of annoying that most texture and model packs for games are designed to increase the polygon count of models and make the textures higher resolution. I'd have really preferred the opposite... lower-resolution textures and versions of models that try to save polygons. LOL
 
You kinda listed why it's not around: niche product, low usage, few supported apps. Also, we're trying to progress graphics, not regress them...
 
DX12 superseded it. Even with a standard in place, nobody wants to put in the work to implement it because it isn't worth the return.
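To put that in concrete terms: under DX12's explicit multi-adapter model, the API shows a game every GPU in the system and leaves combining them entirely to the engine. Here's a rough sketch of just the starting point - standard DXGI/D3D12 enumeration; the loop itself is only illustrative:

```
// Minimal sketch: list every hardware GPU (iGPU and dGPU alike) that a
// DX12 title can see. Combining them is left entirely to the engine.
#include <dxgi1_6.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        // Each adapter can get its own independent D3D12 device; nothing in
        // the API ties two devices together for you.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Adapter %u: %s (%zu MB VRAM)\n", i, desc.Description,
                    desc.DedicatedVideoMemory >> 20);
        }
    }
    return 0;
}
```

Everything past this point - splitting the frame, copying results between adapters, keeping both GPUs in sync - is bespoke per-engine work, which is exactly the work nobody wanted to do.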
 
Laptops seem to be doing this, though to what extent I cannot say. I used a laptop that would use a 3060 or 3060 Ti when gaming while plugged in, and it swapped to the Intel iGPU to save power.
 
LucidLogix Virtu MVP was abandoned on the Desktop, and replaced with Nvidia Optimus in Laptops.

It was an interesting tech, but I think a big part of the issue was that Intel iGPU performance was just pretty terrible overall until very recently - meaning that even in a best-case scenario, if LucidLogix Virtu MVP was working perfectly, the performance gains would have been negligible. Thus, not worth the effort.

I'd really love to see a flexible solution at some point, encompassing multi-GPU in general, and designed from the ground up. Something where you could use two identical cards, two mismatched cards, a discrete card + iGPU, or anything really.
 
> Laptops seem to be doing this, though to what extent I cannot say. I used a laptop that would use a 3060 or 3060 Ti when gaming while plugged in, and it swapped to the Intel iGPU to save power.
not the same thing at all.
> LucidLogix Virtu MVP was abandoned on the Desktop, and replaced with Nvidia Optimus in Laptops.
>
> It was an interesting tech, but I think a big part of the issue was that Intel iGPU performance was just pretty terrible overall until very recently - meaning that even in a best-case scenario, if LucidLogix Virtu MVP was working perfectly, the performance gains would have been negligible. Thus, not worth the effort.
>
> I'd really love to see a flexible solution at some point, encompassing multi-GPU in general, and designed from the ground up. Something where you could use two identical cards, two mismatched cards, a discrete card + iGPU, or anything really.
Optimus isn't the same as MVP; it just switches between the iGPU and the discrete card. It doesn't combine them.

DX12 mGPU should allow it to work this way, but I don't think anyone has even tried.
 
> Optimus isn't the same as MVP; it just switches between the iGPU and the discrete card. It doesn't combine them.

It wasn't my intention to claim that it was the same. Mainly I was claiming that it became the predominant method that allowed a discrete GPU and iGPU to co-exist with the same display output. The added performance from the iGPU when using MVP was irrelevant, so nothing was really lost.
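For what it's worth, here's roughly what that co-existence looks like to an app on current Windows: DXGI 1.6 lets a program ask for either the power-saving or the high-performance adapter in a hybrid system, and the OS routes the output to the display either way. The DXGI calls are the real API; the wrapper function is just for illustration:

```
// Sketch: pick the iGPU or dGPU in a hybrid (Optimus-style) system.
#include <dxgi1_6.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

ComPtr<IDXGIAdapter4> PickAdapter(bool highPerformance) {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return nullptr;

    // The OS ranks adapters by the requested preference; index 0 is the
    // best match, so a hybrid laptop returns the dGPU or iGPU accordingly.
    ComPtr<IDXGIAdapter4> adapter;
    factory->EnumAdapterByGpuPreference(
        0,
        highPerformance ? DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE  // dGPU
                        : DXGI_GPU_PREFERENCE_MINIMUM_POWER,    // iGPU
        IID_PPV_ARGS(&adapter));
    return adapter;
}
```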

> DX12 mGPU should allow it to work this way, but I don't think anyone has even tried.

The main thing the DX12 multi-GPU disaster taught us is that punting all the work to game developers and expecting them to get it done is a great way to kill off a technology. We need a multi-GPU tech that isn't held hostage by the need for special code or development work - something at a more basic level, where a game doesn't need any special code regardless of how many GPUs there actually are.
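To make the "special code" point concrete: even in DX12's driver-linked mode, where SLI/CrossFire-style linked GPUs appear as a single device, the game still has to tag every queue, heap, and command list with an explicit node (GPU) mask. A rough sketch of that bookkeeping - the D3D12 calls are real, the helper itself is hypothetical:

```
// Sketch of the per-GPU plumbing DX12 "linked node" multi-GPU demands:
// one command queue per physical GPU, each selected by a node-mask bit.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

void MakePerGpuQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>* queues, UINT maxQueues) {
    // GetNodeCount() > 1 only when the driver has linked the GPUs;
    // a mismatched dGPU + iGPU pair never qualifies.
    UINT nodes = device->GetNodeCount();
    for (UINT node = 0; node < nodes && node < maxQueues; ++node) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node; // one bit per physical GPU
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queues[node]));
    }
}
```

And that mask has to be threaded through resources, heaps, and command lists too, which is the per-game busywork that made developers pass on the whole idea.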
 
> not the same thing at all.
You're right, not sure where my head was at. Maybe I was thinking of how they seamlessly share video output, not how they share the work.

The iGPU is so weak, though, that it wouldn't contribute much. I vaguely recall playing with Hybrid CrossFire, which used the iGPU and something like a 5450...
 