Half Life: Alyx

Edgar

Been playing this for 2 weeks now. Man, it is one of the best games I've ever played. It is so well made for VR. It is really scary at times. But this is the kind of game I expect to play in VR. I'm in awe how good it is. Even on my Quest 2 with Air Link.
 
Alyx was by far the best VR experience I have had. Unfortunately, I haven't been able to find any other VR titles that come close.

(That said my headset has lived in a box for the last year so I could be out of touch with the current market.)
 
Alyx was by far the best VR experience I have had. Unfortunately, I haven't been able to find any other VR titles that come close.

(That said my headset has lived in a box for the last year so I could be out of touch with the current market.)

BONELAB if you have the stomach for it.
 
Alyx was by far the best VR experience I have had. Unfortunately, I haven't been able to find any other VR titles that come close.

VR app store discovery is crap

Given how the existing VR store vendors do it, mixing up the mediocre with the AAA, I can't wait for personal AI agents (like a VR-optimized version of ChatGPT) to properly tell me which games are similar to Alyx (several poorly advertised examples already exist!). You'd ask your personal agent, "Hello, I enjoyed Alyx but I disliked Resident Evil 4. I love science fiction but I'm not too big on sword/lightsaber battles, though I do enjoy shooting and first person. I prefer adventures over competitive. I don't want roller coaster nausea. Are there games similar to Alyx?", and your personal AI agent would near-perfectly recommend some titles.
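To make that concrete, here's roughly what the plumbing for such a query could look like today. This is only a minimal sketch, assuming a chat-completions style HTTP endpoint (OpenAI's is used here), a placeholder model name, and an API key sitting in an environment variable -- not an actual store feature anywhere yet:

Code:
import os
import requests

# Hypothetical sketch: send the natural-language "recommend-me-a-game" query
# to a chat-completions style API (OpenAI's public endpoint is assumed here).
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes you have a key exported

query = ("Hello, I enjoyed Alyx but I disliked Resident Evil 4. I love science "
         "fiction but I'm not too big on sword/lightsaber battles, though I do "
         "enjoy shooting and first person. I prefer adventures over competitive. "
         "I don't want roller coaster nausea. Are there games similar to Alyx?")

payload = {
    "model": "gpt-3.5-turbo",  # placeholder model name
    "messages": [
        {"role": "system",
         "content": "You are a VR game recommender. Only suggest titles with a "
                    "'Comfortable' rating and explain why each one fits."},
        {"role": "user", "content": query},
    ],
}

resp = requests.post(API_URL,
                     headers={"Authorization": f"Bearer {API_KEY}"},
                     json=payload,
                     timeout=30)
print(resp.json()["choices"][0]["message"]["content"])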

But for now, the VR app stores feed us whatever, and we stop playing VR games after we buy the equivalent of 1982's Atari 2600 E.T. (or some other storied crap game of lore) and get buyer's remorse that lasts for years -- whether it's from the VR sickness or from the wasted hours forcing ourselves to play through, rather than from the money -- and we just put our VR headsets away in the closet. I wish the VR app stores auto-curated better to our VR likes and dislikes. Barf.

If you loved Alyx, try these similar large open-space VR storytelling exploration adventures with good graphics:

- Star Wars: Tales from Galaxy's Edge + the new DLC. It's far less buggy now, and the DLC made it more interesting.
- Red Matter 2, better than the original Red Matter.
- Lone Echo, an indie game to whet your appetite for its sequel (Lone Echo II)
- Resident Evil 4 (unless you didn't like the dark/scary sections of Alyx)

All of these story adventures sit at the equivalent of "Very Positive" or "Overwhelmingly Positive" on Steam (if on Steam), or at a 4.7-4.8+ star rating (if on the Oculus Store), with roughly 85%-95% at the review sites and Metacritic -- accolade numbers similar to Half-Life: Alyx's.

All of these story adventures are "Comfortable" rated unlike VR rollercoasters, so if you stomached Alyx, you generally can stomach the above (at appropriate configuring, e.g. teleport vs freemove preference).

I've also played all of them. It's hard to meet the high bar of Alyx so I'm calling out the best contenders here.

Alternate Method of VR Game Discovery: Use an AI Agent and Converse to Refine

This is a new method of game discovery that became available to us in 2023. Jump on board. Here are ChatGPT's recommendations. Its knowledge only covers VR games up to 2020-2021, but the last 2 of the 3 games it recommends are actually right up my alley. That's already a slightly better job than the app store vendors -- Steam Store, Oculus/Meta Store, Sony PlayStation Store, Microsoft Xbox Store -- at least for non-current games. An adequate B+ grade job, ChatGPT!

[attached screenshot: ChatGPT's VR game recommendations]


It nailed 2 out of 3 VR games that I felt were worth playing after Half-Life: Alyx. Not as good as Alyx, but they would have kept me enjoying VR if I had used ChatGPT to recommend them, rather than having played these games at random.

So I'm impressed that ChatGPT's batting average is better than my Steam Store batting average.

In five years, AI agents will be excellent videogame recommenders. Woo hoo.

IMO, Steam needs to add the new Microsoft/OpenAI GPT APIs, with a text box for natural-language, personalized, conversational "recommend-me-a-game" queries. It might be semi-reliable today, but it will only get better. Or make it part of the existing search box -- when Steam detects a full sentence typed, it switches to a chatbot screen where you converse with a game-recommender AI that's even better at this than ChatGPT.
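The "detects a sentence typed" part wouldn't need to be fancy, either. A dumb heuristic like the sketch below (purely illustrative thresholds I made up, not anything Valve actually does) would be enough to decide when to hand the search box over to the chatbot:

Code:
def looks_like_conversational_query(text: str) -> bool:
    """Crude guess at whether a search-box entry is a natural-language request
    rather than a title/keyword lookup. Thresholds are purely illustrative."""
    words = text.strip().split()
    hint_words = {"what", "which", "recommend", "similar", "like", "games",
                  "should", "i", "me", "something"}
    normalized = {w.lower().strip(",.?!") for w in words}
    return (
        len(words) >= 6                       # keyword searches are usually short
        or text.strip().endswith("?")         # explicit question
        or len(hint_words & normalized) >= 3  # several conversational hint words
    )

# Route to the chatbot screen vs. the normal keyword search
for q in ["half life alyx", "recommend me something like Alyx but less scary?"]:
    mode = "chatbot" if looks_like_conversational_query(q) else "keyword search"
    print(f"{q!r} -> {mode}")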

Heck, Microsoft just spent $10 billion on OpenAI for a future sequel to ChatGPT. And you know, since Xbox Live subscribers pay money, Microsoft might someday decide that this is an excellent free inclusion (you'd speak into your gamepad's or headset's microphone, or type it in if you were deaf or speech impaired). The game recommendations would show up on screen right after you queried. Who knows?

I would wager that smarter AI-agent-based content recommenders (10x to 100x better than ChatGPT within five years) might even be a revolution akin to the cable -> streaming migration. Just ask your favorite AI agent and get an excellent batting average of actually liking the games recommended to you, rather than relying on a giant corporation's decision about what to recommend.

Games, movies, phone apps, virtual reality, travel destinations, etc.

Less of a disappointment lottery, a higher percentage of purchases actually enjoyed.
 
Some day I should go back and finish this. Only HL game that I never finished - got bored about half way through, and when I came back, I had no idea what I was doing anymore.
 
AH nice I'm happy to hear this, probably the first game I'm going to check out once my headset comes in the mail.
 
I really enjoyed this game till I got stuck on that stupid puzzle where you rotate beams or whatever, and I put it away. I need to give it another go...
 
While playing HL: Alyx last week, a poison headcrab jumps out of the ceiling tiles. I race to the nearest door and try to close it so I can reload my pistol. The damn spider thing jumps and lands at my feet, blocking me from closing the door. I take 3 actual kicks at the MFer before my brain realizes I am in VR and kicking isn't going to do shit.

Can't get this kind of immersion in any other gaming environment.....PERIOD!
 
While playing HL: Alyx last week, a poison headcrab jumps out of the ceiling tiles. I race to the nearest door and try to close it so I can reload my pistol. The damn spider thing jumps and lands at my feet, blocking me from closing the door. I take 3 actual kicks at the MFer before my brain realizes I am in VR and kicking isn't going to do shit.

Can't get this kind of immersion in any other gaming environment.....PERIOD!
I’ll admit I’ve done the same thing myself.
 
I’ll admit I’ve done the same thing myself.
I just got my headset Saturday afternoon and played Sunday night; I've never experienced anything like this game. I hope PCVR stays alive; it's a little disheartening to look at the reddit community and see how dire they seem to think it is currently. At any rate, I freaked out when a headcrab jumped at me for the first time. I absolutely love the game; I just absolutely cannot play it anywhere close to bedtime again, haha. Feels pretty solid on the Reverb G2; tracking can get a tad wonky when I have my hands one in front of the other, but nothing to complain about. Actually fits my glasses (sort of), but I'm still ordering lenses. Really an incredible and transformative gaming experience. Looking forward to more, and really glad something gets me to stand up after sitting and coding most of the day.
 
I just got my headset Saturday afternoon and played Sunday night; I've never experienced anything like this game. I hope PCVR stays alive; it's a little disheartening to look at the reddit community and see how dire they seem to think it is currently.
It sadly is dire, in a lot of ways. I see 3 kinds of VR users:
1. Casuals (beat saber, sound boxing, etc)
2. Normal gamers (Alyx, Lone Echo, Robo Recall, etc)
3. Sim folk.

Group 1 is doing fine - that's what the Quest excels at, in fact: being usable by almost anyone, anywhere.
Group 3 is doing fine, but it's a niche of a niche and requires such a start-up investment that it's insane for your average folk, so it doesn't drive a lot of sales for the industry as a whole.

The problem is that group 2 isn't doing much right now. I honestly never finished Alyx (got bored). Never finished SW Squadrons. I played through Lone Echo 1 (great till the end, which sucked), Robo Recall (hehehe), and Arizona Sunshine (lolzombie). The wife played Obduction and the one with the mouse, and a couple of horror games. That's two VR headsets that have since been used for option 1 - lots and lots of sound boxing and beat saber - and don't need upgrading because, well, anything can run those games. Those headsets are almost 6 years old. There hasn't been a game since that made me want to pick up the VR headset. So yeah... it's dire, because the folks in group 3 upgrade regularly, group 1 buys a Quest and doesn't care about PCVR, and group 2... just isn't compelling enough for extended use in my experience, sadly.

At any rate, I freaked out when a headcrab jumped at me for the first time. I absolutely love the game; I just absolutely cannot play it anywhere close to bedtime again, haha. Feels pretty solid on the Reverb G2; tracking can get a tad wonky when I have my hands one in front of the other, but nothing to complain about. Actually fits my glasses (sort of), but I'm still ordering lenses. Really an incredible and transformative gaming experience. Looking forward to more, and really glad something gets me to stand up after sitting and coding most of the day.
Glad you love it - sincerely hope that more development comes!
 
What may be good for PCVR is Valve's rumored in-headset PCVR

Essentially, a marriage of Valve's powerful Steam Deck II and Valve's Index VR headset.

2024 or 2025?
 
What may be good for PCVR is Valve's rumored in-headset PCVR

Essentially, a marriage of Valve's powerful Steam Deck II and Valve's Index VR headset.

2024 or 2025?
Not nearly powerful enough except as a quest competitor, unless they’re planning on doing something similar to airlink or the link cable or whatever it’s called.

Then again, maybe it would be for more of group 2 - but VR on Linux is a massive fool's errand right now, so you have a chicken-and-egg problem: a headset with few games, and few headsets sold to make games for.
 
VR on Linux
That's what Quest 2 is ... in a roundabout way.

Quest 2 runs Linux which Android runs on top of.

What will matter are the massive improvements in Proton stability, as well as Linux-native recompiles of the same PCVR software (with Windows-API-clone helper libraries to minimize code changes).

This would be more stable than executables originally compiled only for Windows. Valve is contributing tons of open-source work that seems to be going roughly in this direction(-ish). Game developers can then either release one package for both, or minor-modification ports for each (for better native performance).

By the way, have you played Red Matter 2? Some shockingly incredible optimization there; it beats the PCVR quality of 90% of year-2017 PCVR games, and is competitive with many of today's indies. The developer of Red Matter 2, I believe, did a number of optimizations of lore, and you see all those reviews/accolades of PCVR quality on a Quest 2 in that specific wide-open-space adventure -- especially when you're spacewalking outside that space station and see planet Saturn in its full immersive glory.

(Just look at Red Matter 2 reviews -- hundreds saying PCVR quality in this specific "FPS-adventure" genre game. Look at the reviews before commenting, or at least read the RoadToVR commentary confirming the PCVR look-and-feel of Red Matter on Quest 2, or see the "next-gen Quest 2 graphics" acclaim on a Google search. Believe me, I've played it. At least please click both links before you reply to this part.)

[attached screenshot]


Not nearly powerful enough except as a quest competitor
It doesn't need to be powerful enough.

Just merely usable and good enough.

It just needs to save the PCVR market for the rest of us with more powerful PCVR rigs.

When we have our RTX 5090's or Radeon RX 9800 XT* Reloaded Editions in our gaming rigs, we might be thanking a "Steam Vive Freedom Edition" for helping the market out -- by letting developers release the same software on Steam for both traditional gaming-rig PCVR and standalone PCVR (a PC built into the headset).

Who knows?

(...*homage to the old Radeon 9800. Just a progression from today's 7800 -> 8800 -> 9800 as the GPU makers recycle old numbers...)

(...meanwhile, while 100% of my non-VR PC gaming is on my RTX rigs, 90% of my VR-related gaming is on a Quest 2 (80% completely standalone, 20% wireless PCVR streaming). After completing Red Matter 2, my current game is the DLC edition of Star Wars: Tales from the Galaxy's Edge... the closest experience to Half-Life: Alyx... This speaks to the overall technical quality of the Quest 2 and its easy roomscale-anywhere capability. If anything is going to save the PCVR market, it will need help from standalone PCVR systems in parallel with traditional PCVR...)


I can say this with a straight face: a well-designed standalone PCVR headset (a PC built into the headset) will probably help sell 10x+ more PCVR games, helping developers make more games for PCVR, which helps the rest of us who also have powerful desktop towers and want to play traditional PCVR too. The technology should be mature enough by roughly 2025 to produce a standalone PCVR headset (i.e. a PC built into a headset, capable of playing PCVR games).

Correct, you won't get graphics as good as from an external computer for an excellent flagship PCVR game like Alyx, which I absolutely enjoyed and played through. But some incredible optimizations since then already let top-class standalone games out-compete average year-2017 PCVR graphics quality. Don't believe me? Get proper standalone games such as Red Matter 2 (better graphics than the original) and Tales from the Galaxy's Edge (the optimized DLC version), and you will see the light.

This is precisely the point I am making.
 
That's what Quest 2 is ... in a roundabout way.

Quest 2 runs Linux which Android runs on top of.

Sure - but that's much like a console: games designed for it or cross-compiled (thankfully that's easier to do now, given the low-level APIs). The scope of most of those games is limited compared to what folks were planning for PCVR.
What will matter are the massive improvements in Proton stability, as well as Linux-native recompiles of the same PCVR software (with Windows-API-clone helper libraries to minimize code changes).

This would be more stable than executables originally compiled only for Windows. Valve is contributing tons of open-source work that seems to be going roughly in this direction(-ish). Game developers can then either release one package for both, or minor-modification ports for each (for better native performance).
We’ll see. I hope you’re right.
By the way, have you played Red Matter 2? Some shockingly incredible optimization there; it beats the PCVR quality of 90% of year-2017 PCVR games -- the developer of Red Matter 2, I believe, did a number of optimizations of lore, and you see all those reviews/accolades of PCVR quality on a Quest 2 in that specific wide-open-space adventure, especially when you're spacewalking outside that space station where you see planet Saturn in its full immersive glory. Just look at the reviews -- hundreds saying PCVR quality in this specific Alyx-genre game.
Don't have a Quest 2 - won't buy one till my original Rift dies, so I'm still just using that. Can't use any of the Quest 2-only games, and don't see a need to pick one up just for that set of games. Same reason I don't own a console - either they get good enough to port to PC, or I'll wait till someone emulates them.
It doesn't need to be powerful enough.

Just merely usable and good enough.

It just needs to save the PCVR market for the rest of us with more powerful PCVR rigs.
Sure. But will it? I somehow wonder… competing against a big name there with mega
When we have our RTX 5090's or Radeon RX 9800 XT* Reloaded Editions in our gaming rigs, we might be thanking a "Steam Vive Freedom Edition" for helping the market out -- by letting developers release the same software on Steam for both traditional gaming-rig PCVR and standalone PCVR (a PC built into the headset).

Who knows?
Good dream. I really hope you’re right.
(...*homage to the old Radeon 9800. Just a progression from today's 7800 -> 8800 -> 9800 as the GPU makers recycle old numbers...)

(...meanwhile, while 100% of my non-VR PC gaming is on my RTX rigs, 90% of my VR-related gaming is on a Quest 2 -- right now my current game is the DLC edition of Star Wars: Tales from the Galaxy's Edge... the closest experience to Half-Life: Alyx... This speaks to the overall technical quality of the Quest 2 and its easy roomscale-anywhere capability. If anything is going to save the PCVR market, it will need help from standalone PCVR systems in parallel with traditional PCVR...)
I barely play VR anymore. Except beat saber. Too much effort. Even with a dedicated system - after work I don’t feel like dancing around on my feet anymore unless I’m slicing a block 😂

But I agree that stand-alone is likely the bridge.
 
But I agree that stand-alone is likely the bridge.
Let's remind ourselves: Valve is not Facebook.

That's enough to make the enthusiast population and enthusiast YouTubers (console and PC) buy a standalone headset and rave about it.

And influence everybody else into buying it too -- both the console gamer and PC gamer audiences combined.

Especially if Valve designs it to run both APK-based games (Quest 2 API emulation) and EXE-based games (Wine/Proton emulation), for easy ports / zero-effort ports of any Android VR and Windows PCVR games. Developers will race to spend the few days it takes to port their games to run optimally on it.

Initially you may have to omit some obscure Microsoft APIs or Meta APIs but extra tools can be made (including AI-based code-porting tools) to mop up the incompatibility mess.

There are many innovations happening concurrently that make this more realistic than you think.

Valve's recent home-run successes in hardware:
Valve originally failed on a lot of hardware (e.g. the original Steam Box), but they've hit some near home runs with the Valve Index and the Steam Deck. Not perfect, but they have become relatively profitable products for Valve. And the Index and the Steam Deck are mergeable (by 2025-ish) into a standalone PCVR headset, especially if a future Qualcomm Snapdragon reaches M2-class territory by 2025.

Look at how certain Steam Deck games became more stable after developers obtained a Steam Deck and re-optimized their games to run better on the Steam Deck. So stability of certain games is light years better. Valve just needs to incubate by opening those barn doors wide.

And look at how Valve beautifully redesigned the Steam UI for Steam Deck. Guess what it looks like? Oculus Store. Another hint.

The question is the price -- how low Valve dares to price the Valve Index Wireless Edition or Valve Index Freedom Edition or whatever they decide to call it.

And heck, if the buyers fall in love, they can buy a VR rig and stream over built-in WiFi 6. Just because they have a standalone doesn't mean they're held back from later buying a PC to improve the graphics.

Also, mobile graphics have improved much faster than PC graphics; the delta between high-end-league mobile graphics and high-end-league PC graphics is now less than a decade. And by 2025, it will already be almost a decade since 2017.

Killer app for some of us: the same Steam game download, with Steam Cloud support, would work on both standalone PCVR and external-gaming-rig PCVR and cloud-sync, so you can continue playing your savegame at a family member's place or at a hotel. And then when you come home, you resume playing your game in full RTX 5090 PCVR glory. No different from syncing two gaming PCs to the same savegames.

Steam Deck is just version 1.0, and they want to do a 2nd edition first. A Valve Index standalone (with the guts of a Steam Deck 3.0 built into the headset) would effectively be version 3.0 by then. That's relative maturity, I'd hope.

A priceless business-plan masterpiece, if you ask me -- if Valve pulls it off in a stable way -- as the long-established game-store gorilla. Valve may become the figurative PCVR version of the Nintendo that saved the market after the Video Game Crash (the historical metaphor for PCVR's slow death).

Now if you want to read more:
___

Frame generation context:
DLSS 5.0 / FSR 5.0 / XeSS 5.0 will be out by then -- technologies that did not exist in 2017 -- and may be combined in a lagless way with reprojection. A three-layer retroactive reprojection algorithm can rewind a laggy-rendered frame to 0ms latency, thanks to head-tracker coordinates backfeeding into the reprojector many times between original rendered frames. Even 100ms of interpolation lag can be rewound to 0ms via retroactive reprojection, so we might move to multilayered frame-rendering pipelines -- the GPU equivalent of the I-frame / B-frame / P-frame hierarchy in video compression, where only occasional frames are fully original and the rest are predicted, yet Netflix looks perfect.

Now imagine frame generation technologies becoming perceptually lagless AND artifactless. The industry is already working on these techniques. This video is quite fun to watch, but it only barely scratches the surface. It can be applied to UE5-quality frames, for example. And there's nothing preventing reprojection from "rewinding" interpolation latency for 6DoF positionals (mouselooks, strafes, etc.), as one example, so that only a few original frames need to be rendered per second, with the rest being reprojections (or reprojection+interpolation hybrids), in a hypothetical DLSS 5.0 / FSR 5.0 / XeSS 5.0 with some new algorithm that also accounts for enemy-object movements.

Mind you, this is not a perfect scenario -- original rendered frames are better -- but you could render better-than-PCVR frames at original PCVR resolutions, easily at less than 1/10th to 1/100th of the power budget, enough to fit a mobile power budget. Some crazy bleep is already being experimented with.
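For anyone wondering what "rewinding" a stale frame means concretely: the rotational half of reprojection is just the delta between the head pose the frame was rendered with and the freshest head pose at scanout. A minimal sketch, assuming unit quaternions and ignoring the positional correction and image warp that a real compositor also performs:

Code:
import math

# q_render  = head orientation the frame was rendered for (unit quaternion w,x,y,z)
# q_display = freshest head orientation sampled just before scanout
# The compositor warps the finished frame by delta = q_display * conjugate(q_render),
# so an old frame gets re-aimed to where the head is actually pointing right now.

def q_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def reprojection_delta(q_render, q_display):
    """Rotation that re-aims a frame rendered at q_render to the pose q_display."""
    return q_mul(q_display, q_conj(q_render))

# Example: frame rendered 100 ms ago, head has yawed 5 degrees since then.
half = math.radians(5.0) / 2
q_render = (1.0, 0.0, 0.0, 0.0)                          # identity
q_display = (math.cos(half), 0.0, math.sin(half), 0.0)   # small yaw about Y
print(reprojection_delta(q_render, q_display))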

Software developer context:
Being willing to adopt new technologies, I already use GitHub Copilot (an AI-based pair programmer) for my Visual Studio development and I accept roughly ~20% of its autocompletes, and AI-based code porting could also really help here -- AI is already a surprisingly capable code-porting assistant at medium scopes, a technology that didn't exist five years ago. The ability to port PCVR games to Linux in a crash-proof, optimized way will become vastly easier within five years, once AI's ability to recognize the unstable parts of software improves. Some game studios already run their code through such AI systems to find common security holes (things like buffer overruns in C++) missed by their developers, and that approach could very well be repurposed into recognizing what runs stably under Proton, etc., given an appropriate Steam Deck training set.

As a software developer, I am already witnessing movement towards AI-based code-stabilizing assistants. It's still early days, but given that AI innovation is moving at a "10 years per year" pace at the moment, I have no doubt it will play a role in speeding up viability. Even NVIDIA is now doing AI-based graphics driver optimization -- and think about it: the same kind of optimizer (trained on a different training set) could also help fix Proton crashes, improve game performance, and/or help with game porting, and raise Steam Deck reliability faster with fewer developer hours by the Steam Deck 2.0 cycle. What you may not know is that there have already been AI-assisted code commits to the Linux kernel and to Microsoft Windows, where a committer's AI coding assistant discovered a faux pas (a security issue or a race condition), which influenced a new commit to fix said bug.

Big companies already regularly run AI-based security-hole scanners on their software, as a backup layer behind their security-specialized software engineers. This type of bleep is only going to become more common, especially as the autocompletes may become good enough for me to accept 30-40% of them within a year in my Visual Studio work, and as modern-day developers install more AI-based coding assistants across all lines of software development. Put all of the above together in a large-software porting or API-simulation context: Wine/Proton reliability will continue to increase (thanks to AI assistants), while game developers will also learn to avoid APIs that fail in Wine/Proton (thanks to AI alerts), with both ends of the problem candle burning towards a major reduction/elimination of problems. You're hearing this from a software developer already using AI-based coding assistants.
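As a trivial illustration of the kind of mechanical signal such a tool keys on -- and to be clear, this is a toy grep, not an AI, and the API list is just a few well-known Win32 calls that a Linux-native port would have to wrap or replace:

Code:
import pathlib
import re

# Toy illustration: flag source lines that touch Windows-only Win32 calls,
# i.e. the spots a Linux-native port (or a Proton compatibility report) would
# want a human -- or a smarter AI assistant -- to look at first.
WINDOWS_ONLY_APIS = ["CreateFileW", "RegOpenKeyExW", "WaitForSingleObject",
                     "SetThreadAffinityMask", "GetAsyncKeyState"]
PATTERN = re.compile(r"\b(" + "|".join(WINDOWS_ONLY_APIS) + r")\s*\(")

def flag_windows_calls(source_root):
    for path in pathlib.Path(source_root).rglob("*.cpp"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if PATTERN.search(line):
                yield f"{path}:{lineno}: {line.strip()}"

if __name__ == "__main__":
    for hit in flag_windows_calls("./src"):   # "./src" is a placeholder path
        print(hit)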

The bottom line is that AI-coding-doubter threads as recent as 2021 -- pre-ChatGPT and pre-GitHub Copilot (with some vastly superior versions in the works) -- have been rendered outdated in less than two years. Today's developers must run hard to keep up by integrating AI into their workflows to avoid being laid off. With my actual experience of AI assisting my coding now, I'm seeing some scary Jacquard-loom-type moments playing out in real time. (If you're a software developer, better get ready to keep up with the times. Being a star programmer in a common programming environment in a rich country may no longer be enough to maintain the same salary in a few years -- you need to be a star programmer who is also accepting, trying out, and at least selectively using AI tools, in a highly skilled "use AI correctly" way.)

But the silver lining is that, concurrently, AI is going to make VR programming a hell of a lot easier within this decade -- it has already begun to slowly lower the difficulty bar of VR programming. This will help game developers and studios generate (and port) more quality AAA VR content, especially in a future market flush with low-end and high-end options (Valve, Apple, etc.). (Even an Apple AR/VR headset would obviously quickly gain a third-party app-store app that is just a streamer enabling wireless WiFi 6 PCVR from a PC -- a no-brainer, fairly simple app that pulls both Apple's expensive headset and its cheaper headset into the PCVR world.) If this pace keeps up, by next decade AI coding will probably be smart enough to figure out how to port a non-VR game to be VR-friendly. Maybe not. But just think: with both the low end and the high end covered properly, AND the coding-difficulty playing field levelled a bit, PCVR will probably be reborn as PC-optional standalone PCVR.

Apple-style "One More Thing":
Half-Life: Alyx Standalone Edition comes preinstalled for free on Valve's standalone, with graphics roughly equal to a GTX 1080 Ti. (Hint: Red Matter 2 is "already there", so this is realistically achievable if certain Proton instability problems are solved and/or it is recompiled Linux-native with helper API libraries.) Bookmark this post, and you'll see how uncannily accurate this prediction is sometime between 2025 and 2030, if a nuclear World War III does not happen.

Now you can see the converging factors, and recognize the collision that resurrects "PCVR" popularity (in a roundabout way).

Bottom line: We're just in a PCVR lull.

[Edit: This post has been massively expanded]
 
I honestly hope you're right. VR was mindblowing at first - but here we are, 6 years later, and I play beat saber... and that's it. There hasn't been a game compelling enough to make me want to put the headset on for more than 20 minutes since Lone Echo, and given how utterly horrible the end of that game was (especially since the rest was so damned good), it soured me on the whole experience... and that was three years past. Not a single VR game since has made me say "ok, that's worth picking up and dealing with."

And sadly no, Alyx wasn't that game... just didn't do anything for me - and this is someone that hauled that damned gnome through all of Episode 2 to get that achievement.
 
Let's remind ourselves: Valve is not Facebook.

That's enough to make the enthusiast population and enthusiast YouTubers (console and PC) buy a standalone headset and rave about it.

And influence everybody else into buying it too -- both the console gamer and PC gamer audiences combined.

Especially if Valve designs it to run both APK-based games (Quest 2 API emulation) and EXE-based games (Wine/Proton emulation), for easy ports / zero-effort ports of any Android VR and Windows PCVR games. Developers will race to spend the few days it takes to port their games to run optimally on it.

Initially you may have to omit some obscure Microsoft APIs or Meta APIs but extra tools can be made (including AI-based code-porting tools) to mop up the incompatibility mess.

There are many innovations happening concurrently that make this more realistic than you think.

Valve's recent home-run successes in hardware:
Valve originally failed on a lot of hardware (e.g. the original Steam Box), but they've hit some near home runs with the Valve Index and the Steam Deck. Not perfect, but they have become relatively profitable products for Valve. And the Index and the Steam Deck are mergeable (by 2025-ish) into a standalone PCVR headset, especially if a future Qualcomm Snapdragon reaches M2-class territory by 2025.

Look at how certain Steam Deck games became more stable after developers obtained a Steam Deck and re-optimized their games to run better on the Steam Deck. So stability of certain games is light years better. Valve just needs to incubate by opening those barn doors wide.

And look at how Valve beautifully redesigned the Steam UI for Steam Deck. Guess what it looks like? Oculus Store. Another hint.

The question is the price -- how low Valve dares to price the Valve Index Wireless Edition or Valve Index Freedom Edition or whatever they decide to call it.

And heck, if the buyers fall in love, they can buy a VR rig and stream over built-in WiFi 6. Just because they have a standalone doesn't mean they're held back from later buying a PC to improve the graphics.

Also, mobile graphics have improved much faster than PC graphics; the delta between high-end-league mobile graphics and high-end-league PC graphics is now less than a decade. And by 2025, it will already be almost a decade since 2017.

Killer app for some of us: the same Steam game download, with Steam Cloud support, would work on both standalone PCVR and external-gaming-rig PCVR and cloud-sync, so you can continue playing your savegame at a family member's place or at a hotel. And then when you come home, you resume playing your game in full RTX 5090 PCVR glory. No different from syncing two gaming PCs to the same savegames.

Steam Deck is just version 1.0, and they want to do a 2nd edition first. A Valve Index standalone (with the guts of a Steam Deck 3.0 built into the headset) would effectively be version 3.0 by then. That's relative maturity, I'd hope.

A priceless business-plan masterpiece, if you ask me -- if Valve pulls it off in a stable way -- as the long-established game-store gorilla. Valve may become the figurative PCVR version of the Nintendo that saved the market after the Video Game Crash (the historical metaphor for PCVR's slow death).

Now if you want to read more:
___

Frame generation context:
DLSS 5.0 / FSR 5.0 / XeSS 5.0 will be out by then -- technologies that did not exist in 2017 -- and may be combined in a lagless way with reprojection. A three-layer retroactive reprojection algorithm can rewind a laggy-rendered frame to 0ms latency, thanks to head-tracker coordinates backfeeding into the reprojector many times between original rendered frames. Even 100ms of interpolation lag can be rewound to 0ms via retroactive reprojection, so we might move to multilayered frame-rendering pipelines -- the GPU equivalent of the I-frame / B-frame / P-frame hierarchy in video compression, where only occasional frames are fully original and the rest are predicted, yet Netflix looks perfect.

Now imagine frame generation technologies becoming perceptually lagless AND artifactless. The industry is already working on these techniques. This video is quite fun to watch, but it only barely scratches the surface. It can be applied to UE5-quality frames, for example. And there's nothing preventing reprojection from "rewinding" interpolation latency for 6DoF positionals (mouselooks, strafes, etc.), as one example, so that only a few original frames need to be rendered per second, with the rest being reprojections (or reprojection+interpolation hybrids), in a hypothetical DLSS 5.0 / FSR 5.0 / XeSS 5.0 with some new algorithm that also accounts for enemy-object movements.

Mind you, this is not a perfect scenario -- original rendered frames are better -- but you could render better-than-PCVR frames at original PCVR resolutions, easily at less than 1/10th to 1/100th of the power budget, enough to fit a mobile power budget. Some crazy bleep is already being experimented with.

Software developer context:
Being willing to adopt new technologies, I already use GitHub Copilot (an AI-based pair programmer) for my Visual Studio development and I accept roughly ~20% of its autocompletes, and AI-based code porting could also really help here -- AI is already a surprisingly capable code-porting assistant at medium scopes, a technology that didn't exist five years ago. The ability to port PCVR games to Linux in a crash-proof, optimized way will become vastly easier within five years, once AI's ability to recognize the unstable parts of software improves. Some game studios already run their code through such AI systems to find common security holes (things like buffer overruns in C++) missed by their developers, and that approach could very well be repurposed into recognizing what runs stably under Proton, etc., given an appropriate Steam Deck training set.

As a software developer, I am already witnessing movement towards AI-based code-stabilizing assistants. It's still early days, but given that AI innovation is moving at a "10 years per year" pace at the moment, I have no doubt it will play a role in speeding up viability. Even NVIDIA is now doing AI-based graphics driver optimization -- and think about it: the same kind of optimizer (trained on a different training set) could also help fix Proton crashes, improve game performance, and/or help with game porting, and raise Steam Deck reliability faster with fewer developer hours by the Steam Deck 2.0 cycle. What you may not know is that there have already been AI-assisted code commits to the Linux kernel and to Microsoft Windows, where a committer's AI coding assistant discovered a faux pas (a security issue or a race condition), which influenced a new commit to fix said bug.

Big companies already regularly run AI-based security-hole scanners on their software, as a backup layer behind their security-specialized software engineers. This type of bleep is only going to become more common, especially as the autocompletes may become good enough for me to accept 30-40% of them within a year in my Visual Studio work, and as modern-day developers install more AI-based coding assistants across all lines of software development. Put all of the above together in a large-software porting or API-simulation context: Wine/Proton reliability will continue to increase (thanks to AI assistants), while game developers will also learn to avoid APIs that fail in Wine/Proton (thanks to AI alerts), with both ends of the problem candle burning towards a major reduction/elimination of problems. You're hearing this from a software developer already using AI-based coding assistants.

The bottom line is that AI-coding-doubter threads as recent as 2021 -- pre-ChatGPT and pre-GitHub Copilot (with some vastly superior versions in the works) -- have been rendered outdated in less than two years. Today's developers must run hard to keep up by integrating AI into their workflows to avoid being laid off. With my actual experience of AI assisting my coding now, I'm seeing some scary Jacquard-loom-type moments playing out in real time. (If you're a software developer, better get ready to keep up with the times. Being a star programmer in a common programming environment in a rich country may no longer be enough to maintain the same salary in a few years -- you need to be a star programmer who is also accepting, trying out, and at least selectively using AI tools, in a highly skilled "use AI correctly" way.)

But the silver lining is that, concurrently, AI is going to make VR programming a hell of a lot easier within this decade -- it has already begun to slowly lower the difficulty bar of VR programming. This will help game developers and studios generate (and port) more quality AAA VR content, especially in a future market flush with low-end and high-end options (Valve, Apple, etc.). (Even an Apple AR/VR headset would obviously quickly gain a third-party app-store app that is just a streamer enabling wireless WiFi 6 PCVR from a PC -- a no-brainer, fairly simple app that pulls both Apple's expensive headset and its cheaper headset into the PCVR world.) If this pace keeps up, by next decade AI coding will probably be smart enough to figure out how to port a non-VR game to be VR-friendly. Maybe not. But just think: with both the low end and the high end covered properly, AND the coding-difficulty playing field levelled a bit, PCVR will probably be reborn as PC-optional standalone PCVR.

Apple-style "One More Thing":
Half-Life: Alyx Standalone Edition comes preinstalled for free on Valve's standalone, with graphics roughly equal to a GTX 1080 Ti. (Hint: Red Matter 2 is "already there", so this is realistically achievable if certain Proton instability problems are solved and/or it is recompiled Linux-native with helper API libraries.) Bookmark this post, and you'll see how uncannily accurate this prediction is sometime between 2025 and 2030, if a nuclear World War III does not happen.

Now you can see the converging factors, and recognize the collision that resurrects "PCVR" popularity (in a roundabout way).

Bottom line: We're just in a PCVR lull.

[Edit: This post has been massively expanded]
What kind of programming do you do? ChatGPT for Unity (C#) hasn't been much of a help for us at work, aside from getting some boilerplate answers I could get on Stack Overflow. I work exclusively in Unity at the moment, so I'm still kind of waiting on it to be worthwhile. Also, without it understanding software architecture and my code base it's not as helpful as it could be, but I'm sure at some point that will be true (not with LLMs, though; a future AI -- LLMs don't understand anything, they're basically interpreters). I'm not sure if it's going to speed up efficiency and get engineers shipping software more quickly, or if companies will keep similar deadlines and lay off more engineers / developers. The best thing I've had it do so far was check my math when I was implementing a bit of trig for spawning some items in a 2D game I'm working on. Copilot seemed less impressive TBH.
I just found the shotgun in Alyx; I didn't realize how absolutely terrified I'd be. I wish I knew what kind of FPS I was getting -- I'll have to do some research into monitoring tools. It seems a little slow from a 6800 XT and 5800X, but I could be wrong.
 
<Slightly Offtopic>

Fun side tangent alert -- non-programmers, just skip past

What kind of programming do you do? ChatGPT for Unity (C#) hasn't been much of a help for us at work, aside from getting some boilerplate answers I could get on Stack Overflow. I work exclusively in Unity at the moment, so I'm still kind of waiting on it to be worthwhile.
I don't use ChatGPT (it's not as good as Copilot for my specific needs).
I use the paid subscription (ugh) version of GitHub Copilot, which is much more useful, even if it's only based on GPT-3 rather than GPT-3.5.

It's only 20% useful to me at the moment, but that 20% is a big timesaver already. You can press a hotkey to see 10 code autocomplete suggestions, many of which are informed by your earlier code (it recognizes your coding patterns) if you permit it to do so. It suggests lots of dumb code, but it also suggests a lot of code that saves most of the keypresses (~90%) for the 20% of autocompletes I decide to accept.

It means my coding hours are reduced slightly, allowing me to optimize other code better or spend more time working on other coding. Most of the time, it's skeletoning out what I'm intending to do.

Also, GitHub Copilot usually starts out terrible, but keep using it and pick up some new habits (write your comments a little sooner than usual), since Copilot uses the comments in your entire project (module comments, class comments, function/method comments, per-line comments) as context to refine its suggestions. It's roughly GPT-3 "smart" at reading your comments (alongside already-written code) to guess your next code, in ephemeral IntelliSense-style popups that you can ignore. They haven't even upgraded it to the ChatGPT level of interpreting your code comments (GPT-3.5), but that's probably going to hit this year. Microsoft is spending $10 billion on this bleep -- you can bet they'll upgrade Copilot to newer GPT engines.

You can instead do the reverse (write your function first and let it auto-comment your existing function, if you prefer to write code without comments first), if you only want to use Copilot as a comment-writing timesaver instead of a code-writing timesaver. For that purpose it's not all that good yet, being GPT-3-based instead of GPT-3.5-based, but it works that way if you prefer. However, I save more keypresses if I write at least 50% of my comments first and the code after. For now, it's better to let Copilot auto-suggest some of my code as an IntelliSense-style autocomplete system. You can have a 5% acceptance rate or a 25% acceptance rate (i.e. 25% of what GitHub Copilot suggests exactly matches what you're about to write next), depending on your language, how often you pre-write your comments before your code, and how willing you are to switch from post-commenting to pre-commenting habits. It has caused a minor change in my coding habits: now about 50% of my comments are typed before I type the code. Doing this alone roughly doubles the GitHub Copilot quality.

You can even tell Copilot what to do by adding a function comment (///) describing the purpose of your function very clearly (ChatGPT-style), and then beginning to type the function. Copilot recognizes the purpose of the function from that context and starts suggesting autocompletes (IntelliSense-style), from 1 to 100 lines' worth of "suggested skeleton code" which I can instantly accept by tapping the Tab key -- like I'd do to autocomplete a function name, except it autocompletes several lines of code it guesses I was about to write next. (Or you press another key to see 10 alternate suggestions.) It all happens in real time, as text autocompletes you can ignore. Heck, you can completely ignore GitHub Copilot and accept 0% of its suggestions -- it's integrated only as an IntelliSense enhancement.
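To show what I mean by the comment-first workflow, here's a Python-flavored analogue (the idea is language-agnostic, and the function itself is something I made up purely for illustration). I type the descriptive comment and the signature; the body is the sort of multi-line skeleton that shows up as a single Tab-accepted suggestion, which I then review and tweak exactly like code I typed myself:

Code:
# 1) I type a clear, intent-describing comment first (in C# this would be a
#    /// doc comment). That comment is the main context the assistant reads.

def comfort_vignette_strength(linear_speed, angular_speed,
                              max_linear=3.0, max_angular=90.0):
    """Return a 0..1 vignette strength for VR comfort: the faster the player
    smooth-locomotes (m/s) or turns (deg/s), the stronger the tunnel vision."""
    # 2) Everything below is the kind of skeleton an AI autocomplete tends to
    #    offer as one suggestion once the docstring exists -- I still read it,
    #    fix edge cases, and treat it exactly like code I wrote by hand.
    linear_term = min(abs(linear_speed) / max_linear, 1.0)
    angular_term = min(abs(angular_speed) / max_angular, 1.0)
    return max(linear_term, angular_term)

# Quick sanity check
print(comfort_vignette_strength(1.5, 0.0))    # 0.5
print(comfort_vignette_strength(0.0, 180.0))  # 1.0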

Basically, the way Copilot works is that it's just an IntelliSense enhancement that doesn't get in the way. It's a kind of supercharged IntelliSense that may add 1 or 5 or 10 or 100 lines of code it thinks you're about to write next. You can ignore its suggestions and keep typing your own code, the way you want. But then suddenly you notice that GitHub Copilot suggested something great, and you just hit Tab (like you'd do with IntelliSense function-name completes) to see multiple lines of code insert right there, directly at your cursor.

I wouldn't use it in high-security contexts or mission-critical stuff (it can spew insecure code), as some of the spew from Copilot is junk. But the quality is improving very rapidly, and multimodal coding AIs (e.g. a combination of a Copilot AI + a security-hole-checker AI + a porting AI, etc.) will be even better in the future.

A skilled programmer can glance at the Copilot spew, accept a close match of what they were about to code anyway, and then immediately edit it to fix its flaws, saving ~50-100% of the typing time on that block. On average, it's a ~90% keystroke saver on the ~20% of my code that comes from accepted autocompletes (so roughly a 15-20% saving overall), if you use the subscription version properly in languages it has an excellent corpus for. Its C# corpus is not as good as, say, its C++ corpus, but it is a timesaver in all languages when you use it correctly.

An inexperienced programmer will fumble it badly, but if you're a star programmer, you instantly know whether the code is correct or not and simply use it as a typing-time saver. Improving your average code quality + typing speed with AI autocompletes is a new skill now worth training for. Practice correcting the AI's mistakes, and the code ends up looking exactly like what you would have written, except in only 10-50% of the original coding-from-scratch time for that batch of lines.

Longer term, they will combine all the coding-assistant AIs into multimodal coding-assistant AIs that self-check each other -- security-hole-detector AIs, coding-mistake AIs, etc. (For those unaware, multimodal AIs are metaphorically like the different sections of a human brain working in concert, or a human who knows how to use a larger number of tools -- the ability to use a graphing calculator to double-check math, or the reading skills to look up a programming handbook to double-check things, and so on.) OpenAI is already beginning to talk about "multimodal AIs", which is one of the next major "giant" steps for AI.

I imagine this will be part of future versions of GitHub Copilot. Even in my 6 months of subscription, I have seen its quality noticeably improve -- say from ~15% to ~20% acceptance. In other words, 20% of my current code is now lightly modified GitHub Copilot autocompletes, reducing the number of keypresses I have to make to type out code. Between GitHub Copilot autocompletes and traditional IntelliSense autocompletes, my number of keypresses per kilobyte of code is falling rapidly.

If you become good at your function pre-commenting, it auto-suggests code better. A good programmer can already write 25% more code in the same amount of time than before, if you decide to pre-comment your function/method/class before you write it; then Copilot can speed up your coding by 25%+ instead of only 10% or 15% or 20%. Getting into the habit of writing very clear comments before writing your function is not a habit most programmers have (commenting often happens at the end), but you should begin doing so as a good habit from now on -- in order to help AI-based pair programmers "guess" your next line of code vastly better (because, like GPT-3/ChatGPT, it reads at least your draft function-description comment to help guess what code you're about to write next). It has advanced far enough that it even knows what is in the other modules and files, or elsewhere in the same file, so it will correctly call the helper functions/methods/classes you already wrote elsewhere -- a neat Copilot code-awareness feature. But it does require some minor coding-habit changes, like writing comments a little sooner than you normally would, since that helps Copilot become more aware of the purpose of your code line / your function or method / your file / your entire project.

Remember, Copilot doesn't interfere with your programming; it's just like a supercharged real-time IntelliSense system that operates continually, even when you're not typing a function name or API name, and sometimes auto-suggests fairly gigantic code blocks in real time that you might glance at for only one second, only for them to disappear as you continue typing if you're ignoring the IntelliSense-style suggestions. You can rewind easily simply by backspacing to the point where the tantalizing code autocomplete appeared, and spend a few more seconds glancing to see whether it's a timesaver or not. Hit Tab, and a few lines or even half a screenful of AI code inserts at your cursor, as naturally as autocompleting an IntelliSense-detected function/API name. No intervention is required unless you want to accept the current ephemeral code suggestion (shown in the same style as an IntelliSense autocomplete), or force it to suggest 10 other alternatives. If you hate IntelliSense, this is probably not going to be easy, but if you're used to IntelliSense, then Copilot fits like a glove.

There are some controversies about this, and lawsuits from concerned coders not wanting their code vacuumed up as learning material for Copilot, which is understandable. That includes the question of whether it is auto-suggesting public code or not (there's a setting to turn that off and force it to suggest more original code; I've turned it on to be respectful to FOSS authors of widely disagreeing licenses). While you might not want to use AI tools until you're forced to, you'd better still get prepared and use them; this is an emerging Jacquard-loom moment, for the historians amongst y'all... In a decade, coding will be very different, much like the pre-IDE and post-IDE development eras. There is no choice but to adapt.

And witnessing such real-time improvement to code autocompletes within the window of a year is a bit scary. I agree with the people who are warning "get ready or be laid off". It's THAT dangerous/disruptive -- like an "Internet versus no Internet" situation. But don't worry, human programmers won't become obsolete (even if we someday become figurative conductors of an AI coding orchestra).

There's the dystopia scenario (AI writes everything, programmers laid off) and the utopia scenario (game developers on time, on schedule, clocking out at 5:00pm with no overtime needed), but the real story will be somewhere in between. AI is merely a tool that humans wield, for both good and bad, but we can't ignore its emerging role in software development.

Today we already handle more code per programmer than in yesteryear (thanks to IDEs, massively scalable code repos, and the code-management tools we've all come to love that did not even exist in 1982). And the amount of code per developer, alas, will grow exponentially in the era of AI-assisted programming. New forms of compartmentalization and structuring will emerge to keep it all manageable without info overload. Most software developers of the future will eventually need to accept AI-based tools into their workflow -- an innovation as powerful as IDEs and high-level languages replacing yesterday's assembly programming.

Now, how does this indirectly affect VR development:

Even outside VR, today's developers can create gigabyte-sized games with teams that, relative to the amount of content, are far smaller than what a sub-64-kilobyte game for an 8-bit machine once needed. But VR games are, alas, extremely demanding to create when you need an AAA megaproject like Half-Life: Alyx. Remember that VR versions of games tend to be bigger (and take longer to debug and test) than equivalent non-VR games of similar length. This puts a high barrier of entry on VR programming.

But as AI tools make this more manageable for developers, VR programming benefits, which makes it more likely that Alyx-style AAA games can be developed by an indie team rather than only a big corporation. Whether you're converting a non-VR game, porting, or writing from scratch, AI is going to accelerate that more and more as time passes. Towards the end of the decade, indies could be far more able to create fun experiences that don't bomb like the Atari E.T. cartridge. A very disruptive change to the industry.

At the end of the day, things are moving fast enough that, well inside this very (otherwise terrible) decade, giant Holodeck-quality VR games will become bit by bit easier to code / port / debug than they are today. Today the acceptance rate is 20% for a person like me who's VERY picky about how I code. Some people are already hitting 40%. By the end of the decade it may even be 90%, who knows? At 90%, only a tenth of the keystrokes are typed by hand, so code output per developer per week could grow almost 10x by 2030 (or beyond), pulling AAA-quality, VR-sized games into practical economics. Like the boiling frog, don't be caught off guard by this Jacquard-loom moment by waiting 5 years before adopting AI tools.

A funny tangent for this discussion to spin off into, but very relevant to Valve, to Steam, to VR, and to their ambitions of producing a stable standalone PCVR headset -- a complex undertaking that looks very feasible within this decade.

</Slightly Offtopic>
 
What kind of programming do you do? ChatGPT for Unity (C#) hasn't been much of a help for us at work, aside from getting some boilerplate answers I could get on Stack Overflow. I work exclusively in Unity at the moment, so I'm still kind of waiting on it to be worthwhile. Also, without it understanding software architecture and my code base it's not as helpful as it could be, but I'm sure at some point that will be true (not with LLMs, though; a future AI -- LLMs don't understand anything, they're basically interpreters). I'm not sure if it's going to speed up efficiency and get engineers shipping software more quickly, or if companies will keep similar deadlines and lay off more engineers / developers. The best thing I've had it do so far was check my math when I was implementing a bit of trig for spawning some items in a 2D game I'm working on. Copilot seemed less impressive TBH.
I just found the shotgun in Alyx; I didn't realize how absolutely terrified I'd be. I wish I knew what kind of FPS I was getting -- I'll have to do some research into monitoring tools. It seems a little slow from a 6800 XT and 5800X, but I could be wrong.
That's exactly what I used for Alyx (6800 XT & 5800X). I don't know what FPS I got, either, but I had to turn a couple of graphics settings down 1 notch. After that, it was very smooth.
 
That's exactly what I used for Alyx (6800 XT & 5800X). I don't know what FPS I got, either, but I had to turn a couple of graphics settings down 1 notch. After that, it was very smooth.
What headset are you on?
 
That's exactly what I used for Alyx (6800 XT & 5800X). I don't know what FPS I got, either, but I had to turn a couple of graphics settings down 1 notch. After that, it was very smooth.
So there was a console command to stop it from doing dynamic resolution scaling, which causes all sorts of frame-time inconsistencies. Once I did that, I cranked everything up to Ultra and it feels buttery smooth. One other thing I'll say: playing on the HP Reverb G2 makes me wish really good OLED headsets were a thing, for the dark areas. I found the flashlight before I quit for the night, and having grey instead of black is massively immersion-breaking. Still love the headset, but you can tell it's early days for the tech for sure.

Give the wiki a shot for console commands: https://www.pcgamingwiki.com/wiki/Half-Life:_Alyx

Here's the dynamic resolution link: https://www.pcgamingwiki.com/wiki/Half-Life:_Alyx#Disable_dynamic_resolution_scaling
 