Nvidia ACE, generative AI dialogue in games

LukeTbk


NVIDIA ACE Enhanced with Dynamic Responses for Virtual Characters


Video: https://youtu.be/cSSSn10HgZA

The impact in a game seems more obvious than, say, ray tracing. Imagine in Baldur's Gate 3 if your character were not the only strangely silent one, but instead spoke in a way that matched its custom race, gender, personality, and current state.

The quality of the delivery is not there yet (though a big studio can often do a better job than NVIDIA putting together a "quick" demo), but the facial and lip animation is already close enough that by 2025 it could be good enough.

If the demo does not cheat on the lag between question and answer, that is quite impressive speed.

The pretrained models seem to be freely available through NeMo on Hugging Face:
https://github.com/NVIDIA/NeMo
https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia

And it seems that while training your own models requires CUDA, inference can be done on CPU, AMD GPUs, Apple silicon, etc.
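
For anyone curious, here is a minimal sketch of what CPU-only inference with one of those pretrained NeMo checkpoints could look like (assumptions: nemo_toolkit is installed, "stt_en_conformer_ctc_small" is one of NVIDIA's published ASR checkpoints, you have a local sample.wav, and the exact transcribe() signature varies between NeMo versions):

```python
# Minimal sketch: CPU-only inference with a pretrained NeMo checkpoint.
# Assumes: pip install "nemo_toolkit[asr]" and a local audio file sample.wav.
import nemo.collections.asr as nemo_asr

# Downloads the checkpoint on first use (hosted on NGC / Hugging Face).
model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="stt_en_conformer_ctc_small"
)

# No CUDA required for inference: keep the model on CPU.
model = model.cpu().eval()

# Transcribe a local audio file (argument name differs across NeMo versions).
print(model.transcribe(["sample.wav"]))
```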

P.S. I did not find an existing thread about it after searching a bit; feel free to move this.
 
Like most LLM-based AIs, it will get to a point where it's somewhat passable for instanced character interactions. It will be a looong time, however, before it can emulate the multi-layered semantics, character development, and overall plot arcs only humans can write. For that, you need a multi-modal approach that fits into a narrative structure -- not something we're close to doing programmatically or computationally (at least for edge computing).
 