NVIDIA Unveils RTX Voice, AI-based Audio Noise-Cancellation Software

Might be cool, I'll have to check it out.

"Perhaps the biggest gripe about attending office calls and meetings from home these days is the background noise - everyone's home. NVIDIA developed an interesting new piece of free software that can help those on desktops cut out background noise in the audio, called RTX Voice, released to web as a beta. The app uses AI to filter out background audio noise not just at your end, but also from the audio of others in your meeting as you receive it (they don't need the app running on their end). The app leverages tensor cores, and requires an NVIDIA GeForce RTX 20-series GPU, Windows 10, and GeForce drivers R410 or later. RTX Voice runs in conjunction with your meetings software. Among the supported ones are Cisco Webex, Zoom, Skype, Twitch, XSplit, OBS, Discord, and Slack. For more information and FAQs, visit the download link.

DOWNLOAD: NVIDIA RTX Voice beta"


https://www.techpowerup.com/265906/...ce-ai-based-audio-noise-cancellation-software
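
For anyone curious what noise suppression like this does under the hood: NVIDIA's model is a proprietary deep network running on the GPU's tensor cores, so the sketch below is only a rough, much simpler analogue - classic spectral gating, where you estimate a noise profile and attenuate frequency bins that don't rise above it. Filenames and thresholds here are made up for illustration.

```python
# Very rough illustration of noise suppression by spectral gating: estimate a
# noise profile from the start of the clip, then attenuate frequency bins that
# don't rise above it. NVIDIA's actual model is a learned network, not this.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

def spectral_gate(samples, rate, noise_seconds=0.5, threshold_db=6.0):
    """Suppress stationary background noise in a mono float signal."""
    _, _, spec = stft(samples, fs=rate, nperseg=1024)  # hop is 512 samples
    mag = np.abs(spec)

    # Noise floor per frequency bin, estimated from the first noise_seconds
    # of audio (assumed to contain no speech).
    noise_frames = max(1, int(noise_seconds * rate / 512))
    noise_floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # Keep bins that clear the floor by threshold_db; heavily attenuate the rest.
    gain = np.where(mag > noise_floor * 10 ** (threshold_db / 20), 1.0, 0.1)
    _, cleaned = istft(spec * gain, fs=rate, nperseg=1024)
    return cleaned

if __name__ == "__main__":
    rate, data = wavfile.read("noisy_mic.wav")   # hypothetical input file
    mono = data.astype(np.float32)
    if mono.ndim > 1:
        mono = mono.mean(axis=1)                 # downmix stereo to mono
    mono /= max(np.abs(mono).max(), 1e-9)        # normalize to [-1, 1]
    wavfile.write("cleaned_mic.wav", rate, spectral_gate(mono, rate).astype(np.float32))
```

The real thing is trained on large amounts of speech and noise, which is why it can separate a voice from keyboard clatter far better than a static gate like this.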
 
I really want to give this a shot. It sounds very useful and interesting.

I'm also afraid to give this a shot. When I do background replacements in Zoom, which are GPU-accelerated, my machine doesn't give a BSOD. Instead, it does an instantaneous straight-to-black, PSU-clicking, full-on hard reset.

So I think I'm kind of mixed on this.
 
Very cool! I am sure much more AI software will be coming for RTX GPUs as time passes.
 
Tested this out, and it works surprisingly well. I have blue switches on my keyboard and they disappear when using the mic now. The only thing is it literally crashed one of my games, and I was confused until I turned it off and boom, the game worked again. The RTX portion of my GPU is finicky. Also trying out Minecraft RTX with this for streaming.
 
Oh man, Jensen can listen to all my Apex Legends games if this works.

For some reason, all my best teammates have an open mic 6 inches from their MX Blues...
 
Will be trying this tomorrow for work. If it works well, it's time to update my work laptop.
 
Oh man, Jensen can listen to all my Apex Legends games if this works.

For some reason, all my best teammates have an open mic 6 inches from their MX Blues...
I set up my mic that way, lol. While the cardioid profile has done some work in keeping the KB noises out, this seems to work far better. Working on replacing the static mic stand with a boom arm.
 
In theory the idea isn't bad, provided that the AI processing takes place locally (i.e. entirely on your system) and, perhaps more importantly, does not send out or leak data to others beyond a reasonable amount of telemetry that can easily be opted into if desired.

That said, in practice this is more Nvidia proprietary garbage, just like everything from G-Sync to PhysX, CUDA, and the myriad of other projects Nvidia seems to prefer. The fact this works not only on Nvidia hardware alone, but exclusively on RTX cards, is curious - I wonder how much of this is a technical necessity vs. an intentional latest-thing-only design. Could it work on GTX Nvidia hardware, AMD hardware, etc., perhaps with lower efficiency (or with small modifications to the design)? Would it be directly applicable to the upcoming AMD RDNA2 cards this year, and to the XSX / PS5 powered by them, both said to offer hardware raytracing support?

Ultimately though, a hardware/platform-agnostic, preferably open source/spec way to do what is described here would be of value. However, we don't need yet more walled gardens, and Nvidia seems to love creating them as well as adding more walls when they are able.
 
Discord actually launched a beta of their own software-based noise cancellation. My friends and I were messing around with it the other day, and it's honestly pretty impressive. It does change the way people sound slightly (voices sound a bit deeper imo), but it does an amazing job at cancelling out noise. I was able to talk while spamming my keys excessively hard, and my friends said they were only able to hear my voice. There's a toggle for it in Discord's voice settings.

https://support.discordapp.com/hc/en-us/articles/360040843952
 
In theory the idea isn't bad, provided that the AI processing takes place locally (i.e. entirely on your system) and, perhaps more importantly, does not send out or leak data to others beyond a reasonable amount of telemetry that can easily be opted into if desired.

That said, in practice this is more Nvidia proprietary garbage, just like everything from G-Sync to PhysX, CUDA, and the myriad of other projects Nvidia seems to prefer. The fact this works not only on Nvidia hardware alone, but exclusively on RTX cards, is curious - I wonder how much of this is a technical necessity vs. an intentional latest-thing-only design. Could it work on GTX Nvidia hardware, AMD hardware, etc., perhaps with lower efficiency (or with small modifications to the design)? Would it be directly applicable to the upcoming AMD RDNA2 cards this year, and to the XSX / PS5 powered by them, both said to offer hardware raytracing support?

Ultimately though, a hardware/platform-agnostic, preferably open source/spec way to do what is described here would be of value. However, we don't need yet more walled gardens, and Nvidia seems to love creating them as well as adding more walls when they are able.

Most likely they were researching new ways they could use RTX cores. It's probably a core part of the design.
 
I tried it in Discord last night; it works really well. I like the sound quality of the RTX version better than the Discord (Krisp) version.

If you set it to filter output devices as well, it actually removes background music from narrated YouTube videos - basically everything but a human voice - so don't go gaming with that option on, haha.

 
Oh, I can see the IT requests coming in now. Everyone will "need" a new laptop or video card with an RTX card, otherwise there is no way they could possibly concentrate on Zoom calls.
 
In theory the idea isn't bad, provided that the AI processing takes place locally (i.e. entirely on your system) and, perhaps more importantly, does not send out or leak data to others beyond a reasonable amount of telemetry that can easily be opted into if desired.

That said, in practice this is more Nvidia proprietary garbage, just like everything from G-Sync to PhysX, CUDA, and the myriad of other projects Nvidia seems to prefer. The fact this works not only on Nvidia hardware alone, but exclusively on RTX cards, is curious - I wonder how much of this is a technical necessity vs. an intentional latest-thing-only design. Could it work on GTX Nvidia hardware, AMD hardware, etc., perhaps with lower efficiency (or with small modifications to the design)? Would it be directly applicable to the upcoming AMD RDNA2 cards this year, and to the XSX / PS5 powered by them, both said to offer hardware raytracing support?

Ultimately though, a hardware/platform-agnostic, preferably open source/spec way to do what is described here would be of value. However, we don't need yet more walled gardens, and Nvidia seems to love creating them as well as adding more walls when they are able.

Totally useless rant. Why wouldn't nVidia create exclusive technology for its users? It would be foolish to open source this. It works with some of the most popular programs and works quite well.
 
Totally useless rant. Why wouldn't nVidia create exclusive technology for its users? It would be foolish to open source this. It works with some of the most popular programs and works quite well.

Oh, bravo! Well played, sir, a capital jest!
 
Discord actually launched a beta of their own software-based noise cancellation. My friends and I were messing around with it the other day, and it's honestly pretty impressive. It does change the way people sound slightly (voices sound a bit deeper imo), but it does an amazing job at cancelling out noise. I was able to talk while spamming my keys excessively hard, and my friends said they were only able to hear my voice. There's a toggle for it in Discord's voice settings.

https://support.discordapp.com/hc/en-us/articles/360040843952
I just started using this, and thank you! My guildmates hate my Cherry MX Blues, and this works wonders!
 
I tried it in Discord last night; it works really well. I like the sound quality of the RTX version better than the Discord (Krisp) version.

If you set it to filter output devices as well, it actually removes background music from narrated YouTube videos - basically everything but a human voice - so don't go gaming with that option on, haha.

Did you notice any impact in game, or since these are the RT cores at work, does it not matter?
 
Did you notice any impact in game, or since these are the RT cores at work, does it not matter?

Since the RT cores do absolutely nothing in most gaming use cases, it shouldn't affect the vast majority of games.
 
I just started using this, and thank you! My guildmates hate my Cherry MX Blues, and this works wonders!

Just remember you’ll have to turn it on every time you start Discord. It will default to off if you close the program and reopen it.
 
Since the RT cores do absolutely nothing in most gaming use cases, it shouldn't affect the vast majority of games.
Did you notice any impact in game, or since these are the RT cores at work, does it not matter?
Are the RT cores even used by this? It seems to be an AI setup, which can be done on shaders (no activity reported there) or tensor cores (which I cannot trace right now, but I'd assume these are what's being used). Nvidia even has a site to upload voice samples to help train their network.

https://broadcast.nvidia.com/feedback?sdk=voice
 
Did you notice any impact in game, or since these are the RT cores at work, does it not matter?

I haven't seen any performance difference.
BUT... idle temp is up, as this forces the GPU to stay at base boost as long as RTX Voice is enabled - sorta like having "Power Management Mode" set to "Maximum Performance" in the Nvidia Control Panel.
Not that big a deal for me, but maybe to some. 33C vs. 43C idle temp difference for my RTX 2070. No temp difference under load that I can tell.
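
If you want to see that clock behavior for yourself, an easy check is to poll nvidia-smi while toggling RTX Voice on and off. Rough sketch (the query fields are standard nvidia-smi names; the interval is arbitrary):

```python
# Poll GPU clocks/temps every 2 s; run once with RTX Voice enabled and once
# with it disabled, and compare the idle readings.
import subprocess
import time

FIELDS = "clocks.sm,temperature.gpu,power.draw,utilization.gpu"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())
    time.sleep(2)
```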
 
Has anyone tried it with multiple GPUs yet? I wonder if there is a way to set which GPU it uses, like when you configure PhysX. I don't want a 10% hit to my framerate, so it would be awesome if I didn't have to use my main GPU.
 
I'm running it on my work laptop with a Quadro T1000 and it works great for Teams calls. I can be clicking my mouse and pounding my keyboard getting actual work done during meetings and no one hears it or all the other loud background noise going on in my house.
 
I'm running it on my work laptop with a Quadro T1000 and it works great for Teams calls. I can be clicking my mouse and pounding my keyboard getting actual work done during meetings and no one hears it or all the other loud background noise going on in my house.

Thanks for the feedback and impressions as a user. This is great insight for folks who haven't tried it out.
 
Installed it on my VR streaming PC and my main PC, one with a 1080 Ti and one with a 1080. Works great on both; I don't see more than 5% GPU usage with it running. The only thing I've really noticed is that if you leave it enabled in the background, it keeps the GPU clocked up to non-turbo 3D speeds.

I have a small mp3 of my first try with it here.
 