Uncensored A.I. chat software that you can run locally on your PC

ZodaEX

Has anyone here tried one of these local uncensored A.I. chat models? I'm seeing a few pop up online, but am curious if it's worth the effort to set one up yet or if it may need a year or two more to advance.
 
It depends on your hardware and what you are looking to get out of it. There is quite a bit of discussion here: https://www.reddit.com/r/LocalLLaMA/
Popular uses include coding assistants and role-playing chats, and there are now options that feed local files in as a knowledge source (retrieval-augmented generation).

The biggest factor right now is video card RAM (bonus points if you have a newer Apple M-series device, since its unified memory is shared between the CPU and GPU).
The more GPU memory you have, the faster the LLM runs and the larger (and generally smarter) the models you can use.
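
To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the 1.2x overhead factor for the KV cache and activations is my own assumption, not a measured figure):

```python
# Rough estimate of the memory footprint of a quantized model.
# overhead=1.2 is an assumed fudge factor for KV cache and activations;
# real usage varies with context length and backend.

def estimate_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate weights-plus-runtime memory in GB."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb * overhead

# A 7B model at 4-bit quantization fits in an 8GB card with room to spare:
print(f"7B @ 4-bit:  ~{estimate_gb(7, 4):.1f} GB")   # ~4.2 GB
# A 70B model at 4-bit wants ~42GB, hence splitting across GPU and system RAM:
print(f"70B @ 4-bit: ~{estimate_gb(70, 4):.1f} GB")  # ~42.0 GB
```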

I use this single executable program to run models locally: https://github.com/LostRuins/koboldcpp
A typical model file is anywhere from 8GB to 40GB, depending on the parameter count and how aggressively it's quantized.
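
Once it's running, koboldcpp serves a KoboldAI-compatible HTTP API (port 5001 by default), so you can script against it. A minimal sketch; the model filename, layer count, and sampler settings below are placeholders, not recommendations:

```python
# Talk to a running koboldcpp instance over its KoboldAI-compatible API.
# Launch it first with something like:
#   python koboldcpp.py --model mistral-7b.Q4_K_M.gguf --usecublas --gpulayers 35
import requests

payload = {
    "prompt": "Explain what GPU layer offloading does, in one sentence.",
    "max_length": 80,     # tokens to generate
    "temperature": 0.7,
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```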

Nvidia just dropped this, but I have no idea how good it is: https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generative-ai/
It looks like it is based on Mistral 7B, which is censored, but I wouldn't be surprised if people get it to run other models too.
 
Nvidia just dropped this but no idea how good it is
Just tried it. It can read your local doc, txt, and pdf files and gives pretty good answers, but it seems my 3070's 8GB of VRAM is close to the limit, and I was not impressed by the response speed on my system at all (not faster than bingGPT, even slower, which makes sense: the local compute load is much bigger than the little bit of text sent over a fast Internet connection). It was using almost 100% of the GPU.
 
I have a 3070 Ti and things ran decently enough. I threw in some old D&D 5E books and it was able to pull items from stat blocks and tables without much trouble. The non-OCR'd book was a bit of a struggle. Might be a good lookup helper for a DM.
I also have 64GB of RAM, so it becomes a matter of trading speed for quality. I can technically run a 70B model, but it's an exercise in patience.
 
From the looks of it, I think the new NVIDIA thing is basically just an interface they made for the same models you would get from Hugging Face. I haven't actually tried it, though. I've used https://github.com/oobabooga/text-generation-webui, which works pretty well depending on the models you download. I only really fool around with them a bit; I've never loaded documents or used one as a real assistant.
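
For what it's worth, text-generation-webui can also be scripted: launched with its --api flag, it exposes an OpenAI-compatible endpoint (port 5000 by default in recent versions). A rough sketch, assuming that extension is enabled and a model is already loaded in the UI:

```python
# Query text-generation-webui's OpenAI-compatible chat endpoint.
# The server uses whatever model you loaded; no model name is required here.
import requests

payload = {
    "messages": [{"role": "user", "content": "Summarize this thread in one line."}],
    "max_tokens": 60,
}
resp = requests.post("http://127.0.0.1:5000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```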

If you want any privacy at all, running them on a machine at home will be the way to go as they become more useful. There are already companies building machines just to run AI at home.
 