paradoxical
Limp Gawd
- Joined
- May 5, 2013
- Messages
- 320
I'll let someone else clue you in to the multiple hilarious things in your above post since you'll discount it if I do. Are you drinking tonight, by chance?
> Lots of devices now have NPUs, which creates an opportunity to run AI software on the client, for example auto-correct. But if the ML models require lots of memory, then you may need 16 GB/32 GB of RAM etc. That could send DRAM prices soaring.

16 GB of RAM has been quite standard for more than 10 years now, but a lot of software still has quite modest requirements. On phones, though, that much would be pushing it (I think only some rare Android models go that far).
That's where NVIDIA's work with INT4 becomes huge: CUDA and its INT4 inference AI models use 70% less memory than that same model on OpenML FP16, while being multiple times faster.
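The memory arithmetic behind that claim is easy to sanity-check. A minimal sketch (the 7B parameter count is just an illustrative assumption; real quantized models also carry overhead for scale/zero-point metadata, which is roughly why quoted savings land near 70% rather than the raw 75%):

```python
def model_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Raw weight storage in GB: parameters x bits, converted to bytes then GB."""
    return num_params * bits_per_param / 8 / 1e9

params = 7e9  # hypothetical 7B-parameter model

fp16 = model_memory_gb(params, 16)  # 14.0 GB at 2 bytes per weight
int4 = model_memory_gb(params, 4)   # 3.5 GB at half a byte per weight

savings = 1 - int4 / fp16           # 0.75, i.e. 75% less before quantization overhead
print(f"FP16: {fp16} GB, INT4: {int4} GB, saved: {savings:.0%}")
```

So a model that needs a 16 GB machine at FP16 fits comfortably in a phone-class memory budget at INT4, which is the whole client-side angle.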
> I use AI to add subtitles to foreign porn. I watch for the story /s

Not the hero we asked for, but the hero we need.
Modern problems need modern solutions.
> Our species does seem to have a really good track record of finding innovative ways to get rid of jobs, yet requiring more to be created as a result. We've been innovating people's jobs out of existence for thousands of years. Yet we still all have jobs. Maybe this time will be different.

Jevons paradox. The only thing I have seen beat it was LEDs.