The last time I needed quad+ precision numbers, I translated an arbitrary-precision library. It worked great, except it was extremely slow. There's a trick for getting close to quad precision using two 64-bit floats ("double-double" arithmetic): you approximate the value with one float and use the other to store the rounding error.
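A minimal sketch of that trick, using Knuth's TwoSum error-free transformation (the function names and the simplified `dd_add` here are illustrative, not from any particular library):

```python
def two_sum(a, b):
    """Knuth's TwoSum: return (s, err) where s = fl(a + b)
    and s + err equals a + b exactly."""
    s = a + b
    bv = s - a
    err = (a - (s - bv)) + (b - bv)
    return s, err

def dd_add(x, y):
    """Add two double-double values, each an (hi, lo) pair whose true
    value is hi + lo. A simplified sketch without full renormalization,
    not a production implementation."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]          # fold in the low-order words
    return two_sum(s, e)      # renormalize into a new (hi, lo) pair

# 1e-17 is too small to survive adding to 1.0 in a plain double...
assert 1.0 + 1e-17 == 1.0
# ...but the double-double pair keeps it alive in the low word:
hi, lo = dd_add((1.0, 0.0), (1e-17, 0.0))
```

Real implementations (e.g. Bailey's QD library) add more renormalization steps and multiplication via FMA, but this is the core idea: the error of each float operation is itself exactly representable as a float, so you can carry it along.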
Ironically, within the past 10 years there's been more demand for *lower*-precision floating-point hardware. GPUs have added 16-bit and even 8-bit float support, mainly because they don't take up much space and you can pack a lot of them on a die. Neural networks don't need crazy high precision, but training LLMs requires tons of floats.
128-bit floats are a nice idea, but not exactly in demand right now.
Not something I use every day, but when you need it, you need it.