Following up on its "Perspective" hate speech filtering experiment from 2017, Jigsaw, one of Alphabet's subsidiaries, recently released Tune, a machine learning-powered tool designed to filter out "toxic" comments on high-traffic sites. Out of curiosity, I downloaded the extension in a fresh Chrome install and found that it features a virtual knob that lets users tune the "volume" of the comment sections on YouTube, Facebook, Twitter, Reddit, and Disqus. Twisting the knob gradually filters out more and more comments in real time. As the developers note, Tune definitely misses some nasty comments while hiding others that aren't particularly "toxic" at all, but based on my quick test with some controversial YouTube videos, the sheer variety of language it can seemingly interpret is remarkable. In the developers' own words: "The machine learning powering Tune is experimental. It still misses some toxic comments and incorrectly hides some non-toxic comments. We're constantly working to improve the underlying technology, and users can easily give feedback right in the tool to help us improve our algorithms. Tune isn't meant to be a solution for direct targets of harassment (for whom seeing direct threats can be vital for their safety), nor is Tune a solution for all toxicity. Rather, it's an experiment to show people how machine learning technology can create new ways to empower people as they read discussions online."
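For readers curious about the mechanics, the knob behaves like a simple score threshold: each comment gets a toxicity score, and the knob's position decides how high a score is still shown. Below is a minimal TypeScript sketch of that idea. To be clear, this is my reconstruction, not Jigsaw's published code: I'm assuming Tune scores comments with something like the public Perspective API's TOXICITY attribute, and the `scoreComment` helper, the `ScoredComment` shape, and the knob-to-threshold mapping are all hypothetical. The request and response shapes for Perspective follow Google's public API documentation.

```typescript
// Sketch of threshold filtering in the spirit of Tune's volume knob.
// Hypothetical reconstruction; only the Perspective API request/response
// shapes come from Google's public docs.

interface ScoredComment {
  text: string;
  toxicity: number; // 0 (likely fine) to 1 (likely toxic)
}

// Hypothetical helper: fetch a TOXICITY score for one comment from the
// public Perspective API. The caller supplies their own API key.
async function scoreComment(text: string, apiKey: string): Promise<number> {
  const url = `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${apiKey}`;
  const resp = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  const data = await resp.json();
  return data.attributeScores.TOXICITY.summaryScore.value;
}

// The "volume" knob: 0 hides nearly everything, 1 shows everything.
// Comments scoring above the current knob setting are filtered out.
function applyVolumeKnob(
  comments: ScoredComment[],
  knob: number
): ScoredComment[] {
  return comments.filter((c) => c.toxicity <= knob);
}
```

A real extension would apply this against the DOM of each supported site as comments load; the point of the sketch is just the score-plus-threshold shape, which also explains the failure modes the developers describe: any single scalar cutoff will inevitably pass some nasty comments while hiding some benign ones.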