Facebook Using AI to Scan For Suicidal Posts

DooKey

Facebook has rolled out a new AI that scans for suicidal posts across its platform, excluding the EU because of privacy laws. Supposedly this will catch something before a person reports a post and will then alert a moderator, who will contact first responders. All of this sounds innocent enough, but what happens when they start using AI to scan for other types of comments? I think this just shows that Big Brother is watching us more and more all of the time. Just say no to putting your personal thoughts out there.

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects to the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.
 
If the plan is to find and delete, that may be a bad idea. When someone posts about contemplating suicide, there is often an internal conflict that is calling out for help from friends and family. If they truly wanted to die and had no internal conflict, they would be less likely to announce it.
 
"Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his own cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of Earth. This very kindness stings with intolerable insult. To be "cured" against one's will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classified with infants, imbeciles, and domestic animals." - C.S. Lewis
 
As long as the first responders show up with guns drawn ready to off someone in the name of officer safety, I'm sure it will all work out in the end.
 
"Start using AI to scan for other types of comments"? You think they haven't? Of course they are currently actively doing exactly this, and using it to determine what advertisements to push at you.
 
You guys are being especially snarky today. If this works like it should, it's a good thing. That's a big if. If someone posts something that triggers this on Facebook, it's the same as writing it in a diary, making a weird phone call, or making strange comments in person: it's someone reaching out for help without actually asking for it.

I am no expert, but there are phrases and ways of speaking, or writing, that indicate a person is seriously considering suicide and quietly wants someone to help them. If their mind is already made up, they may not post on Facebook at all and will just do the deed with whatever method they had planned.

Facebook is ALREADY super-analyzing EVERYTHING you post, from mood to energy level. They know how long you look at certain posts, and everything they do is designed to keep you in the system longer. That's just a fact. If they can do a small bit of good with it as well, then great.
 
I am no expert, but there are phrases and ways of speaking, or writing, that indicate a person is seriously considering suicide and quietly wants someone to help them. If their mind is already made up, they may not post on Facebook at all and will just do the deed with whatever method they had planned.

As someone who works in a field that touches on mental health, I can confirm that there are almost always signs in the days and even weeks leading up to someone attempting self-harm. People show or tell these changes to those around them, including on social media, which over the years has climbed the list of places where it's most likely to happen. If social media companies can program AI to detect these changes and the way these people often speak, it could save countless lives.
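To make the idea concrete, here is a minimal sketch of phrase-based flagging. The phrase list, scoring, and threshold are all hypothetical; a production system like Facebook's would use a trained classifier over far richer signals, not a hand-written pattern list.

```python
import re

# Hypothetical risk phrases for illustration only; not a clinical
# instrument and not how Facebook's actual system works.
RISK_PATTERNS = [
    r"\bno reason to go on\b",
    r"\bcan'?t take (it|this) anymore\b",
    r"\bbetter off without me\b",
]

def risk_score(post: str) -> int:
    """Count how many risk patterns appear in a post (toy heuristic)."""
    text = post.lower()
    return sum(1 for pattern in RISK_PATTERNS if re.search(pattern, text))

def flag_for_review(post: str, threshold: int = 1) -> bool:
    """Route posts matching at least `threshold` patterns to a human reviewer."""
    return risk_score(post) >= threshold
```

The key design point is that the AI only prioritizes posts for human review; a moderator, not the model, decides whether to contact first responders.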
 
Will Facebook get sued if they miss a warning sign, even a blatant one? If they say this is part of their service/interface, how much liability will they carry? And if false positives send first responders out over and over again for nothing, can they be charged? What records will be kept on a person, and how available will they be, especially for false or inaccurate reports? Who would have access, such as employers running a background check? I know our company's background checks include Facebook accounts.

How would you protect against something like "I'd rather take a gun to my head than . . ." or "Jumping off the bridge here . . ."? These are just expressions that can have zero bearing on suicidal thoughts but could be flagged, and now you have folks knocking on your door asking who you are, how you are, and why you commented . . .
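The idioms above are exactly where naive matching falls over. A toy sketch (keyword list invented for illustration) shows how substring matching can't tell figurative speech from intent:

```python
# Naive substring matching flags idioms and literal statements alike.
NAIVE_KEYWORDS = ["gun to my head", "jumping off the bridge"]

def naive_flag(post: str) -> bool:
    """Return True if any keyword appears anywhere in the post."""
    text = post.lower()
    return any(keyword in text for keyword in NAIVE_KEYWORDS)
```

Both "I'd rather take a gun to my head than sit through that meeting" and a sincere cry for help get flagged identically, which is why any real system needs context modeling and human review before anyone knocks on a door.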

Something like this, once implemented, would be real easy to repurpose to scan for terrorist-type communications. Still, in the end one needs to know what they are signing up for, stay on top of any changes, and decide if it is worth it.
 
They can't catch fake news, so what makes them think they can all of a sudden catch suicides? How about instead of this, they let me block keywords from my news feed? Give me something I actually want.
 
As long as the first responders show up with guns drawn ready to off someone in the name of officer safety, I'm sure it will all work out in the end.

Ideally the AI will be sufficiently advanced by then to automatically make a "1 like = 1 prayer" post after the officers waste the family chihuahua.
 