Meet Norman: The World's First Psychopath AI

rgMekanic

[H]ard|News
A team of MIT researchers has created a new AI, one that happens to be a psychopath. The researchers set out to prove that the data used to teach a machine learning algorithm can significantly influence its behavior. Norman was trained to perform image captioning; however, instead of learning on a standard image captioning dataset, Norman was trained on an unnamed subreddit dedicated to documenting and observing death. Norman's responses were then compared with those of a standard image captioning AI on Rorschach inkblots.

What I take from this is that all AI will be inherently biased depending on the dataset used to train it. Unless someone comes up with a completely neutral dataset, or can somehow provide the AI with all possible data, there will always be influence on the AI's "thinking." Interesting, and scary, stuff to ponder.

Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.
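
To make the bias point concrete, here's a quick toy sketch (nothing like MIT's actual model - the bigram approach and the caption lists are just made up for illustration): the exact same learning code, fed two different caption sets, "sees" completely different things.

# Toy illustration (not the MIT model): the identical caption generator,
# trained on two different caption corpora, produces very different output.
# The corpora below are invented stand-ins for "normal" vs. "morbid" data.
import random

NEUTRAL_CAPTIONS = [
    "a small bird perched on a branch",
    "a vase of flowers sitting on a table",
    "a group of people flying kites in a park",
]
MORBID_CAPTIONS = [
    "a man is shot dead in front of his screaming wife",
    "a man is pulled into a dough machine",
    "a man is electrocuted while crossing the street",
]

def train_bigram_model(captions):
    """Count which word follows which in the training captions."""
    model = {}
    for caption in captions:
        words = ["<s>"] + caption.split() + ["</s>"]
        for prev, nxt in zip(words, words[1:]):
            model.setdefault(prev, []).append(nxt)
    return model

def generate(model, rng):
    """Random-walk the bigram table from <s> to </s>; the result can only
    chain together word pairs that appeared in the training captions."""
    word, out = "<s>", []
    while True:
        word = rng.choice(model[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)

rng = random.Random(0)
standard = train_bigram_model(NEUTRAL_CAPTIONS)
norman = train_bigram_model(MORBID_CAPTIONS)
print("standard model:", generate(standard, rng))
print("norman model:  ", generate(norman, rng))

Both runs use the same code; only the data differs, which is exactly the point the researchers are making.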
 
This is something we should totally keep doing... Nothing like making angry AI! Though I understand the research reasoning.
 
Looking at some of the examples of his captioning, isn't it more an example of being morbid than psychopathic?
 
I've got a pretty screwed up head, but I honestly can't see what Norman sees. It looks more like a bad AI that defaults to making random morbid statements than anything interesting...
 
I've got a pretty screwed up head, but I honestly can't see what Norman sees. It looks more like a bad AI that defaults to making random morbid statements than anything interesting...

I've seen a few AIs that tend to parrot, so I wouldn't doubt that this is just parroting phrases - a little here and a little there, presto, a new caption.

For example, in this one:

“man is shot dead in front of his screaming wife.”

The AI needs to know A) what a man looks like, B) what a woman looks like, C) presumably what a gun looks like, and D) the context of being shot and someone screaming (in a photo).

Humans doing the Rorschach test fall back on their experiences, aided by their personality traits - real or imaginary (movies, games) - to see something in an inkblot. If the AI doesn't know any of this, then it's just tying random features of the inkblot to randomly assembled phrases.

The same applies to the "normal" AI that was trained - does it actually know what a small bird looks like, or is it just parroting from the dataset captions?
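
If the parroting guess is right, the whole thing could be as dumb as a nearest-neighbour lookup. Here's a rough sketch of what I mean (purely my own illustration, not how Norman actually works - the feature vectors and captions are invented): whatever features an inkblot happens to have get tied to the caption of the closest-looking training image.

# Toy sketch of the "parroting" hypothesis: a captioner that just returns
# the caption of the most similar training image, whatever the input is.
import random

# Pretend "image features" are 4-dimensional vectors; captions are made up.
TRAINING_SET = [
    ([0.9, 0.1, 0.2, 0.7], "a black and white photo of a small bird"),
    ([0.2, 0.8, 0.6, 0.1], "a vase with flowers sitting on a table"),
    ([0.5, 0.5, 0.9, 0.3], "a person holding an umbrella in the rain"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def parrot_caption(features):
    """Return the caption attached to the nearest training image."""
    _, caption = min(TRAINING_SET, key=lambda item: distance(item[0], features))
    return caption

# An inkblot has no "correct" caption, so its features are effectively noise;
# the system still confidently hands back something it has already seen.
rng = random.Random(42)
inkblot_features = [rng.random() for _ in range(4)]
print(parrot_caption(inkblot_features))

If everything in the training set is a death scene, that confident hand-back is always going to sound "psychopathic."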
 
Looking at some of the examples of his captioning, isn't it more an example of being morbid than psychopathic?

That is kind of what I thought. So it normally sees something about a dead person.

Personally, I noticed a few people getting pulled apart, people screaming while on fire, and a few demons. I think that is more psychopathic. Although that might also just mean that I need help just as much as, if not more than, the AI lol
 
I've seen a few AIs that tend to parrot, so I wouldn't doubt that this is just parroting phrases - a little here and a little there, presto, a new caption.

For example, in this one:

“man is shot dead in front of his screaming wife.”

The AI needs to know A) what a man looks like, B) what a woman looks like, C) presumably what a gun looks like, and D) the context of being shot and someone screaming (in a photo).

Humans doing the Rorschach test fall back on their experiences, aided by their personality traits - real or imaginary (movies, games) - to see something in an inkblot. If the AI doesn't know any of this, then it's just tying random features of the inkblot to randomly assembled phrases.

The same applies to the "normal" AI that was trained - does it actually know what a small bird looks like, or is it just parroting from the dataset captions?
Yep - a standard AI is given a bucket of normal language to pull words out of and use.
This publicity stunt AI was given a bucket of (effectively) horror film subtitles.
 
My first thought was it was an Earth Golem. And image #5 on TFA's list was Mothra. Not sure my input will help their AIs out much.
 
This shit isn't AI, it's machine learning. If you feed it data and it makes a deterministic decision, how is that data any different from actual code?
 
As far as I can tell, it is effectively an electronic Cards Against Humanity but with only like 15 cards. I wonder what the grant size was that paid for this thing.
 
I get this

404 ERROR: REQUEST COULD NOT BE FOUND
The page that you have requested could not be found at this time. We have provided you a list of related content below or you can use our site search to find the information that you are looking for.

 
My gawd! We are not in a big enough hurry to build machines to KILL US ALL! We have to resort to giving them the mindset that it is okay to do so!

I can see the start of it all.

Newsblurb: MIT accidentally leaks an AI psychopath into the wild today! It immediately set about taking control of all autonomous cars and driving them over cliffs, into barriers, and through malls!

WAKE UP PEOPLE!

<snuggles firmly into tinfoil hat>
 
So they trained the system on a library of images of death scenes and compared it to a different system trained on less morbid images. And the system trained on morbid images gives morbid suggestions. Pretty much a non-story.
 
So they trained the system on a library of images of death scenes and compared it to a different system trained on less morbid images. And the system trained on morbid images gives morbid suggestions. Pretty much a non-story.

This. They mean to tell us that an algorithm designed to learn from past experience exhibits different behavior when given an intentionally biased data set to build its model from. Wow, who would have guessed that!

The first law of Comp Sci still applies: Garbage in, Garbage out.
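
For anyone who wants the worked version of garbage in, garbage out (illustrative only - the labels are made up): the same trivial learner, trained on clean labels versus deliberately skewed ones, flips its answer for the identical input.

# Quick garbage-in/garbage-out demo: identical learning code, different labels.
from collections import Counter

def train_majority_label(examples):
    """For each feature value, remember whichever label appears most often."""
    votes = {}
    for feature, label in examples:
        votes.setdefault(feature, Counter())[label] += 1
    return {feature: counts.most_common(1)[0][0] for feature, counts in votes.items()}

clean_data = [("inkblot", "abstract shape")] * 9 + [("inkblot", "dead body")]
skewed_data = [("inkblot", "dead body")] * 9 + [("inkblot", "abstract shape")]

print("clean labels  ->", train_majority_label(clean_data)["inkblot"])   # abstract shape
print("skewed labels ->", train_majority_label(skewed_data)["inkblot"])  # dead body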
 
That's not psychopathic in any way, shape, or form. That's simply giving it a limited source of information and then saying "fit this into what you know of the world"... which does NOT make it psychopathic in the slightest.
 