This Artificial Intelligence is Designed to be Mentally Unstable

DooKey

Ok, first we hear AI is going to destroy us all. Next, we hear AI is just going to enhance us. Now this piece says we're designing some AIs to be mentally unstable. Is that really a smart thing to do? I'm starting to lean back toward the opinion that AI is going to destroy us if this is the kind of thing we're going to do to it.

"At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania," Thaler says. "At the other, we have reduced cognitive flow and depression." This process is illustrated by DABUS's artistic output, which creates progressively more bizarre and surreal images.
 
Neal Stephenson really sees the future

http://vanemden.com/books/neals/jipi.html

Jipi sees the answer immediately, but it takes several minutes to make herself believe it’s possible. She gets pretty involved with thinking about this, and eventually realizes that several minutes have gone by, during which Mr. Cardoza has fielded a couple of important phone calls, and she has at some point raised both hands up to the top of her head and begun massaging her scalp, tracing those little meandering crevices between the plates of her skull. “Uh,” she finally says, “so you’re telling me that they created software to simulate the thought processes of paranoid schizophrenics?”

“Remember,” Mr. Cardoza says, “that some of the conversations, in this experiment, were between normal humans and paranoid schizophrenics. Others were between normal humans and software daemons. At the end of each conversation—” he starts flourishing his pen at the whiteboard’s question marks.

“The normal human would have to give an opinion as to whether the entity he’d just been conversing with was a paranoid schizophrenic or an evolver.”

“Yes.”

“So, if you hooked up the experiment in the right way—”

“If you killed the evolvers that were easily recognized as evolvers, and allowed the ones who seemed, to normal humans, like paranoid schizophrenics, to reproduce—”

“Eventually,” Jipi says, “you’d evolve some software that behaved, on the Net, just like a paranoid schizophrenic.”
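
What Stephenson is describing is essentially a genetic algorithm with human judges as the fitness function: kill the evolvers that get spotted as software, let the ones mistaken for paranoid schizophrenics reproduce. Here's a minimal Python sketch of that loop, assuming each "evolver" is boiled down to a single number (its chance of fooling a judge) and the human rater is faked with a coin flip; judge(), mutate(), and evolve() are hypothetical stand-ins, not anything from the article or the book:

import random

# Toy version of the selection loop from the excerpt: evolvers that a
# judge pegs as software are culled; ones mistaken for paranoid
# schizophrenics reproduce with mutation. judge() stands in for the
# human raters, and the "genome" is just a pass probability.

def judge(evolver):
    # Placeholder human rater: True if the evolver passes as a
    # paranoid schizophrenic rather than as software.
    return random.random() < evolver

def mutate(evolver):
    # Placeholder variation operator: jitter the genome slightly,
    # clamped back into a valid probability.
    return min(1.0, max(0.0, evolver + random.gauss(0, 0.05)))

def evolve(pop_size=100, generations=50):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep only evolvers that fooled the judge
        # (fall back to the whole population if none survived).
        survivors = [e for e in population if judge(e)] or population
        # Reproduction: survivors breed back up to full size.
        population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return population

final = evolve()
print(f"mean pass probability after evolution: {sum(final) / len(final):.2f}")

In the real experiment the genome would be the daemon's conversational behavior and the judge an actual human, but the selection structure is the same: whatever fools the judge gets to reproduce.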
 
If there's one thing humans do well, it's mental instability. Let's keep that code well contained, eh? Competition is not good in this case.
 
On one hand I could see this being dangerous. On the other hand, if we're pushing harder into AI, it's probably best to have a good understanding of things like this as well. What happens when rogue groups have the ability to put together an unstable AI just for the hell of it? It would probably be good to have dealt with that kind of thing beforehand, at least at some level. Just hypothesizing here, but it seems like we should have a grasp on ALL aspects of this if we're going to bother at all.
 
So? What's wrong with those images? I don't see anything wrong with them. What's the problem here?
 
I can see Stephen Hawking definitely having a class "A" shitfit over this :D
 