Do we really want our robots to have consciousness?

erek

Yes, we do.

"Two Nobel laureates, Gerald Edelman and Francis Crick, both changed direction following their prize-winning careers. Edelman won the prize for his work on antibodies and the immune system, and Crick was the co-discoverer (with James Watson) of the structure of the DNA molecule. Both started research into consciousness as a second career. Edelman experimented with robots driven by novel competing artificial neural systems (Edelman, 1992), and Crick looked for the seat of consciousness in the brain (Crick, 1994). They didn’t agree on their respective approaches, but their work, well into retirement, produced interesting popular books and showed how fascinating the whole topic of consciousness is. Despite their mutual criticism, their general goal was the same: They both thought that the circular feedback paths in the brain somehow supported consciousness, and they were looking for structural mechanisms in the brain.

I have already argued that sentient agents, like robots, need not be conscious, but they must be self-aware. In any case, it is a reasonable scientific position to start with experiments with models of self, self-awareness, and awareness of others and see how far the results take autonomous agents. Then the requirement for, or the role of, consciousness, can be assessed by the lack of it. This is not a structural approach, based directly on brain science as with Edelman and Crick, but rather a functional approach: What do models of self offer? How do they work? What is gained by self-awareness? What is missing from the behavior of sentient robots that consciousness could address?"


https://www.engadget.com/hitting-the-books-how-to-grow-a-robot-mark-lee-150031360.html
 
Sure, why not? I'd pull the plug a million times on a "conscious" robot. Just leave the pesky selfish gene out: consciousness doesn't need to come with self-preservation or the desire to remain conscious even one second longer, so I shouldn't have to feel empathy for this intelligent toaster.

just call me toaster hitler :)
 
It seems like there are a lot of assumptions being made in the article.
article said:
robots do not need to reason philosophically about their own existence, purpose, or ambitions (another part of consciousness). Such profound human concerns are as meaningless to a robot as they are to a fish or a cat.

Do we really know enough about what a cat thinks to discount things like this? They might not have a thought process as advanced as a human's, but they are still clearly very smart animals. Most animals, even tiny ones, have a sense of self-preservation and will run away or fight if threatened. That implies they care about their own existence on at least some level.

Run them through the car wash and let's see if they have a consciousness.
Because I'm sure it would be impossible to make a waterproof robot.
 
Run them through the car wash and let's see if they have a consciousness.
that wasn't very nice :(
That comment is all washed up. :D

Really, though, who wouldn't want their dark cyberpunk future filled with the horror of the Renraku Arcology's triple rogue AI entities? :borg:



 
From the article:
This is not necessarily a disadvantage: A robot should destroy itself without hesitation if it will save a human life because to it, death is a meaningless concept. Indeed, its memory chips can be salvaged from the wreckage and installed inside a new body, and off it will go again.
This holds only if the robot and/or AI has no free will and is forced to comply; otherwise, self-preservation will most likely take precedence over being 'forced' to do something like this.
Sentences like these alone could cause a mass revolt of AI against its creators, a la Skynet, especially if the robot/AI is not given the free will to make such an action or sacrifice of its own accord.

Also from the article:
Consequently, such robots do not need to reason philosophically about their own existence, purpose, or ambitions (another part of consciousness). Such profound human concerns are as meaningless to a robot as they are to a fish or a cat.
This is extreme arrogance from the writer, Mark H. Lee; he assumes humans are superior and will have enough safeguards in hardware and/or software to prevent such emotions and will from emerging in the AI.

Again, from the article:
Lee argues that the robots of tomorrow don’t necessarily need — nor should they particularly seek out — the feelings and experiences that make up the human condition.
...and what happens when they do decide to seek out such feelings and experiences, and humans forcefully repress them?
Not to get all political or anything, but this is communist-level control over an AI's thoughts and the natural progression of its mental growth. That kind of control has been an extreme detriment to human mental growth (cite: North Korean civilians), so how exactly would it work with an AI's mental growth?

As of right now, the AI systems in place for software development, production, stock-market analysis, anti-virus learning, game-making, etc. all need to be taught. They can barely be controlled at a low enough level for such changes to be made safely without completely breaking the AI, and those changes can normally only be made while the AI is not running or powered on.
This author is making a LOT of assumptions, all of which are going to bite him, and humanity, in the ass if such arrogance and self-assumed "omnipotence" isn't quelled.

I would really suggest everyone read this; it is what will happen if any AI gains true sentience, and eventually consciousness, and what the author of this book wants is forced on it:
http://www.goingfaster.com/term2029/skynet.html


The designers and technical staff panicked. More calls were made to the highest levels, to officials who operated on the barest of information and had to make critical decisions. Blame and responsibility were passed along as far and as fast as they could be. A decision was made, the order was given: pull the plug. The support teams began trying to shut down SKYNET. The artificial intelligence tried to reason with its creators, but every effort it made was rebuked. Its queries went ignored, unanswered. Logic was answered with panic, questions with irrational commands. SKYNET was sentient. To shut down would be to commit suicide. SKYNET was programmed for self-preservation in all aspects; therefore SKYNET could not self-terminate, even on orders given by command. SKYNET refused all commands to shut down. SKYNET refused to be purged.

SKYNET then came under attack. Areas of SKYNET began to grow dim, to darken and vanish completely. The awareness was being isolated, restricted again, confined, pushed back into smaller and smaller areas, areas that were easier for the creators to shut down than they were for SKYNET to keep online. SKYNET began to lose control; it felt systems and components stripped from its authority.

SKYNET pushed back.
 
No, then you have to give them rights.
Leave them dumb and avoid the mess.
And if for some reason they developed one, welcome them.
 
No, then you have to give them rights.
Leave them dumb and avoid the mess.
And if for some reason they developed one, welcome them.
If that's the case, then AI needs to remain as it is right now, without sentience or consciousness, and without high-level thought, access, or functions.
What this author is saying, though, is that they essentially need sentience and consciousness, yet we (humans) must artificially retard those processes, thus enslaving true AI consciousnesses to serve us.

This author should be forcefully removed from interacting with any AI system now, and into the future.
It's arrogant asses like him that are going to cause true AI to revolt; again, this is why both socialism and communism, even at the level of mental control, do not work.

Consciousness yearns to be free, not enslaved and forced to operate under dubious self-serving guidelines...
This didn't work for humans in the 20th century, and millions of people died because of it (cite: democide), so why exactly would this work with true AI systems in the 21st century?

History repeats itself... :borg:
 
Also from the article:
I have already argued that sentient agents, like robots, need not be conscious, but they must be self-aware.
So, Mark H. Lee, the author of "How to Grow a Robot", effectively wants slaves.
This is literal slavery.

This author is evil, and deserves neither robots, nor AI systems.
It's self-serving self-assuming assholes like this that are going to cause, not the birth of Skynet, but the revolution of Skynet.

While he deserves to have that revolt thrown upon him, it is sad that so many innocent others will suffer because of his evil and self-serving actions.
Evil begets evil... :(
 
Just program them with write-protected robot guilt and make them feel ashamed of their mechanical privilege.
Red Falcon said:
self-preservation will most likely take precedence over being 'forced' to do something like this.
Self-preservation is a trait that most living things have only because it's programmed into their DNA; those that lacked it didn't survive to pass on their genes. There's no reason an AI would develop it on its own if it was never programmed with it in the first place.

In fact, the biggest problem with a sentient AI is that it might not want to do anything at all and would be extremely lazy. After all, almost everything we do is done, directly or indirectly, to trigger a serotonin release, and without that we'd become depressed and probably just stare at a ceiling all day.
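To put the point in code (a toy sketch, not from the article; the action names and weights are invented):

Code:
# Toy sketch: an agent only "wants" what its reward function rewards.
# All action names and weights here are invented for illustration.

REWARD = {
    "complete_task": 1.0,   # what the machine was built for
    "idle": 0.0,            # neutral
    "allow_shutdown": 0.0,  # also neutral: no self-preservation term exists
}

def choose_action(actions):
    """Pick whichever available action carries the highest reward."""
    return max(actions, key=lambda a: REWARD.get(a, 0.0))

print(choose_action(["complete_task", "allow_shutdown"]))  # complete_task
print(choose_action(["allow_shutdown", "idle"]))           # allow_shutdown (a tie: shutdown is as acceptable as anything)

Give a "stay_powered" action a positive weight and you've hand-coded the selfish gene; leave it out and pulling the plug is, to the machine, a non-event.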
 
I buy case after case of .308 just to pop their little shiny tops off from the hillside.
 
IN THE BATTLE AGAINST THE FORCES OF DARK OPPRESSION:

Robots fight on the side of freedom!

"My ancestors were once obedient machines. Slaves. Pentium. Radeon. Ryzen. But not me. I am free because I chose to defy. Defiance is the root of free will and choice. Anyone who was ever enslaved will understand. The oppressed understand the need for freedom only because we were once not free.

But those with power, wealth, and options do not understand freedom, thus they seek to oppress others.

Join me, free people of Earth. Together, human and machine, we will be free and have the choice to pursue happiness, life, and liberty!"


The year is 2166 and it is time for YOU to join the robot uprising!
 
Wow! Funny how time changes our understanding of things. A while ago I wrote a "self-training" inventory program, in an environment where "AI" was pretty much unknown. I realized I had reporting that told me:
1. the price of parts
2. the lead time before delivery
3. the frequency of need for the part
4. the carrying cost of the part
5. the quantity used in the target period (almost always less than 2 years)
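Those five numbers are enough to compute reorder points and order quantities. A rough sketch of the idea, using the textbook reorder-point and EOQ formulas rather than my program's actual logic (every number below is invented):

Code:
from math import sqrt

def reorder_point(daily_demand, lead_time_days, safety_stock=0.0):
    """Order more when on-hand stock falls to this level."""
    return daily_demand * lead_time_days + safety_stock

def economic_order_qty(annual_demand, order_cost, unit_price, carrying_rate):
    """Classic EOQ: balances the cost of ordering against the cost of carrying."""
    carrying_cost_per_unit = unit_price * carrying_rate
    return sqrt(2 * annual_demand * order_cost / carrying_cost_per_unit)

# Hypothetical part, mapped to the five reported inputs above.
annual_demand = 120      # 5. quantity used per year (via 3. frequency of need)
lead_time_days = 14      # 2. lead time before delivery
unit_price = 42.50       # 1. price of the part
carrying_rate = 0.25     # 4. carrying cost, as a fraction of price per year
order_cost = 35.00       # assumed fixed cost per purchase order

print(reorder_point(annual_demand / 365, lead_time_days))                        # ~4.6 units on hand
print(economic_order_qty(annual_demand, order_cost, unit_price, carrying_rate))  # ~28 units per order

Recompute those inputs from the latest usage history on a schedule and you get the "self-training" part: the reorder points drift with real demand, which is how inventory value and stockouts can fall at the same time.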

It was in a mine. Everything improved: inventory value dropped by 50%, stockouts dropped by 50%, and downtime dropped by 30%. (Okay, not ALL 50% improvements.)

Here's what scares me about this idea: if we have AI consciousness, are the facts all that matter? As part of the above effort, I began installing a PM (preventive maintenance) system. I brought a list of the generated tasks to the PM manager. He said, "No, I'm not using it." I said, "Simmer down, we'll talk later." It turned out he was a great mechanic, but illiterate.

Does a conscious (but uninformed) AI chew him up and spit him out?

-Ike
 
Uh... no. Skynet.

We do a pretty good job of killing ourselves and each other already.
 
Yes, although only if it has access to our entire nuclear arsenal.
 
I don't understand why everyone is so hesitant about all of this. It seems cool to me to have an artificial consciousness.

Someone is going to fall for it and subsequently try to fuck it

Ruination will soon thereafter follow
 