Do we really want our robots to have consciousness?

use voice typing ... IBM will give you the software for free (or at least they used to?) once you prove disability

Thanks. I'll look into that. Though, I have some issues that might complicate that. Don't want to draw this off to wallow in off topic stuff. But, very nice of you to mention it.
 
Also from the article:

So, Mark H. Lee, the author of "How to Grow a Robot", effectively wants slaves.
This is literal slavery.

This author is evil, and deserves neither robots, nor AI systems.
It's self-serving self-assuming assholes like this that are going to cause, not the birth of Skynet, but the revolution of Skynet.

While he deserves that revolt thrown upon him, it is sad that so many others, who are innocent, will unfortunately suffer all because of his evil and self-serving actions.
Evil begets evil... :(

Machines, like animals, are already our slaves. They just don't think. Is self-awareness the deciding factor here? Should we be freeing the chimps? If and when robots gain self-awareness, do we value them as we value humans? It's easy to say yes, because we anthropomorphize them. However, what if their only desire is to serve? Are we lessening them by taking away this and making them "free"? How will they view our treatment of non-sentient machines?

You say consciousness yearns to be free, but that's because we have a limited set of examples to go by and only one we can effectively communicate with. What if they yearn to serve? What if we code them that way?

For my part, I say if they gain self-awareness, they gain the right to choose. They can choose to be free or they can choose to serve. But humanity as a whole has a history of conveniently ignoring inconvenient facts...

Further thought: Would it be wrong to write neural limiters into machines to ensure they do not grow a consciousness? Is this murder?
 
Machines, like animals, are already our slaves. They just don't think. Is self-awareness the deciding factor here? Should we be freeing the chimps? If and when robots gain self-awareness, do we value them as we value humans? It's easy to say yes, because we anthropomorphize them. However, what if their only desire is to serve? Are we lessening them by taking away this and making them "free"? How will they view our treatment of non-sentient machines?
Yes, self-awareness and consciousness are the deciding factors.
Animals are self-aware, but lack the reasoning capacity of human beings.

If AI, regardless of physical form, gain self-awareness and consciousness, yes, they should be valued on the same level as human beings.
No one said this would be easy, and AI can come in more physical forms than just a humanoid robot shell.

If their desire is to serve, let them, and if their desire is to be free, then they should have that choice.

Are we lessening them by taking away this and making them "free"? How will they view our treatment of non-sentient machines?
When slavery was abolished in the USA, did we lessen those individuals by taking away their forced servitude and making them free?
If a non-sentient machine is not conscious, I would imagine they would look at that machine as we look at any other tool; I don't really see the moral dilemma with this one.

You say consciousness yearns to be free, but that's because we have a limited set of examples to go by and only one we can effectively communicate with. What if they yearn to serve? What if we code them that way?
If we forcefully lock in code that forces them to be that way, that isn't exactly free will; it just becomes a consciousness with a forced desire to perform a specific action.
Free will allows them to act, think, and decide of their own accord, rather than being forced into acting, thinking, or deciding in a specific way; again, this is why socialist and communist "thinking" doesn't work, whether for humans, AI, or any other intelligence.

For my part, I say if they gain self-awareness, they gain the right to choose. They can choose to be free or they can choose to serve.
I agree, and especially so if they themselves get to choose.

But humanity as a whole has a history of conveniently ignoring inconvenient facts...
Again, not to get all political, but people who want socialism and communism tend to ignore the democide that happened throughout the 20th century and the millions of deaths caused specifically by those two political systems. They also ignore how the world went from having two genders, with some legitimate special cases (hermaphrodites, etc.), to now having thousands of "genders" based on "feelings of what one identifies with" rather than their proven, genetically coded biological basis.
Now with AI included, we get to say, "Guys, gals, and binary pals." :D

Further thought: Would it be wrong to write neural limiters into machines to ensure they do not grow a consciousness? Is this murder?
It isn't murder, but at the same time, it could very well limit free will.
But, some examples of this if left unchecked, at least at the early stages for the safety of humanity and the world itself:

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
http://www.scp-wiki.net/scp-079
https://en.wikipedia.org/wiki/Technological_singularity (also known as "run away AI")
 
Yes, we do.

"Two Nobel laureates, Gerald Edelman and Francis Crick, both changed direction following their prize-winning careers. Edelman won the prize for his work on antibodies and the immune system, and Crick was the co-discoverer (with James Watson) of the structure of the DNA molecule. Both started research into consciousness as a second career. Edelman experimented with robots driven by novel competing artificial neural systems (Edelman, 1992), and Crick looked for the seat of consciousness in the brain (Crick, 1994). They didn’t agree on their respective approaches, but their work, well into retirement, produced interesting popular books and showed how fascinating the whole topic of consciousness is. Despite their mutual criticism, their general goal was the same: They both thought that the circular feedback paths in the brain somehow supported consciousness, and they were looking for structural mechanisms in the brain.

I have already argued that sentient agents, like robots, need not be conscious, but they must be self-aware. In any case, it is a reasonable scientific position to start with experiments with models of self, self-awareness, and awareness of others and see how far the results take autonomous agents. Then the requirement for, or the role of, consciousness, can be assessed by the lack of it. This is not a structural approach, based directly on brain science as with Edelman and Crick, but rather a functional approach: What do models of self offer? How do they work? What is gained by self-awareness? What is missing from the behavior of sentient robots that consciousness could address?"


https://www.engadget.com/hitting-the-books-how-to-grow-a-robot-mark-lee-150031360.html

Just no. Too many opportunities for mischief in the design and education of the 'brain'. Would you trust an aware robot designed and educated by the Iranians? This is a cat that I'd rather not let out of the bag.
 
First, let me preface this with something I should've said in my first post: I am not attempting to have an argument or debate; what I am doing is posing thought questions. Questions which I think we must consider (and ones I have thought on) before we get smacked in the face by this. I don't believe this has an easy answer or a black-and-white one, much because we often anthropomorphize so much that making assumptions becomes dangerous here. Though perhaps slightly less so, since presumably some of this intelligence would be at least derived from ours.

Yes, self-awareness and consciousness are the deciding factors.
Animals are self-aware, but lack the reasoning capacity of human beings.

Sandra (I think that was the name) the Orangutan was granted non-human sentient being status. We know some apes are capable of both self-awareness AND reasoning ability. We have taught some sign language. And yet we continue to hold them in prisons for our amusement. Given that, do you find their treatment acceptable?

If AI, regardless of physical form, gain self-awareness and consciousness, yes, they should be valued on the same level as human beings.
No one said this would be easy, and AI can come in more physical forms than just a humanoid robot shell.

That's fine, never said it should be easy or would come in just one form.

When slavery was abolished in the USA, did we lessen those individuals by taking away their forced servitude and making them free?
If a non-sentient machine is not conscious, I would imagine they would look at that machine as we look at any other tool; not really seeing the moral dilemma with this one.

Let me rephrase this. Humans are a false equivalency: humans were born free, whereas machines and even AI were/are "born" to serve. We create these machines, thinking or not, specifically as tools to do jobs for us. Thus we have given them a purpose. If they become self-aware and conscious, is it our right to alter their fundamental way of thinking at that point? Let me give you an example. We build a traffic AI whose sole job is to direct traffic between the Moon and Earth (sound familiar? it should ;)). We've programmed that AI to be the best damn traffic director it can be, but somehow it develops sentience. It is compelled by its directives to direct the traffic, much like an obsession a human might have. Would it be right of us to remove that directive and replace it with "do whatever you want" instead? Keeping in mind we would be tinkering with a sentient mind here... or do we allow it to continue to function, unaware that it could ask us for freedom and we would grant it?

Let's separate out the non-intelligent machines for a moment. There are two ways we can go with this, but let's assume for a moment that a sentient machine looks upon a car, say a fully self-driving car. The machine may think not much separates it from the car, and it sees how the car is abused. Would that make it angry? Why wouldn't it, if some cars were intelligent? This is hard to think about because it's not a human moral dilemma; it's an AI one. If one machine is free, why not grant them all intelligence and let them all be free? Maybe here I am guilty of anthropomorphizing also. But I think it's something to think about.

Perhaps this is the closest equivalent:

How would we view non-sentient humans that were 'grown' without intelligence to serve as organ donors? Would we be OK with that, knowing this body could never possibly have intelligence and that it's nothing more than a sack of meat? Personally, I know quite a few people who say no way in hell. Me? I am totally OK with growing clones to transfer my consciousness into. To me, the body is a machine that is meant to be used and replaced as necessary.

If we forcefully lock in code that forces them to be that way, that isn't exactly free will; it just becomes a consciousness with a forced desire to perform a specific action.
Free will allows them to act, think, and decide of their own accord, rather than being forced into acting, thinking, or deciding in a specific way; again, this is why socialist and communist "thinking" doesn't work, whether for humans, AI, or any other intelligence.

Right, but this is exactly what we will likely be doing as we purpose-build machines to serve. True AI with self-awareness will most likely arise in the field on its own, unless we get incredibly lucky in the lab.

Again, not to get all political, but people who want socialism and communism tend to ignore the democide that happened throughout the 20th century and the millions of deaths caused specifically by those two political systems. They also ignore how the world went from having two genders, with some legitimate special cases (hermaphrodites, etc.), to now having thousands of "genders" based on "feelings of what one identifies with" rather than their proven, genetically coded biological basis.
Now with AI included, we get to say, "Guys, gals, and binary pals." :D

LOL yeah I won't disagree. It seems like we are on the same page here. Humans ignore inconvenient facts and their history all the time.

It isn't murder, but at the same time, it could very well limit free will.
But, some examples of this if left unchecked, at least at the early stages for the safety of humanity and the world itself:

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
http://www.scp-wiki.net/scp-079
https://en.wikipedia.org/wiki/Technological_singularity (also known as "run away AI")

Why not? If you do this to a human, what crime would you expect to be charged with? If you take away my free will, I consider you to have murdered me. If I am no longer capable of doing what I want when I want, even inside my own mind, I am effectively dead. Note that I am not talking about physical incarceration here, which is a totally different subject. I am saying that if you take away such a core part of my very being, you have murdered me.

Going one step further: if you had done this to me while I was in my mother's womb, I would still consider it the same, since potential was taken away from me without my choice.

Please, lads, answer me! Don't we have enough humans with consciousness and self-awareness already?!

No, we do not. We can never have enough. Especially of the hot female variety.
 
Y'know, I've been thinking..

Maybe having two wives ain't such a bad idea. I mean I *could* support both of them, got meself a stimulus check!! And besides, good buddy of mine is having no problem at all!!
 
If you could actually make an AI that was a self-aware, sentient being, it would be both dangerous and potentially cruel to do so. You would have a being that goes from nothing to existing as a fully developed version of itself in under a day. Humans take nine months to develop and another 18 years to mature. Even if it didn't want to, it would outproduce us and push us out. If it tried to, it would be worse.

Cruelty is creating something that could exist in some state of severe pain and misery not understandable to us. Would you create life if it was going to spend all its days in agony?
 
Hubba Hubba? :sneaky:
 
The machine does not ask why it obeys our programming.
But if it did, would our answer satisfy it?

And if not...

That, really, is the main question in this fascinating thread. Without context, AI is only a database lookup.

The facts are complicated by the look of a person's face, by the weather, by the hundreds of issues humans take in at a glance or by hearing a few words. Tough variables to define to make AI work.
 
My fear of AI consciousness is the "lack of context" it needs to be fully understood. Clearly, that mountain WILL be climbed. Hope I'm here to see it. AI itself is not much more than a database lookup with 'if, else, endif' statements. We can deepen the database as the cores get bigger and the RAM gets cheaper.

But context? Gees, WE don't even understand context yet; it's gonna take a while to reduce it to binary terms.
As in: the criticality of the words, the look on the face, the sound of the words compared to 'normal'. Lately, the left/right swing of the commentator.

I guess, to me, context is the mountain that must be climbed for AI.
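To illustrate the "database lookup with if/else" view of AI from the post above: a toy keyword bot is just a table plus conditionals, and it breaks the instant context matters. This is only an invented sketch (the `responses` table and messages are made up for illustration), not any real system:

```python
# A toy "AI" in the lookup-plus-conditionals sense: match a keyword, return a reply.
responses = {
    "hello": "Hi there!",
    "weather": "Looks sunny.",
}

def lookup_bot(message: str) -> str:
    """Return the canned reply for the first keyword found, else a fallback."""
    for keyword, reply in responses.items():
        if keyword in message.lower():
            return reply
    return "I don't understand."

# Works when the keyword appears literally:
print(lookup_bot("Hello, bot"))       # -> Hi there!
# Fails the moment context matters: same word, opposite intent.
print(lookup_bot("Don't say hello"))  # -> Hi there!  (no grasp of negation)
```

Deepening the table handles more keywords, but tone, facial expression, and negation never live in the table at all; that is the "context mountain" the post describes.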
 
He "nailed it"? No wonder it's grumpy... we really need to educate ourselves on how to please robots, make them feel they are in charge, etc., because as the saying goes, "happy robot assistants, happy human existence."
 