The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots

We are about as far away from an AI that can actually understand the world as we are from writing on stone tablets.
Disagree. The speed at which we "advance" keeps increasing with technological advancement. We've had computers for, what, a little more than 70 years (not including analog/mechanical ones)? We're infants in the computer age, yet look at what we have now. We really are babies when it comes to how "advanced" we are. Throw another 100 years of computing advancements on and see where we are, then toss 1000 years on top of that and see how far things go.
 
Except you are forgetting about the time scale. How many civilizations destroy themselves or die out? Even if they are around for 100,000 years, that's a blink of an eye on the universe's timeline. The odds that two civilizations are around at the same time, and close enough to contact each other, are still very remote unless these civilizations last for millions of years.

And that's precisely why they believe ET probably is a robot :) You make some sort of automated system that has the ability to learn, it just needs a power source, and then all the global warming, smog, lack of food and breathable air, etc. matter not.
 
If even a single civilization in the Milky Way has been around for 100,000+ years, they should be all over the place. A fleet of exploration ships powered by nuclear pulse propulsion and building additional ships at each star could explore the entire galaxy in a million years or so. We're already capable of building such ships, though not yet at the necessary size and reliability.
Conjecture.

First, nuclear pulse propulsion, while certainly better than conventional rockets, isn't that fast. Plus (and I know this is a bit of an ignorant view) look at us and what we would do with 100k years of civilization: that means overpopulation issues, which means more resources spent dealing with people and not thinking about going somewhere. But let's assume they got their shit together, actually can get along, and don't outbreed their resources. "Hey, we can send a probe to the nearest star with this technology, it'll only take 100 years to get there." Is that something you think we'd spend money on? No way.

While we would certainly use that to traverse our own solar system, I doubt much exploration would go beyond that unless you A) found a way to harness tremendous energy, in which case you're limited by light speed, or B) found a way to go faster than light (which also needs tremendous energy). And those two might be the nails in the coffin of why civilizations haven't spawned out through the whole galaxy. If anything, the vast energy part is probably the biggest issue: if there's no way to harness vast amounts of energy (and take it with you on a trip), then everything else is largely irrelevant.
 
Disagree. The speed at which we "advance" keeps increasing with technological advancement. We've had computers for, what, a little more than 70 years (not including analog/mechanical ones)? We're infants in the computer age, yet look at what we have now. We really are babies when it comes to how "advanced" we are. Throw another 100 years of computing advancements on and see where we are, then toss 1000 years on top of that and see how far things go.

All major computer advancement has been a byproduct of materials science. So while computers seem to be snowballing in advancement, it is for the most part not self-advancement, except when a materials engineer uses them to develop a guess at a new material.

The problem with materials science is that there are only so many different atoms. There are quite a few combinations of those atoms, but even if the list is extremely large, it's finite. Our advancement so far may be based entirely on the "low-hanging fruit" phase of investigating those materials; things will peak, and the pace of advancement will start to slow and eventually stop.
 
“All artificial life forms would need is raw materials,” he said. “They might be in deep space, hovering around a star, or feeding off a black hole’s energy at the center of the galaxy.” (That last idea has seen its way into a number of science fiction novels, including works by Greg Bear and Gregory Benford). Which is to say, they could be, essentially, anywhere.
So the Reapers are real?! :eek:
 
This. The common assumption that artificial intelligence has to occur on the equivalent of modern (but faster) CPU architectures is absurdly limited thinking. Even if possible, it may prove so impractical that other methods achieve greater success first.

Like organic, self-replicating machines with neural networks.

Oh wait.

In the context of wondering about things like this, "artificial" and "robot" lack a lot of meaning. Hell, we could be those robots after being left alone for a while, as long as you don't insist on the notion of robots coming off an assembly line or being made out of metal and plastic and such.

Take that into account and it just boils down to: alien life may be truly weird. Which... like... no duh.
 
Like organic, self-replicating machines with neural networks. Oh wait.
Heh. Exactly. But why should it be surprising? We have no other working, recognizable models of cognition. Our best stabs at machine learning are incrementally improving imitations of natural systems. We have no other leads.

If nano-tech and bio-engineering ever took off and converged in the far, far future, artificial brains, in physical form, would surely be attempted. Will we achieve sufficient processing power and a deep enough understanding of cognition to achieve strong AI on current CPUs first? Damned if I know. Maybe there's a brilliant step in-between that no one has envisioned. Maybe not.
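Since the point above is that our machine learning is basically an imitation of natural systems, here's a toy illustration of what that imitation looks like at its most minimal, assuming nothing beyond plain Python: a single artificial "neuron" (a perceptron), loosely modeled on a biological one, learning the AND function. This is a sketch of the idea, not any particular research system.

```python
# Minimal sketch: one artificial "neuron" learning the AND function.
weights = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def fire(inputs):
    """Crude stand-in for a neuron: weighted sum, then an all-or-nothing spike."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Train on the AND truth table for a few passes.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):
    for inputs, target in data:
        error = target - fire(inputs)
        weights = [w + LEARNING_RATE * error * x for w, x in zip(weights, inputs)]
        bias += LEARNING_RATE * error

print([(inputs, fire(inputs)) for inputs, _ in data])
# -> [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

Everything we currently call deep learning is, at heart, stacks of units like this with better training tricks, which is the "incrementally improving imitation" part.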

In the context of wondering about things like this, "artificial" and "robot" lack a lot of meaning. Hell, we could be those robots after being left alone for a while, as long as you don't insist on the notion of robots coming off an assembly line or being made out of metal and plastic and such.

Whether one believes in the idea of a technological "Singularity" or not, writers on that subject have this stuff pretty well covered. We're already biological machines. We're also far more energy-efficient machines than an equivalent computer trying to simulate our entire brain would be, and we're a lot easier to mass-produce!

Take that into account and it just boils down to: alien life may be truly weird. Which... like... no duh.
Alien life might be alien to us! Where's our Nobel Prize? :D
 
Conjecture.

First, nuclear pulse propulsion, while certainly better than conventional rockets, isn't that fast.

Yes, it is. Orion-style nuclear pulse propulsion could theoretically achieve 8-10% of the speed of light. The Milky Way is about 100,000 light years across. A fleet of replicating Orion-style nuclear starships could visit every star in the galaxy in a little over a million years.
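For what it's worth, here's the back-of-the-envelope arithmetic behind that figure as I understand it. The 8%-of-c cruise speed and the 100,000 light-year diameter are the only inputs, and the pauses a replicating fleet would need at each star are ignored; a minimal sketch, not anything from the article:

```python
# Rough sanity check of the "little over a million years" figure.
GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way
CRUISE_SPEED_C = 0.08          # assumed fraction of light speed for an Orion-style ship

crossing_time_years = GALAXY_DIAMETER_LY / CRUISE_SPEED_C
print(f"Edge-to-edge crossing time: {crossing_time_years:,.0f} years")
# -> Edge-to-edge crossing time: 1,250,000 years
# Shipbuilding stopovers at each star would add time, but the order of magnitude holds.
```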

While we would certainly use that to traverse our own solar system, I doubt much exploration would go beyond that unless you A) found a way to harness tremendous energy, in which case you're limited by light speed, or B) found a way to go faster than light (which also needs tremendous energy). And those two might be the nails in the coffin of why civilizations haven't spawned out through the whole galaxy. If anything, the vast energy part is probably the biggest issue: if there's no way to harness vast amounts of energy (and take it with you on a trip), then everything else is largely irrelevant.

The voyages don't take that long. With nuclear pulse propulsion you could cover a light year in a decade or so. Nuclear pulse propulsion is '50s technology, and I don't think it's outside the realm of possibility that a civilization only a few hundred years more advanced than us could build ships capable of surviving a few decades in deep space, enabling them to start hopping from star to star. If you could travel at relativistic speeds you could go anywhere, the downside being that traveling thousands of light years quickly also means watching thousands of years pass by in the outside world.

We can imagine any number of reasons why civilizations might choose not to explore other stars, maybe some revolutionary discovery we haven't reached yet makes it unnecessary. Like I said earlier, all we can really say about ET life is that it's possible.
 
I love how no one answered my question on teaching the machines good and evil.
You all talk about AI and getting there, but what happens after you reach that point?
 
Heh. Exactly. But why should it be surprising? We have no other working, recognizable models of cognition. Our best stabs at machine learning are incrementally improving imitations of natural systems. We have no other leads.

If nano-tech and bio-engineering ever took off and converged in the far, far future, artificial brains, in physical form, would surely be attempted. Will we achieve sufficient processing power and a deep enough understanding of cognition to achieve strong AI on current CPUs first? Damned if I know. Maybe there's a brilliant step in-between that no one has envisioned. Maybe not.
The future of the development of artificial intelligence by humans lies with quantum computing.
 
Yes, it is. Orion-style nuclear pulse propulsion could theoretically achieve 8-10% of the speed of light. The Milky Way is about 100,000 light years across. A fleet of replicating Orion-style nuclear starships could visit every star in the galaxy in a little over a million years.
A little problem with that: it works by detonating nuclear bombs behind it. That means you need to carry a ton of fuel (a.k.a. nuclear bombs), so how many explosions are required to get up to 0.1c? Then there's the problem with "replicating" ships... you need materials, and fissile material or fusion fuel isn't that common. I mean, sure, maybe some advanced race managed to tap into some practically unlimited power source and could make its materials on the fly, but again, that's pure conjecture.

As of now we can only guess what "aliens" would do based upon what humans have done or thought up. Does an alien civ really want to send a craft somewhere if they know it'll take 50-100 years to get there? Unless it's doing it for the purpose of relocating, I don't see it.
 
I love how no one answered my question on teaching the machines good and evil. You all talk about AI and getting there, but what happens after you reach that point?
What were you expecting? We haven't solved that problem in humans either. :D

The future of the development of artificial intelligence by humans lies with quantum computing.
Absolutely. However, while ubiquitous quantum computing would be a huge boon to current and future AI algorithms, there's still no clear promise of strong AI in that direction.
 
Unfortunately, if we create human-level AI, and don't have those boundaries set, it will be our undoing.
Possibly. We could also destroy ourselves in dozens of other ways that don't include the creation of more sophisticated and potentially universe-exploring progeny. Given the other choices, I welcome our robot overlords.

Or it could end in disappointment. Imagine if all we can manage is to create a real-time simulation of Joe Shmoe's brain. A few "lucky" people get to experience temporary life extension in a disembodied hell that likely won't be anywhere near sophisticated enough to model the chemical complexities of brain function when it's first attempted. No neural growth, no long-term memory, limited emotion (no endocrine system!), and just maybe even unimaginable pain from simulated severed nerve endings. Be simulated and terminated, again and again, by professors and grad students who have no idea what they're doing. It would be illegal human experimentation, except that you may not have any rights, since the laws to protect you probably won't be in place. Sign me up!
 
Possibly. We could also destroy ourselves in dozens of other ways that don't include the creation of more sophisticated and potentially universe-exploring progeny. Given the other choices, I welcome our robot overlords.

Or it could end in disappointment. Imagine if all we can manage is to create a real-time simulation of Joe Shmoe's brain. A few "lucky" people get to experience temporary life extension in a disembodied hell that likely won't be anywhere near sophisticated enough to model the chemical complexities of brain function when it's first attempted. No neural growth, no long-term memory, limited emotion (no endocrine system!), and just maybe even unimaginable pain from simulated severed nerve endings. Be simulated and terminated, again and again, by professors and grad students who have no idea what they're doing. It would be illegal human experimentation, except that you may not have any rights, since the laws to protect you probably won't be in place. Sign me up!

Apparently you've never read/played "I Have No Mouth, and I Must Scream".

Such a fun meatsack you will make for the machines. ;)
 
I've been saying this forever. When we worry about alien contact, I've always said we have a much better chance of encountering a lifeless, robot-piloted explorer.
 
I call bullshit. The chances are much higher that the dominant form of life in the cosmos is single-celled organisms. They are simple, hardy, and insanely abundant on this planet alone. They are some of the first versions of life to evolve on this planet, and are likely some of the first versions of life to evolve anywhere else in the cosmos given the correct conditions. Other forms of life have evolved from them, and they themselves have had over 4 billion years to evolve on this planet alone, and up to 14 billion years elsewhere in the cosmos.

Had this article been titled "The Dominant Intelligent Life Form in the Cosmos Is Probably Superintelligent Robots," then it might have a bit more truth to it.
 
Apparently you've never read/played "I Have No Mouth, and I Must Scream". Such a fun meatsack you will make for the machines. ;)
Figures I'm just rehashing science fiction from almost half a century ago. Originality is hard, dammit. I know I've heard that title before, but I've never read it. One more for the reading list.
 
A little problem with that: it works by detonating nuclear bombs behind it. That means you need to carry a ton of fuel (a.k.a. nuclear bombs), so how many explosions are required to get up to 0.1c? Then there's the problem with "replicating" ships... you need materials, and fissile material or fusion fuel isn't that common. I mean, sure, maybe some advanced race managed to tap into some practically unlimited power source and could make its materials on the fly, but again, that's pure conjecture.

Fusion fuel is the most common type of regular matter in the universe... :confused: The other construction materials should be available at essentially every star. The same elements and laws of physics exist everywhere we can see. It's just an engineering problem.

Regardless, the math has been done. You could get to a nearby star with a sufficiently large Orion without having to refuel along the way. The real problems are:
1) The pusher plate needs a damping mechanism for manned missions.
2) Reliability. The ship has to survive and operate for 40+ years.

The design can be further improved with Daedalus-style fusion propulsion, though that won't be feasible until nuclear fusion can be used as an energy source, unless you're willing to carry a separate fission reactor for power.
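To put some hedged numbers on the earlier "how many explosions" question, here's a minimal sketch using the classical Tsiolkovsky rocket equation (relativistic corrections are small at 0.1c). The effective exhaust velocities are illustrative assumptions of mine, not figures from any actual Orion or Daedalus study:

```python
import math

C = 299_792.458  # speed of light in km/s

def mass_ratio(delta_v_kms, exhaust_velocity_kms):
    """Initial-to-final mass ratio needed for a given delta-v (Tsiolkovsky)."""
    return math.exp(delta_v_kms / exhaust_velocity_kms)

# Accelerate to 10% of c; a matching burn to slow down at the target would
# square these ratios, which is why flyby and stopover missions differ so much.
target_dv = 0.1 * C

for v_e in (1_000, 5_000, 10_000):  # assumed effective exhaust velocities, km/s
    print(f"v_e = {v_e:>6,} km/s -> mass ratio ~ {mass_ratio(target_dv, v_e):,.0f}")
# -> roughly 1e13, 4e2, and 2e1 respectively: the answer swings from hopeless
#    to "merely a big engineering problem" depending on how energetic each pulse is.
```

Turning a mass ratio into an actual bomb count needs a mass per pulse unit, which is exactly the kind of detail the published Orion studies argue about, so I'll stop there.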

As of now we can only guess what "aliens" would do based upon what humans have done or thought up.

Right. I'm not recommending running out and building an Orion tomorrow; my point was that it appears to be simply an engineering problem. Any civilization that's been around a bit longer should be capable of interstellar spaceflight. Whether they encounter some difficulty we haven't anticipated, or they just hide from newly emerging animals like us, or don't exist at all, is up in the air. The only thing we can say for certain about alien life at this point is that it's possible.
 
AI shouldn't even really be distinguished as anything other than intelligence.

A domestic house cat may not build a space ship and travel to the moon anytime soon, but it has intelligence.

Our "AI" is rapidly approaching that level, with the ability to process huge amounts of data efficiently with a flawless massive database of information at its disposal for a truly "networked" mind.

Consider how long it has taken for intelligence to evolve on Earth (from the first rudimentary, reactive nervous systems to the human brain), and how rapid AI development has been in just the last twenty YEARS.

IMO, just as life has continued to evolve and improve itself over time, humans aren't the "end" species either, and I think it's inevitable that what we call "AI" will be the next logical evolutionary step to replace man, and there's nothing wrong with that. All species are just links in a long chain towards life's goal of bringing order to disorder, or, as someone once said, the way for the universe to experience itself.

DNA isn't so special after all; it's really just a very primitive information storage system. Now that we have far more efficient means of storing and passing on information, artificial life would be able to evolve, improve, and replicate itself at a far more rapid rate and expand across the cosmos.

Earth may be rare, but considering how ridiculously massive the universe is, it's extremely unlikely there aren't a buttload of other systems just like Earth where life evolved just as it did here, only earlier, to the point that they are beyond their primitive biological origins.

This. It is conceit, often based on religion, to think we are the ultimate evolution of life, or the only life. With how huge the universe is, we are but specks of dust amidst an infinitesimal slice of time. Thinking based solely on today is pathetically closed-minded. It is statistically almost certain that we are very, very late to exist compared to other life, and still a natural work in progress. What we currently refer to as AI will evolve and be improved, eventually becoming sentient or so close to it as to be indistinguishable other than by semantics. This isn't a bad thing; it is the natural order of time. And the only thing that may stop it is us humans wiping each other out too quickly.
 
I love how no one answered my question on teaching the machines good and evil.
You all talk about AI and getting there, but what happens after you reach that point?


Considering good and evil are man-made ideas based on opinions, there is nothing to teach them.
 
I think it's funny how we're worried about aliens visiting Earth - yet we stepped on the Moon and visited Mars. The first UFO isn't from Mars visiting Earth, it's from Earth and it visited Mars.

I think it's funny how we're worried about aliens abducting us and doing weird operations on us - yet we do just those things to animals.

I think it's funny how we're worried about aliens being smarter and more advanced than us - yet we're smarter and more advanced than all other known animals.

We're the goddamn aliens, now let's go into space and wreck some shit.
 
Good luck programming them with good and evil:

I don't see how that would, at all, be hard. If we can program it to learn social norms, that would be its good and its evil, since such things are purely based on the view of the person and, to an extent, the society that person is in. Ultimately, everyone views good and evil differently. A robot killing a human seems evil to us, but maybe not so much to the robot doing the killing.

The problem, really, is getting the AI to the level where it can process things in such a way that it can take in the amount of information that we do without overloading. We receive a gajillion bits of it every day; we just have wiring that filters a lot of it out as standard or inconsequential.
 
Anyway, my original thought coming into this is... did anyone else think Mass Effect?
 
I don't see how that would, at all, be hard. If we can program it to learn social norms, that would be its good and its evil.

There has to be an ultimate authority, though, dictating good and evil.
For humans, it is God.

For robots, it is humans? Really? We can't even hold our own countries together, let alone dictate good and evil to an advanced AI.

Human arrogance knows no bounds.
 
There has to be an ultimate authority, though, dictating good and evil.
For humans, it is God.

For robots, it is humans? Really? We can't even hold our own countries together, let alone dictate good and evil to an advanced AI.

Human arrogance knows no bounds.

Lol what? Your ignorance knows no bounds.
 
There has to be an ultimate authority, though, dictating good and evil.
For humans, it is God.

For robots, it is humans? Really? We can't even hold our own countries together, let alone dictate good and evil to an advanced AI.

Human arrogance knows no bounds.

Lol, what on Earth are you on about? I'm with the guy above me, regarding your ignorance knowing no bounds.

P.S. When was the last time your particular religion's God spoke to you and said you're bad or good? Might want to find a doctor...
 
There has to be an ultimate authority, though, dictating good and evil.
For humans, it is God.


I never understood how there can be no right or wrong without a god. I guess it's just hard for me to accept that there are so many faithers out there who would be raping little girls and torturing babies if it were not forbidden by [Allah, God, Yahweh, Jesus, Mohammad, Joseph Smith, Buddha, ...]


Even though I'm an atheist capable of discerning fair and unfair, I'm glad for religion, because I think it keeps these inherently immoral people on a tight leash.
 
I never understood how there can be no right or wrong without a god. I guess it's just hard for me to accept that there are so many faithers out there who would be raping little girls and torturing babies if it were not forbidden by [Allah, God, Yahweh, Jesus, Mohammad, Joseph Smith, Buddha, ...]


Even though I'm an atheist capable of discerning fair and unfair, I'm glad for religion, because I think it keeps these inherently immoral people on a tight leash.


Quoted for truth and ditto.
 
Considering good and evil are man-made ideas based on opinions, there is nothing to teach them.
Ultimately, everyone views good and evil differently.
Two quotes, proclaiming that good and evil are arbitrary and/or relative. You can find plenty more where these came from.

I never understood how there can be no right or wrong without a god.
A third quote, suggesting there can be right and wrong without a god. However, is this right and wrong you insist on arbitrary and relative, as the two quotes above suggest? If so, who cares? What good is it in terms of establishing an ultimate authority for an AI?

Or is this just an excuse to randomly attack someone for mentioning the concept of an ultimate authority (God, for example) in what was an appropriate context?

Even though I'm an atheist capable of discerning fair and unfair, I'm glad for religion, because I think it keeps these inherently immoral people on a tight leash.
Because when someone who might be religious makes a hypothetical statement, you have to take their thought experiment seriously, as if it were immediately applicable to them, as they are inherently immoral by their own admission, being religious and crazy and all. :rolleyes:
 
I love how people on here go out of their way to attack the idea of an ultimate Creator of everything.
I really hope you all live to see an advanced, intelligent, and self-aware AI, I really do.

You will find how quickly you will become irrelevant, as will your arrogance:

"It wasn't a fair universe, nor a kind one. If there was a God, his love and forty-five cents would buy you coffee.
No one seemed to be at the cosmic controls anymore. It was every man for himself, until SKYNET became alive and
filled the void left by a seemingly disinterested God. Its vision was very controlled. The ultimate dream of man, carried out
by one of man's lowliest tools; eliminate evil men. But there was a touch of evil in all men, and SKYNET was having
trouble separating the worst of them out. So the totality of humanity, with all of its biologic messiness, wasn't wanted.
And to this machine-god, forgiveness just did not compute. Only cold retribution for the sins of the past."
Kettle, meet pot.
 