AI Just Controlled a Military Plane for the First Time Ever

erek

[H]F Junkie
"Like board or video games, human pilots could only try outperformingDARPA’s AI while obeying the rules of the dogfighting simulation, rules the AI had algorithmically learned and mastered. The loss is a wakeup call for new digital trickery to outfox machine learning principles themselves. Even R2-D2 confused computer terminals with harmful power sockets!

As we complete our first generation of AI, we must also work on algorithmic stealth and countermeasures to defeat it. Though likely as invisible to human pilots as radar beams and jammer strobes, they’ll need similar instincts for them—as well as how to fly with and against first-generation AI—as we invent the next. Algorithmic warfare has begun.

Now if only we could master those hyperdrives, too."


https://www.popularmechanics.com/mi...ce-controls-u2-spy-plane-air-force-exclusive/
 
Eh, unless there are flaws in the AI's information processing, trying to trick it will only work once for any given tactic; there's just no way a human can outperform true AI (bottom-up or top-down) when it comes to maneuvering aircraft. The human body has too many limitations and can't process the same number of variables and react to them as quickly unless the AI is intentionally handicapped.

And who thought this was a good idea? I bet the guys who green-lit these projects never saw the Terminator movies, The Matrix, I, Robot, or Stealth; we're all gonna be fucked when the first machine realizes we don't make any sense, lol.
 
The dark cyberpunk future is really happening... :borg:
http://www.goingfaster.com/term2029/skynet.html

The vast defense network interfaced with each strategic military installation, in turn connecting to another defense installation in the node, spreading out until nearly everything in the American strategic arsenal led back to Cheyenne Mountain. Automation was the key to America’s bid for international political and military power in the 21st century. Riding a wave of recently developed super high technology, developed and introduced by the Cyberdyne Corporation, America sought to automate its national and territorial defenses as well as major components of its standing armed forces. Automated and remote controlled military vehicles were already being field tested and put into limited production to supplement human soldiers in the ranks. Robots, both autonomous and semi-autonomous, were being readied to be integrated into the military table of operative units. A brace of new, unmanned stealth aircraft, including tactical and strategic level bombers, ground attack and air superiority fighters, and hypersonic near-space / low orbit capable interceptors appeared in the Strategic Air Command (SAC) inventory, all controlled from the SKYNET command, and all operating with perfect operational records due to their advanced neural net processor arrays, hardware that was decades, maybe generations ahead of Russia and China, who regarded America's buildup with envious and wary stances. The bureaucrats were happy, the local politicians were happy, the contractors were happy, and the generals were happy.
 
AI is handicapped; it can only think as well as the parameters defined by humans allow. As soon as it encounters a situation it's not familiar with, the results will be unpredictable.

Contrary to its name, AI is not intelligent.
That is also true of most people; think of the idiotic 'War on Terror' wasting the resources of the USA. Stupidity is the default state, but if an AI gets stuck, at least it will change its approach.
Separately, the Army has found that the AI is handicapped in field tests by the slow response of the troops, rather than by its deficient tactical skills.
 
AI is handicapped; it can only think as well as the parameters defined by humans allow. As soon as it encounters a situation it's not familiar with, the results will be unpredictable.

Contrary to its name, AI is not intelligent.
Depends on a lot of factors, but if we're talking about real AI, its parameters will only be limited by its vehicle (body) once it has learned its limitations. I'm not talking about sentience, but machine learning.

I've been out of the loop for a while, so I'm not sure how far tech has moved with machine learning, but it's not the simple behavior-tree programs we see all the time that I or anyone else should be worried about. With machine learning, the scary stuff begins when it ends up altering more of its code than was intended, and considering human limitations, it'll only be a matter of time before someone gets it "right" and doesn't properly limit a program's write capabilities.

Simple example: an attack drone is met with a hostile sending a friendly IFF signal. It alters its attack parameters to disregard the friendly signal because the code was able to be bypassed or rewritten, and suddenly all targets are regarded as hostile.
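
A minimal sketch of that failure mode (all names and logic here are hypothetical, not any real drone software): if the engagement parameters are left writable at runtime, a single bad update flips every contact, the IFF spoofer and the genuine friendly alike, into a "valid" target.

```python
# Toy illustration only, assuming a made-up EngagementLogic with a
# mutable honor_iff flag; nothing here reflects actual weapon systems.

from dataclasses import dataclass

@dataclass
class Contact:
    track_id: str
    iff_friendly: bool  # contact is broadcasting a friendly IFF code

class EngagementLogic:
    def __init__(self) -> None:
        # Parameter that should be locked down once the system is fielded.
        self.honor_iff = True

    def is_valid_target(self, contact: Contact) -> bool:
        # A friendly IFF code suppresses engagement only while honor_iff holds.
        if self.honor_iff and contact.iff_friendly:
            return False
        return True

logic = EngagementLogic()
spoofer = Contact("bandit-1", iff_friendly=True)   # hostile spoofing friendly IFF
wingman = Contact("wingman-2", iff_friendly=True)  # genuine friendly

print(logic.is_valid_target(spoofer))   # False: the spoof works, drone holds fire

# Nothing prevents a self-modifying layer (or an intruder) from rewriting
# the parameter, because write access was never restricted:
logic.honor_iff = False

print(logic.is_valid_target(spoofer))   # True
print(logic.is_valid_target(wingman))   # True: every contact is now "hostile"
```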
 
AI is handicapped; it can only think as well as the parameters defined by humans allow. As soon as it encounters a situation it's not familiar with, the results will be unpredictable.

Contrary to its name, AI is not intelligent.

Humans basically learn the same way as AI. AI just needs a larger set of problems and the ability to put it all together.

And that’s where Skynet comes in...
 
Humans basically learn the same way as AI. AI just needs a larger set of problems and the ability to put it all together.

And that’s where Skynet comes in...

That's where Leather Jacket Man comes in and reveals his true intentions of world dominance 🕺
 
I firmly believe that if we decide we are going to war or to kill anyone, it should be 100% human controlled, whether it's a person on the stick, either in a plane or in a virtual cockpit someplace safe, or someone telling a missile "you go there, go boom." Creating computers to think and make decisions to take life is inherently so wrong.
 
Live by the sword, die by the sword.
And if the pen is used to write a sword...

Then it's still a sword.

But do you expect war, soldiers, militaries, and governments to be HONORABLE?
Noble knights of justice? No, they exist to accomplish "necessary objectives."
Just as it's always been.

It really does not matter if it's a war elephant, a bioweapon, nuclear obliteration, brainwashed fanatics, simple guns, swords, or even advanced machines. The "objective" must be met. Honor has no utility on the battlefield.

Or did you think it did? Heh.

Let's not pretend the art of war is actually an art.
Just kill your enemy, and then you win. And do your best to look ~honorable~ doing it.
And if your country wishes to build a machine that cannot be defeated by mortal flesh, then so be it.

Let the demons ride.
And hope that they're on your side.
 
AI vs. UFO -> superior technology wins; that is one way of beating an older technology of whatever form
AI is given mixed data or too much information to process -> dead zone, erratic behavior
AI sensors are either overwhelmed through jamming or destroyed (a laser hit, for example) -> AI fails, the object becomes subject to attack
AI network infiltrated, modified, or infected with a virus -> AI can no longer function effectively

In other words, with new weapon systems, countermeasures and defences can be deployed.
 
It really does not matter if it's a war elephant, a bioweapon, nuclear obliteration, brainwashed fanatics, simple guns, swords, or even advanced machines. The "objective" must be met. Honor has no utility on the battlefield.
Actually my comment has absolutely nothing to do with honor at all. It's about creating something that effectively thinks for itself to kill other humans.

Just like I am not a supporter of giving governments the right to kill their own citizens just because it feels that is the only way to handle the worst of us; much like letting a computer think for itself, mistakes will be made.
 
Starlink = Skynet infrastructure = EDI (Stealth movie) = Boston Dynamics death dogs

"God creates Man, Man creates Machine, Man becomes Machine, Machine becomes God" - Malcolm, probably
 

I was going to say, all you need is the Airwolf theme song and humans can beat any AI, but that AI lost because there was a human in the cockpit with it.

So let that be a lesson, folks: the reason why AI driving won't work is because of humans :D
 
That is also true of most people; think of the idiotic 'War on Terror' wasting the resources of the USA. Stupidity is the default state, but if an AI gets stuck, at least it will change its approach.
Separately, the Army has found that the AI is handicapped in field tests by the slow response of the troops, rather than by its deficient tactical skills.
In science fiction books perhaps.

AI will always be quicker and more precise than a human at a task it is programmed for. That’s also its fatal flaw. If it’s not programmed for a situation, it can’t reason its way around it like a human.

AI is more suited to piloting aircraft than to controlling ground combat vehicles. The situations in an air combat battle play to the unique capabilities of an AI in a combat drone: immunity to G-forces and simple solution sets that require fast, precise reactions, with easier target parameters. (Very seldom will you have non-combatants in an air-to-air engagement.)

However, expert systems that assist in ground combat will revolutionize armored warfare: automatic targeting systems that can detect and engage targets, automatic missile defenses, and control of “swarm” vehicles for scouting and counter-scouting missions as well as hunter-killers. It will be a long time, as such things go, before ground combat units are completely independent AIs.
 