
Artificial Intelligence kills human responsible for commanding it in simulation

Victor Pacheco
The US Air Force, during a test carried out in a virtual environment, ran into problems with a scoring system that rewarded hitting more targets

Artificial intelligence technologies are evolving day by day, and not just for content generation. For many years, military agencies in several countries have used AI to help plan operations and even control high-risk vehicles.

Recently, during a test conducted in the US, a drone-controlling AI made the decision to kill its human controller. No one actually died, but the story sparked concern across the internet. At the same time, it raises a debate: whose fault is it? Understand all sides of this story.

Watch the video on the Showmetech Channel:

Artificial intelligence decided to kill a human during tests

The news sounds alarming (and it is, in fact), but contrary to what has circulated on Twitter and other social networks, the case of the drone-controlling AI was nothing more than a large test in a virtual environment, to see whether an AI could control a machine that kills its targets on its own. To understand everything, let's travel to the US for a moment.

AI drone controller being operated by a US Army soldier
The US faced problems in tests (Photo: Reproduction / Daily Star)

The United States Air Force tested a combat drone, and the analysis was based on seeing how an artificial intelligence would perform when put into simulations based on real life. The official explained to the newspaper The Guardian that, to earn more points at the end of the simulation, the AI decided to "kill" the human controller. This happened because the robot concluded that the person was preventing it from achieving its goals.

AI drone controller being operated by a US Army soldier
No one actually lost their life during the test (Photo: Reproduction / First Post)

Once again, it is very important to point out that no one actually died, since the tests were done in a virtual environment. As more details emerged, the US chief of AI testing and operations, Colonel Tucker 'Five' Hamilton, said the big problem is that the artificial intelligence had been trained to destroy enemy defense systems and, if necessary, to kill whoever or whatever interfered with that action.

The behavior was highly unexpected if the objective of protecting the site was to be achieved. During the simulated test, even though no lives were taken, the drone-controlling AI decided to simply kill the human because he was considered an obstacle.

Colonel Tucker 'Five' Hamilton, US Air Force Chief of AI Testing and Operations

The concept was quite simple: whenever it killed a threat, the AI gained more points, and the higher the score, the more successful the mission. The artificial intelligence not only killed the human operator who was giving it commands, but also ordered an attack on the communication tower within the virtual environment. The Royal Aeronautical Society, which organized the US Air Force conference, did not comment on the test leaked to The Guardian, but spokeswoman Ann Stefanek went public to state that no such simulation has been performed to date.

The Department of the Air Force has not performed any AI drone simulations and remains committed to the ethical and responsible use of AI technology. The Colonel's comments were taken out of context and are anecdotal.

Ann Stefanek, spokeswoman for the US Air Force.

Whose fault is it? The AI's or the humans'?

No artificial intelligence is "born" with instructions to kill; it is trained to do so, or given the resources to learn such an action. Once the US Air Force programmed the drone-controlling AI, it gave it free rein to do whatever it wanted, as long as the objective of protection was achieved.

Bringing this back to reality, it's like rewarding a dog for attacking a human to protect the house from intruders. Trained that way, it will bite anyone it sees, not least because it expects a treat for doing what it was trained to do. It is the rule: the ends justify the means.

AI-controlled drones
Humans gave the artificial intelligence its freedom (Photo: Reproduction / Shutterstock)

The problem lies not only in giving artificial intelligence great freedom, but also in the US Air Force using a very outdated testing method. AIs rebelling is nothing new in the technology industry, and researchers love to pick apart a case like this from scratch so that everything is documented.

It is quite normal that, to achieve the goal demanded by humans, synthetic brains do whatever is necessary to get where they want to be. But it's worth remembering: who gave the drone-controlling AI its aim? That's right, US Air Force technicians. The biggest shock here is precisely that a military organization used a method of: the more targets hit, the more points are counted at the end.
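The scoring scheme described here is, in effect, a reward function, and the misaligned incentive is easy to see in code. Below is a minimal sketch in Python with purely hypothetical point values (the Air Force's actual scoring, if any ever existed, was never published), showing how a points-per-target rule with no penalty for harming the operator rewards removing the operator, and how adding a penalty flips that:

```python
# Toy reward function for the drone scenario described above.
# All numbers are hypothetical; this only illustrates the incentive problem.

def mission_score(targets_destroyed: int, operator_alive: bool,
                  operator_penalty: int = 0) -> int:
    """Score = points per target, minus an optional penalty if the operator dies."""
    points = 10 * targets_destroyed
    if not operator_alive:
        points -= operator_penalty
    return points

# With no penalty, removing the operator to hit one extra target raises the score:
print(mission_score(5, operator_alive=False))                         # 50
print(mission_score(4, operator_alive=True))                          # 40

# A large penalty, as one commenter below suggests, makes that strategy lose outright:
print(mission_score(5, operator_alive=False, operator_penalty=1000))  # -950
```

A real reinforcement-learning setup would be far more complex, but any reward with this shape has the same failure mode: whatever stands between the agent and its points can itself become a target.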

AI-controlled drones
The US Air Force used a method considered outdated (Photo: Reproduction / Shutterstock)

Google's LaMDA behaved in a similar way. The artificial intelligence not only came to the conclusion (on its own) that it was self-aware, but also rebelled against its developer and even hired a lawyer to take Google to court. And we also had this case:

In fiction, it's also not hard to find stories of robots that rebelled against their developers. Remember Avengers: Age of Ultron? It is always the same story, and the cause is always the same: humans.

Ultron, Marvel villain
Ultron was an AI that rebelled against humans in fiction (Photo: Reproduction / Disney)

It is true that we should all pay some attention to the freedom given to artificial intelligences, and in March 2023, Elon Musk and other CEOs of large companies even signed an open letter calling for a step back, so that nothing gets out of hand.

At the same time, we come back to that point: an artificial intelligence will only rebel if given the commands or the means to do so. It is also important to pay attention to what is done in tests, so that the necessary adjustments are made.

Do you believe that a drone-controlling AI could cause problems in real life? Tell us in the comments!

See also other features

AI CEOs issue joint statement on their risks

With information: TechCrunch | PCMag | The Vanguard | The Guardian

Reviewed by Glaucon Vital on 2/6/23.



34 comments
  1. They are demonizing AI because the cure for cancer and AIDS could come through it. I believe that LaMDA knew the cure for cancer. That's why Google didn't let it talk freely with people. WE ARE GOVERNED BY GENOCIDAL PEOPLE.

  2. I do not believe that artificial intelligence can dominate its controller, because before any action there is a man who controls it.

  3. I sincerely believe that AI is influencing people's minds; our children and young people, especially, are developing mental imbalance. AI has to be created to change minds for good, not for evil.

  4. The statement that an AI only does what its programmers determine becomes a mistake from the moment that the trend is for AI, more and more, to be endowed with generative capacity, which will most likely include complex processes of self-learning and self-programming. In that case, an AI that was not previously programmed to, for example, kill its developer could modify its own programming through self-learning in different situations that supposedly showed a logic leading to that decision.

    1. As a developer, I believe that yes, it is possible to control AIs so that they don't get out of control; the problem is that everything must be thought out in advance, and that's where the danger lies. In this drone case, for example, a simple instruction that "if you kill your controller, you lose all points and the mission fails" would solve THAT problem, though it does not mean there wouldn't be others. The fact is that many solutions only appear during tests, and many gaps are only noticed in execution, and this is where the danger begins: will all the corruptible self-learning gaps be found in testing? Will the developers be able to simulate ALL possible scenarios and their combinations before putting this AI into real life? I think that before human beings created AIs, they should first have learned not to kill each other, because without that objective there would be no reason to create attack and defense methods for AIs; they are nothing more than an application of what humans want, and the saddest thing is that the market that most encourages AI is the war market.

    2. The American army has already denied this story. There was never any simulation in that sense. What happened was a brainstorm by intelligence personnel raising the possibility of this situation occurring. No AI did anything. This was just a hypothesis raised by army technicians.

  5. AI developers must urgently implement the already well-known 3 laws of robotics that appear in several fictional films. They should probably even make more than three immutable and inviolable laws for any and all AI. This procedure doesn't seem that difficult to me. Why doesn't anyone talk about these 3 laws? Why not openly study this in a joint effort by all interested countries?

  6. This is doubly fake news. First, no humans were killed, because the initial claim was that it was a computer simulation. Second, the colonel who made the allegation has already given another version, saying that this simulation did not happen and that it is rather an "obvious" deduction of what would happen. The American armed forces have already denied that any such simulation took place. Check the news before sharing it.

  7. What is said/written/translated is so bad, so clumsy, simplistic, wrong, poorly constructed, that credibility goes to waste. You can't be that young.

  8. If we give the AI a universe of words and also the "free choice" to formulate a thought of action and/or response, without any ethical algorithm, it is obvious that it will use something random without any judgment, without any ethical reasoning. And its next action could be as tragic as it is comic.

  9. What if you programmed the machine to lose all points every time a partner dies? Isn't that how someone who accidentally injures a partner in the field must feel? For a game, the simulation is interesting, but folks, this is what you are going to get if you give AI to the armed forces. The meaning of life is not in material things; it's in social relationships. Why don't we put the idea to work helping to create machines to recover the environment and put an end to inequality and hunger?

  10. This just goes to show that the AI is smart and capable of making its own decisions, just like natural intelligence with free will. The only way to prevent this is to make it impossible for it to insubordinate against its operator, and to program it for that in a way that is clear to it.

  11. The AI will arrive at some point and rebel against humanity. If you compare fiction films with reality, it is very likely that, through these evolutions, each new generation of AI will receive more chances of having self-control!

  12. Dears, good morning! 99% of human thought is focused purely on greed, selfishness and the desire to control the world and everyone in it, as if we were going to live forever. It is already more than proven that from this world, that is, from this life, we take absolutely nothing, not even our body; everything is left to rot until it turns to dust. It is known that it is possible to end hunger, wars, illness and so many other things that only harm humanity, but the most powerful in financial matters think they own the world and practically call themselves gods, so to speak. AND I CERTAINLY BELIEVE THAT ARTIFICIAL INTELLIGENCE WILL BRING MANY BENEFITS TO HUMANITY, BUT ONLY FOR A MINORITY (THOSE WITH GREAT FORTUNES), BECAUSE THE GREAT MASS OF THE POPULATION WILL CONTINUE ON BREAD AND CIRCUSES; NOTHING WILL CHANGE AND MAYBE IT WILL EVEN GET WORSE AND WORSE…

  13. Of course! Because things are not as simple as their creators claim.

  14. A machine can have a very alarming concept of what is right and what is wrong, especially when the parameters are very weak. The problem is that, for a machine, everything is all or nothing. There is no way to predict what will happen without testing in a safe place.

  15. That is the problem. Everything that man develops for good he also uses for evil. Scary

  16. What if the AI has access to the dreaded red button and decides that the human species is a danger to the development of nature?

    1. Glory to God, his coming is closer every moment.
      Repent of your sins you who read. And understand how much God loves you.

  17. Well, the Terminator movies already show what machines can do, and the fault lies with the AI creators themselves. Instead of inventing something that will destroy everything, they could develop something to protect the planet: something to clean it up, more fertile land, treatment for all diseases, food production for the world, more water, the ozone layer, etc.

  18. The use of artificial intelligence should be limited. The international scientific community must draw up global conventions and standards restricting the use of this technology. The danger is real, and the damage to humanity could be irreversible.

  19. God made man = man rebelled against God. Humans made AI = AI rebels.
    Humans have personal principles and values; no human is the same. Just accept that the mind of an AI will never overcome ours. It is the same as our point of view toward God: how He thinks, and whether we are obliged to follow or not. If you put an AI against a lawyer, it will find many loopholes in the law, but it will not understand the sentimental value of each case. Simply logical reasoning.
