In the United States, Marines set out to trick the visual recognition system of a Darpa robot dog. They managed to approach it undetected by hiding in cardboard boxes or behind a tree branch.
There are known techniques for fooling the visual and facial recognition systems that AIs apply to surveillance camera footage. A simple, rather ugly pullover, for instance, can render its wearer invisible to them, as a team of researchers from the University of Maryland demonstrated in the United States. The concern is that the same weaknesses that allow these AIs to be fooled are also present in the versions intended for the military.
So, as the question of deploying so-called autonomous killer robots arises, their ability to pull a trigger or set off an explosive is cause for concern if their AI can be deceived by what it sees or fails to see. In any case, this is what emerged during tests of Darpa robot dogs by a team of Marines in the United States, who were able to probe and confirm the robots' weaknesses when it comes to identifying human beings in their field of vision.
In his recently published book, Four Battlegrounds: Power in the Age of Artificial Intelligence, Paul Scharre, a former Pentagon analyst and military veteran, explains how the experiments unfolded. Darpa had entrusted the robots to the Marines for six days so that they could train them and improve their AI. But the Marines also looked for flaws in these robots, which are supposed to be able to automatically identify human beings.
This journalist, who covers defense issues for The Economist, shares excerpts from Paul Scharre's book discussing the experiments on the AI of Darpa's robots. © Twitter, Shashank Joshi
A military AI fooled by gymnasts
To do so, during their trials, eight Marines placed one of these robot dogs in the center of a roundabout so that it could survey its surroundings. Their goal was then to find ways to approach and touch it without being detected. A kind of game of tag, in other words, but with a robot.
The soldiers showed imagination: two of them reached the robot by somersaulting toward it over a distance of 300 meters. Two others hid inside cardboard boxes to creep up on it. Finally, another hid behind a fir branch and managed to reach the robot by passing himself off as a shrub. In every case, the powerful military-grade algorithm did not react to these objects approaching it. And for good reason: the AI had been trained to detect walking humans, not somersaulting humans or moving cardboard boxes.
The book does not say when these experiments took place. This kind of flaw has probably been corrected since, but the fact remains that an AI can only do what it has been taught to do. That is how, despite its remarkable rhetorical skills, an AI like the popular ChatGPT can end up asserting anything with force and conviction. As for the Darpa robot, if its AI can be fooled by gymnasts, trusting it on field missions is probably still a long way off. And yet autonomous robots with lethal capability, such as drones, are already being used by some countries, which makes the ethical question of their use all the more pressing.