Drones killing humans is a scenario we have been conditioned to think will never happen. We assume that every robot will have a built-in safety device that prohibits it from harming humans. Unfortunately, that idea comes from Hollywood, not from reality.
Imagine a scenario that goes as follows: Allied troops have just wrapped up a field exercise in Poland, showing solidarity with the Polish people and authorities against growing Russian aggression. The soldiers are resting against their transports, even though they know the opposing Russian forces are only a few miles across the border. They gradually become aware of a faint buzzing and notice a dark cloud crossing the horizon. Suddenly a solitary scout drone swoops low over the Allied convoy. It evidently sees the troops and reports their presence, because the distant cloud immediately stops moving and turns toward them.
With astonishing speed the “cloud” of autonomous drones closes with its target. Each drone has been programmed to select an individual person, and the group programming ensures no two drones target the same soldier. The drones arm themselves, and quickly annihilate the soldiers before anyone has time to react. All this without a single input message from the drone controllers. Indeed, they have no controllers, just programmers.
Science fiction, Hollywood fiction? No, it’s a reality that’s already happened.
Sometime around March 2020, during the Second Libyan Civil War, forces of the interim Libyan government attacked the rival Haftar Affiliated Forces (HAF) with Turkish-made autonomous drones. A UN report found that the autonomous drones were programmed to attack targets “without requiring data connectivity with the operator.” That means the drones located and attacked the HAF forces independently of any pilot or control mechanism.
Are you scared already? You should be!
The Turkish defence contractor that makes these drones, STM, explicitly markets its Kargu-2 as capable of carrying out autonomous attacks.
Here’s how it works. The drone operator loads a set of target coordinates into the Kargu-2’s software. The drone then takes off, and travels to the coordinates, searching for objects on the ground that fit the profile of preferred targets. Once the drone identifies a target, it swoops down at high speed and detonates an onboard explosive package.
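The workflow described above can be reduced to a simple control loop: navigate to preloaded coordinates, search for objects matching a target profile, and attack without any further operator input. The sketch below is purely conceptual; every name, threshold, and matching rule is invented for illustration and bears no relation to STM's actual software. Its only point is to show that nothing in the loop waits for a human decision.

```python
# Conceptual sketch of a fully autonomous engagement loop.
# All names and logic here are hypothetical illustrations, not real drone code.

from dataclasses import dataclass

@dataclass
class Contact:
    position: tuple       # (lat, lon) of a detected ground object
    profile_score: float  # 0..1, how closely it matches the preloaded profile

def autonomous_mission(waypoint, contacts, threshold=0.9):
    """Fly to `waypoint`, then engage the best-matching contact, if any.

    Note there is no operator input anywhere in this loop: once launched,
    every decision, including the decision to strike, is made on board.
    """
    log = [f"navigate to {waypoint}"]               # 1. travel to loaded coordinates
    candidates = [c for c in contacts               # 2. search for objects that fit
                  if c.profile_score >= threshold]  #    the preferred-target profile
    if candidates:
        target = max(candidates, key=lambda c: c.profile_score)
        log.append(f"dive on {target.position}")    # 3. terminal attack, no human check
    else:
        log.append("loiter / return")
    return log

print(autonomous_mission((52.1, 23.5),
                         [Contact((52.1, 23.5), 0.95),
                          Contact((52.2, 23.4), 0.40)]))
```

Seen this way, the "operator" exists only before launch, when the coordinates and profile are loaded; afterwards the machine is the sole decision-maker.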
The drone looks like any other drone, and piecing together the software from the remnants of an exploded unit would be impossible, so identifying it afterwards would be unlikely, and assigning legal culpability nearly impossible.
The U.S., and probably many other nations, have similar drone programs. However, they mostly claim that the ultimate order to attack in their systems remains with a human operator. We are all familiar with the U.S. Predator strikes in Iraq and Afghanistan.
One: do you believe that? Two: it does nothing to prevent less-ethical regimes from ignoring the minor detail of ultimate human control.
As the article I read about this concluded: “Some events in the history of mankind, like the 1945 atomic bomb test at the Alamogordo Bombing Range in New Mexico, are so profound, they serve as a divider between one social, economic, or military era and another. The events in Libya last year may similarly divide the time when humans had full control over their weapons and a time when machines make their own decisions to kill.”
Food for thought and contemplation, but, most of all, for definitive action to stop this, if that is even possible now that the cat is out of the bag. Autonomous drones that kill humans are here!