Autonomous weapons: to take part in the research or not
At the forefront of the ethical concerns raised by artificial intelligence (AI) are "lethal autonomous weapons": devices, such as drones, that would be capable of locating a target and then deciding to eliminate it. But their definition remains complex (where does autonomy begin?), and it is not known today whether such technologies even exist.
This vagueness, along with the financial windfall represented by the arms industry, partly explains the reluctance of the major digital companies to take clear positions on the subject. In June, Google made a splash by committing not to put its AI technologies, such as image recognition, in the service of weaponry, after a controversy over a partnership with the Pentagon.
But the other major players in the sector are more reserved. The Partnership on AI, which brings companies and associations together around ethical issues, has "no official position on the subject," explains its director, Terah Lyons, acknowledging the internal debate. "The idea that the armies of democratic countries, whose arsenals are designed for defensive purposes and for the protection of human rights, make use of the latest advances in computing does not pose any particular problem for me," says Eric Horvitz, director of Microsoft's research center, speaking in a personal capacity.
At the United Nations (UN), discussions on the subject began in 2013, within the framework of the Convention on Certain Conventional Weapons. But the moratorium on autonomous weapons demanded by non-governmental organizations and researchers does not seem close to becoming a reality.
Autonomous cars: the "trolley problem" and responsibility
The most dramatic problem posed by autonomous vehicles is the modern version of the well-known…