Interview with professor Noel Sharkey, chair of the International Committee for Robot Arms Control
Noel Sharkey is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics, and chair of the International Committee for Robot Arms Control (ICRAC). Noel has worked in AI, robotics, machine learning and related disciplines for more than four decades. He has held research and teaching positions in the US (Yale and Stanford) and the UK (Essex...
At the strategic level, war will mostly be waged in cyberspace. Tactically, we will witness the widespread use of autonomous weapons systems
This study presents the results of an analysis of future warfare. As the paper states, cyber warfare will be waged at the strategic level. The operational level will be characterized by the use of long-range precision weapons against economic infrastructure, and the tactical level by the massive use of autonomous ground, air and sea weapons...
Artificial intelligence in military affairs
Earlier this year, the author had an opportunity to participate in a workshop held under the auspices of SIPRI and the Pathfinder Foundation on the introduction of machine learning and autonomy into nuclear force-related systems. The interaction of new technologies (including artificial intelligence in the broadest sense of the word) with the means of preventing global conflict (as well as of ensuring Armageddon, if necessary) is one of the most...
Working Paper No. 44 / 2018
The Working Paper focuses on the possible impacts of related technologies, such as machine learning and autonomous vehicles, on international relations and society. The authors also examine the ethical and legal aspects of the use of AI technologies. The present Working Paper of the Russian International Affairs Council (RIAC) includes analytical materials prepared by experts in the fields of artificial intelligence, machine learning and autonomous systems, as well as by lawyers...
Some experts believe that maintaining strategic stability in the coming decades will require a revision of the foundations of the deterrence theory in the multipolar world
The use of autonomous technologies, artificial intelligence and machine learning in the military sphere gives rise to new threats, and it is crucial that we identify them in time.
Over the last decade, the development of technologies that can provide conventional weapons with unique capabilities typical of “killer robots”...
... Learning and Autonomy on Strategic Stability and Nuclear Risk.
Experts from Russia, China, the United States, France, Britain, Japan, South Korea, India, and Pakistan attended the event to discuss the possible impact of machine learning technologies, autonomous systems and artificial intelligence on the development of weapons and the possibility of their use in conflicts.
As a result of the conference, joint recommendations were developed to reduce the risk of escalation between nuclear ...
... interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).
***
There are no fundamental reasons why autonomous systems should not be legally liable for their actions.
The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Therefore,...