The Journal of Philosophy, Science & Law
Volume 13, May 24, 2013, pages 1-19
Advanced Robotics: Changing the Nature of War and Thresholds
and Tolerance for Conflict - Implications for Research and Policy
Daniel Howlader and James Giordano
Introduction: From Fiction to Fact
The idea of robotic warriors is not new; depictions in popular media, such as the Cylons of the Battlestar Galactica television series and Arnold Schwarzenegger’s iconic character from the Terminator films, have kept discussions largely fictional, and not oriented toward the possible realities that might necessitate meaningful discourse on scientific, technological, ethical, and policy issues. However, the construction and iterative design of evermore-advanced robotics for both military and non-military purposes underscores the need for such discourse to address the realistic potential and problems of current and near-future technology. For example, the introduction and growing use of unmanned aerial vehicles (UAVs) in both reconnaissance and combat scenarios over the last decade has prompted deeper speculation, and more probing questions, about how such advanced technologies, including robots, could and will affect the nature of twenty-first century warfare. In light of these developments and possibilities, this essay will examine the concept of advanced robotics and how the introduction and widespread adoption of robotic technologies by military forces, specifically those of the United States, may affect both the conduct of warfare and the tolerance and threshold for engaging in conflict.
Herein, we provide a brief discussion about distinctions between current science and technology (S/T; e.g. drones or unmanned systems), and future robotics, including a differentiation of the levels of agency that may be applied to, and obtained by military robots. From this, we address the current uses of drones as an exemplar of possible ways that more advanced robotics could be used in the near future. Various scenarios that depict the engagement of decisionally capable military robotics will be presented, and we conclude with a set of recommendations for forecasting, guidelines, and policies to direct and govern the use of military robotics in particular circumstances.
Important Operational Distinctions
This work focuses upon the ethical and policy issues associated with the development and use of robotics in future conflicts, how such advances in robotics technology may affect thresholds and tolerances that nation-states have for warfare, and how these factors might influence the scope and conduct of war. It is not, however, a treatise on the technical specifications or capabilities of robotics. Yet, given that any ethical address must begin from an assessment and appreciation of facts, and that forecasting fact-based implications is important to informing ethical decisions as well as guidelines and policies, we believe that it is therefore necessary to elucidate differences between the types, capabilities, and limitations of hardware that are currently utilized by military forces, and the types and abilities of technology currently in development and/or planned for military use in the coming years.
Despite an increased use of UAVs and related technology (primarily by the United States and its allies) in recent military operations, there remains some confusion about the terms drones, unmanned aerial vehicles, and robots. These terms are frequently - and incorrectly - used interchangeably, with many views mistaking the UAVs used in warfare for the technical, or even moral and ethical, equivalent of fully autonomous bombers or air-superiority fighter aircraft. Such misconceptions incur misapprehension, miscomprehension, and we posit, mishandling of the issues arising in and from the use of military robotics. While all of the aforementioned technologies represent “military machines” in the strict sense, similarities are mostly semantic, and it is important to differentiate between military drones and other, more advanced forms of robotics in specific contexts, as these distinctions are fundamental to authentically addressing both the technical and ethical issues arising therein.
Machines with Decisional Capacity
Thus, the remainder of this essay will focus upon the use of robots that manifest increasing capacity for complex decision-making (and in this way, perhaps some form of proto-moral agency), and how their use could alter the thresholds and tolerances for engagement and the conduct of future warfare. Complex decisional capacity is defined by the machine’s ability to parse situational variables and extant dispositions toward discriminating and executing a choice between two or more circumstantial actions and the outcomes produced. Important to this is that any such decisional action by the machine should be independent of direct human intervention or control. Within the context of military engagements, a machine with decisional capacity would be sufficiently advanced so as to distinguish between hostile and neutral or “friendly” vehicles (e.g., a warship vs. a non-combatant cruise ship), and between agitated (but otherwise non-threatening and non-violent) protestors and a “determined enemy.” Once a machine with decisional capacity positively identifies a “determined enemy” as a threat, it could engage with an appropriate level of force to neutralize the target, and would execute such actions independently of any local or remote human operator.
Perspectives on the extent and type of decisional capacity or agency manifested by machines vary; for example, on one hand there are binary distinctions that establish capacity for either completely amoral or moral entities, while on the other, there are continuous, graded scales of agentic capacity that reflect more sophisticated initial programming. Peter Asaro defines four levels of robotic agency. At the first level are “robots with moral significance” that are able to make basic decisions with rudimentary moral consequences that could otherwise be left to chance. Second-tier iterations are machines with moral intelligence, defined by their ability to assess outcomes based upon predefined moral or ethical principles, while third-tier iterations possess a dynamic moral intelligence that requires advanced initial programming, but then acquires the ability to define its own moral code based upon prior experiences. Asaro’s final tier of the continuum - machines with full agency - would require some form of consciousness, or at least a capacity to sense “harm” or physical threat, if not to appreciate - and be averse to - the potential for their own destruction.
Asaro’s continuum of robotic agency can be applied to categorize the decisional capability of military hardware. Unmanned aerial vehicles or drones can be regarded as amoral machines that function without any decisional capacity and are fully controlled by human operators; therefore, all decisions relevant to the moral consequences of the machine’s actions are made by the human operator (not the machine). A machine that is able to select targets autonomously, such as a satellite linked to ordnance launchers, is an example of the first stage of agency: the satellite system can independently select appropriate targets, but requires human input to launch ordnance at the target. While such actions involve more than a mere decision of chance (i.e., as representative of Asaro’s first stage of agency), these activities still fall short of assessing outcomes based upon the predefined proto-moral or ethical principles of Asaro’s second stage.
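The mapping just described can be summarized as a simple lookup. The sketch below is purely illustrative: the level descriptions paraphrase the discussion above, and the numbering (with level 0 for amoral, fully human-operated machines below the continuum proper) is our own convention, not Asaro's.

```python
# Illustrative summary of Asaro's continuum of robotic agency as applied in
# the text. Level 0 (amoral machines) is our own addition for current drones,
# which sit below the continuum; levels 1-4 paraphrase the stages discussed.

AGENCY_LEVELS = {
    0: "amoral machine: fully human-operated (e.g., current UAVs/drones)",
    1: "moral significance: selects targets, but a human must authorize action",
    2: "moral intelligence: assesses outcomes against predefined moral principles",
    3: "dynamic moral intelligence: derives its own moral code from experience",
    4: "full agency: some form of consciousness; a true 'machine-person'",
}

def describe_agency(level: int) -> str:
    """Return the (paraphrased) description for a given agency level."""
    if level not in AGENCY_LEVELS:
        raise ValueError(f"no such agency level: {level}")
    return AGENCY_LEVELS[level]

# A satellite system that selects targets but needs human input to fire:
print(describe_agency(1))
```

The point of the lookup is only to make the categorization explicit: the decisive variable is where decisional authority resides, not what the hardware looks like.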
Machines with advanced initial programming (Asaro’s third stage) that are subsequently able to establish independent criteria for moral decision-making, or machines that obtain some form of consciousness (Asaro’s fourth stage), do not easily conform to, or describe, existing military hardware, as current technologies do not yet manifest such characteristics. Potential examples of these future-generation machines might include robotic infantry or an independent (non-human-operated) vehicle that is able to discern friendly from hostile units via a training process similar to that provided to human soldiers, while a more advanced model – a true “machine-person” (a machine that obtains qualities of personhood, as described, for example, by White) – might represent the fourth and final tier in Asaro’s categorization.
We posit that the use of evermore highly complex and competent, autonomous machines will affect the ways that wars are conducted, viewed, and tolerated. To be sure, machines at Asaro’s fourth tier of competence evoke a host of provocative ethical questions about the autonomy of humans and machines, the ways that human society might morally regard and treat conscious machines, and human responsibilities for how such machines might, and should, be programmed and evolve. Yet, even current iterations of unmanned weapons give rise to ethico-legal questions about the ways that science and technology – and particularly computational science and neurobiotechnology – are studied, and used in the national defense and security agenda. A detailed examination of these ethico-legal issues exceeds the scope of the present essay.
Drones or Unmanned Systems
The majority of the United States’ current arsenal of military robots depends largely on remote split operations. These unmanned systems are based in theater and fly autonomously, but rely upon command and control instructions from off-site human operators to make lethal targeting decisions. Frequently referred to as “drones,” these unmanned systems operate on land, in the air, and at sea (in both surface and underwater environments); such vehicles limit human exposure to mundane or dangerous missions, while facilitating reconnaissance and surveillance capabilities important to intelligence collection. A universal definition for “unmanned vehicle” is lacking, although the concept of an unmanned technology has come to imply “automatic control.” Many contemporary unmanned systems are aerial vehicles, although ground robots such as the PackBot and TALON SWORDS, and marine robots like Boeing’s Long-term Mine Reconnaissance System, are also employed. While interest in robots has increased in recent years, it could be argued that the use of weaponized technology reflects a desire to reduce possible harms incurred by human combatants by keeping them distant from the enemy and the proximate threats of the battlefield.
History reveals longstanding efforts to employ science and technology to achieve such ends, and provides support for this concept. The development of unmanned vehicles for use in battle began in earnest during the American Civil War, when balloons were used to deliver payloads of explosives. Subsequently, Nikola Tesla’s wirelessly controlled motorboats of 1915 and Elmer Sperry’s gyroscopically-guided aircraft formalized the concept of UAVs that could be employed on the battlefield. Although these devices were never operationally employed, several unmanned weapons systems were used during the First World War. The Second World War provided a stage for the advancement of additional unmanned systems, most notably the German FX-1400, an aircraft-launched and remotely-controlled flying bomb (i.e., essentially the first split-drone) that delivered a 3,000-pound payload and was operationally engaged to attack and sink the Italian battleship Roma. Drones were also critical to Operation Aphrodite, during which American crews abandoned Boeing B-17s loaded with explosives that were then remotely flown into enemy targets. The Cold War ushered in a new era of strategic goals and objectives. Driven by the downing of a U-2 reconnaissance aircraft in Soviet airspace in 1960, the US military more assertively advocated the development and use of unmanned surveillance vehicles. Cold War-vintage UAVs were remotely controlled, capable of operating at high altitudes and supersonic speeds, and perhaps more importantly, could endure long missions with heavy payloads. Following the Cold War, however, strategic and tactical objectives shifted toward acquiring real-time images and target identification with onboard flight control.
Current iterations of UAVs were utilized during the first Gulf War, when real-time combat reconnaissance was provided by the RQ-2 Pioneer, an unmanned, multi-altitude vehicle that was jointly developed with Israel (based upon that country’s similar designs of UAVs that were used to successfully conduct surveillance missions in Lebanon). UAVs were also used in Kosovo, Iraq, and Afghanistan with measurable success. For example, in Afghanistan, both the MQ-1 Predator and RQ-4 Global Hawk conducted armed surveillance, although the Global Hawk was designed with capabilities that were more autonomous. While previous generations of drones enabled efficient surveillance and precision strike missions, these systems were predominantly weapons vehicles that provided a means to deliver ordnance, but their limited autonomy required humans to remain in the control loop. In contrast, contemporary drones such as the Predator and Global Hawk function with increasing control autonomy, and there is continued military demand for completely autonomous robotic devices that will be able to reduce the personnel, and the resultant costs, required for mission effectiveness and operational deployments. These autonomous robots would be self-directing, goal-oriented, and environmentally adaptive, and are proposed to reduce the time required for decision-making in “sense-and-trigger” scenarios.
Drones and Lessons for Robotic Warfare
The United States’ use of UAVs for military and military-supported counter-terrorism efforts has increased notably over the past decade, perhaps most significantly during the last four to five years. For instance, US drone strikes in Pakistan have increased from only nine strikes between 2004 and 2007 to 34 and 53 in 2008 and 2009, respectively, with 18 strikes in Pakistan registered as of March 2010. Vincent Bataoel suggests that the numbers are even higher – with an average of 90 strikes per year since the beginning of 2009; using these numbers, Bataoel posits a 1700% increase in UAV use over the past decade. While it could be argued that this marked increase in UAV use reflects the United States’ increased focus on conflict in Pakistan, it is unlikely that such reasons solely account for these numbers. Former Central Intelligence Agency director and current Defense Secretary Leon Panetta has commented on the effectiveness of UAVs – both in terms of offensive value and in the ability to keep US assets relatively free from harm. Thus, the future use of drones or UAVs is likely to increase, as a function of a generally positive opinion of effectiveness, improved technology, and lowered costs of both production and employment. This pattern of technological improvement and cost reductions is likely to contribute to the pace and extent of robotics’ development and use in warfare.
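A figure such as Bataoel's 1700% is simple percent-change arithmetic. The baseline of roughly five strikes per year below is our assumption, chosen only to show how such a figure might be derived; Bataoel's own baseline may differ.

```python
def percent_increase(old: float, new: float) -> float:
    """Percent change from an old rate to a new rate."""
    return (new - old) / old * 100.0

# Assumed baseline of ~5 strikes/year early in the decade, versus Bataoel's
# reported average of ~90 strikes/year since 2009 (hedged; see text).
print(percent_increase(5, 90))  # -> 1700.0
```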
Given the popular images of military robotics as legions of humanoid machine-soldiers and of UAVs as squadrons of insistently homing cybernetic aircraft, there is a possibility that perceived differences between current military hardware and what might constitute future military technologies are largely based upon appearance. Under this assumption, military robotics need only look somewhat like human soldiers to be differentiated from current UAVs. Of course, this is incorrect, as the fundamental differences lie in the sophistication of the technology and the programming of the machine – specifically, decisional capacity, as previously discussed. The anthropomorphization (that is, machines that have human-looking physical features, such as arms and legs, and/or which stand in an erect, hominid-like posture) of military machines is inconsequential to whether or not the hardware is defined as a drone or as something more advanced. In reality, a remotely controlled anthropomorphized infantry unit and a UAV are similar in that the agency for any moral decisions (or any decisions, for that matter) rests in the hands of a remote human operator and not the machine. Conversely, a UAV or ground-based machine that is able to independently find, identify, and attack targets without human intervention, due to its initial programming and subsequent creation of an internal decisional (i.e., moral) code, represents a far more advanced military robot. What is notable is that the trajectory for military robotics is oriented toward increasingly more complex machines that obtain a wider and more specific array of decisional capabilities – and agency.
Changing the Tolerance and Threshold for Warfare
Military robotics with advanced decisional capacity may reduce the tolerance that national governments (and the societies they represent) have for conflict and war for a number of reasons, including strategic and/or tactical concerns related to the battlefield effectiveness and utility of the machines in warfare, and the perceived risk of human casualties inflicted by machines. As well, financial disincentives, reflecting the high initial costs of new robotic technologies, may lower the tolerance for robotic warfare. In these scenarios, a society’s tolerance for conflict is a function of perceived necessity, given an expected value of the gains from success, the losses from failure, and a calculus of material and human costs from either outcome.
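The calculus just described can be made explicit as an expected-value sketch. The function and every parameter below are hypothetical constructs of ours, intended only to show the trade-off in a compact form, not to model any actual decision process.

```python
def conflict_tolerance(p_success: float,
                       gain_success: float,
                       loss_failure: float,
                       material_cost: float,
                       human_cost: float,
                       human_cost_weight: float = 1.0) -> float:
    """Hypothetical net expected value of engaging in a conflict.

    A society's tolerance rises with this value: the probability-weighted
    gains from success, minus the expected losses from failure, minus
    material costs and (weighted) human costs.
    """
    expected_outcome = p_success * gain_success - (1 - p_success) * loss_failure
    return expected_outcome - material_cost - human_cost_weight * human_cost

# Substituting robots for soldiers lowers human_cost but raises material_cost;
# whether tolerance rises depends on the relative magnitudes (values invented).
with_humans = conflict_tolerance(0.6, 100, 50, material_cost=10, human_cost=30)
with_robots = conflict_tolerance(0.6, 100, 50, material_cost=25, human_cost=5)
print(with_humans, with_robots)
```

On these invented numbers the robotic option yields the higher net value, illustrating the essay's point that the substitution can raise tolerance for conflict even when the hardware itself is more expensive.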
It is likely that nation-states would readily engage in conflicts in which only advanced robotic military equipment, or even less advanced drones, were used, as such equipment would be relatively replaceable over the long term and would therefore incur little (direct) cost to the well-being of the combatants or populations involved. It could be argued that the zero human cost of robotic tanks, aircraft, or infantry would lower the overall expense of war and thereby promote conflict, and initially, it may. However, once robotic targets have been engaged and destroyed, direct robot-to-robot combat will become disadvantageous, and it is probable that attempts will be made to then target more tangible or advantageous (human) assets, including populated command, control, and domiciliary centers – if only to exact a higher (and ultimately more human) cost from the enemy. In this case, the effect on thresholds for warfare over the long term could be ambiguous, as the introduction of robots onto battlefields may not significantly alter the calculus of gain or loss, as human assets (as well as physical assets such as supply lines or fixed structures) will still be primary targets. Robotic hardware may make the targeting of these enemy assets more effective or efficient – but will not dissuade nation-states from engaging in warfare to meet political, economic, social, or territorial goals.
One must assume that the citizens of any country that has developed such advanced robotics would be aware of the capabilities of the new machines as well as their purpose - to reduce the need for human beings on the battlefield, at least to some extent. Posed somewhat differently, advanced robotic military hardware must be relatively common knowledge amongst the population, and not confined to military and scientific hierarchies. With this knowledge, the population would very quickly lose its tolerance for human battlefield injuries or fatalities, given the perceived alternatives to human injury or death in the form of robotics. The true availability, viability, or cost of these new military technologies would likely be largely immaterial to the population’s reaction to human versus robotic engagements – as the public reaction to loss tends to be emotional rather than technical or financial (at least initially). Hence, the introduction of a viable robotic alternative to human casualties, in any form, would tend to drastically reduce a population’s tolerance for human harm.
Peter Asaro’s continuum of robotic agency provides cogent examples of possible outcomes with respect to technological advances’ effects on states’ and societies’ view of, and tolerance for, warfare. According to Asaro, as development continues and robotic agency moves through various levels, the public’s tolerance for human casualties tends to fall at the same rate. This would also tend to decrease societies’ overall tolerance for warfare (given that human casualties are viewed as concomitant). That is to say, if the public perceives (rightly or wrongly) that policy makers are not substituting robotic destruction for human casualties, tolerance for warfare could quickly be lost. However, it may also be that the technical realities of robotic advancement (say, from Asaro’s first to second, or second to third stages of agency) may be immaterial to the public, and so public perception of any type or level of robotic agency capable of diminishing human casualties – irrespective of whether the use of robots actually decreases human losses – could actually increase societies’ tolerance for warfare.
John S. Canning, an engineer with the United States Naval Surface Warfare Center, has proposed a model for the behavior of autonomous systems that would affect human casualties during war and their relationship to a population’s tolerance for protracted or limited conflict. Canning’s paradigm establishes that robotic systems should be programmed to neutralize other machines (standard military hardware or more advanced autonomous systems), while neutralization of human targets must always require a human operator – or at least, explicit authorization from a human. For example, Canning’s model would not require robotic hardware to receive authorization to neutralize unmanned weapons emplacements (assuming that such actions did not incur human casualties), but would require operator authorization to neutralize a manned enemy tank. Canning allows that “machines target other machines” while “men target men.” If adopted by all parties possessing advanced military robotics, this could likely lower the thresholds (and increase tolerance) for conflict, as the human costs would tend to be lower (than what presently occurs). In this light, the adoption of Canning’s model for all military robotics could easily lead to an overall rise in the number, though perhaps lesser intensity, of conflicts.
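Canning's "machines target other machines, men target men" rule reduces to a simple authorization check. The function and its inputs below are our illustrative encoding of the rule as summarized above, not Canning's own specification:

```python
def may_engage_autonomously(target_is_machine: bool,
                            humans_at_risk: bool) -> bool:
    """Hypothetical encoding of Canning's rule: a robotic system may engage
    a target on its own authority only if the target is a machine AND no
    humans would be harmed; any engagement involving humans requires
    explicit authorization from a human operator."""
    return target_is_machine and not humans_at_risk

# Unmanned weapons emplacement, no human casualties expected: autonomous OK.
print(may_engage_autonomously(True, False))   # True
# Manned enemy tank: a human operator must authorize the engagement.
print(may_engage_autonomously(True, True))    # False
```

The sketch makes visible why the rule would tend to lower human costs: every branch that could touch a human life routes back to a human decision-maker.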
Nascent technologies are costly, and this is true of advanced weaponry in any form – especially robotics with the level of hardware and programming necessary to generate and sustain some form of cogent decisional (and perhaps ethical) constructs that could be used in war. While it is true that over time the marginal cost of all technologies tends to fall, and this will likely occur with military robotics as well, there is an indeterminate period during which marginal costs remain high so as to recoup the high fixed expenses of scientific and technological research and development. High initial marginal costs (i.e., prior to the time when research and development (R/D) and production costs, and economic gains achieved through product sales and use, reach some point of equilibrium) should tend to lower the tolerance, and concomitantly raise the thresholds, that certain nation states have for engaging in conflict. Moreover, the introduction of advanced robotics into military arsenals could alter the calculus of gains and losses in specific battlefield situations. For example, the high cost of producing and acquiring military robotics might dissuade military commanders from using such resources and assets in certain circumstances, and thus might alter strategic and tactical decisions and actions, and contribute, at least in part, to new variables in determining what constitutes acceptable loss(es).
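The "indeterminate period" of high marginal cost can be made concrete with a break-even sketch. All figures and the linear pricing model below are hypothetical assumptions of ours, used only to illustrate the equilibrium point at which cumulative margin recoups fixed R/D outlays:

```python
import math

def units_to_recoup(fixed_rd_cost: float,
                    unit_price: float,
                    unit_marginal_cost: float) -> int:
    """Units that must be produced and sold/fielded before cumulative
    per-unit margin covers the fixed R/D outlay (hypothetical linear model)."""
    margin = unit_price - unit_marginal_cost
    if margin <= 0:
        raise ValueError("price must exceed marginal cost to recoup R/D")
    return math.ceil(fixed_rd_cost / margin)

# Invented figures: $2B R/D cost, $25M price and $15M marginal cost per unit.
print(units_to_recoup(2_000e6, 25e6, 15e6))  # -> 200 units
```

Until that equilibrium quantity is reached, each fielded (and potentially lost) unit carries a disproportionate share of the development cost, which is the mechanism by which high initial marginal costs raise thresholds for committing such hardware to combat.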
Dissuasion will also occur at a macro level, although arguments for cost prohibition are somewhat less cogent than on a micro scale. It may be that if states are willing to go to war, the high cost of using - and losing (due to operational destruction) - robotic hardware might not necessarily dissuade heads of state from warfare, but could, however, affect the marginal thresholds for warfare. In the micro-tactical perspective, field commanders may view their limited supply of advanced robotic hardware as having a high cost, and might therefore choose to utilize the hardware sparingly. This effect would be magnified in the initial stages of conflict where replacement hardware may be more costly, or potentially unavailable. The cost-prohibitive nature of nascent robotic hardware could in fact be expensive enough to initiate non-hostile means of conflict resolution.
As well, the introduction of military hardware with decisional capacity may change the tolerance for conflict in ways similar to the threat of using nuclear weapons during the Cold War. In this scenario, one state may be able to deter or compel behavior through the possession of advanced military hardware. Deterrence attempts to prevent behavior by leveraging the high costs of an action against its relative gains. Compelling actions “employ threats to elicit behavior that would not otherwise be forthcoming.” The introduction of advanced military robotics onto the landscape of international relations may empower those states that possess such hardware to deter or compel would-be aggressors into alternative, non-military courses of settling differences, particularly when such technology is limited, and not widely available.
However, as previously alluded to, there is the very real possibility that widespread dissemination of robotic military hardware will lower nation states’ threshold for warfare. So-called disposable soldiers, in the form of robotic infantry that eliminate the need for human infantry (and the injuries and death that accompany infantry operations), constitute a rather appealing prospect (ceteris paribus, non-human losses being preferable to human losses). Yet, we opine that this prospect may actually lead to increased conflict between nation-states. This becomes more apparent as advanced scientific and technological hardware becomes less costly, and the replacement of these machines, due to operational losses, becomes more fiscally palatable.
One counter to the argument for robotics increasing the tolerance, and decreasing the threshold for warfare asserts that the changing constructs and character of conflict (i.e., from Cold War-era large formations, to the more individual nature of current hybrid engagements) will make conflict more “personal” and thereby lower tolerance and elevate thresholds for war. This line of reasoning also claims that rather than encouraging conflict, the use of more advanced robots with increasing levels of decisional capacity will enable militaries to both conduct more specifically targeted missions and reduce the need, demand, and impetus to conduct any large-scale conflict.
Herein, there is a presumption that the model of conflict employed by western nations (especially the United States) since the beginning of the millennium will remain the norm for the next several decades. Namely, that military activities will remain focused upon counter-terrorism and counter-insurgency, with a relatively decentralized enemy, as opposed to conventional militaries, which are highly centralized. While this assumption may seem cogent in the short term (e.g., recent conflicts in the Afghanistan-Pakistan region that have operated according to such tactical and strategic frameworks), it may not be appropriate for long-term predictions about the nature of warfare. Given the shifting architectonics of science and technology, socio-economics, and changes in geo-politics, it is possible, if not probable that very differently scaled conflict(s) will be enacted using the most contemporary techniques and technologies.
In sum, we believe that the introduction of advanced robotics into future military operations will change the nature of warfare, and may impart effects upon tolerances and thresholds based upon the relative costs that nations (and populations) assign to human and technological resources. The most likely scenario will be one of alternating increases and decreases in the thresholds for engaging in warfare, dictated by technological advances in robotics hardware and changing costs of robotics’ production. For example, the introduction of robotic replacements for human soldiers, if coupled to a decrease in a population’s tolerance for human casualties (as outlined above), would raise the threshold to engage in warfare. This could be due to 1) the prohibitive cost of production of the new technology, 2) the cost of operational losses of these new machines, and 3) the cost of human lives when robotic resources were exhausted or depleted.
Over time, however, decreases in production costs might make operational losses of decisionally capable robots more economically tolerable, and thresholds for engaging in conflict will fall, as acceptance of losses of such technology (versus production, and potential human-life costs) rises. As well, it may be that certain nation states might place a relatively higher value upon technological resources than human assets (due in part to factors such as large national population and ideological constructs). In such cases, the threshold for warfare in which there is a greater potential for technological (vs. human) loss would be elevated, and tolerance for such loss would be low. What is more difficult to predict is how a potential for conflict that pitted humans directly against decisionally-capable robotic machines would affect nation states’ thresholds and tolerances for warfare, as differing values would contribute to distinct perspectives, intentions, and actions relative to the probabilities of human and technological losses incurred.
To reiterate, we believe that advances in military robotics will alter the way countries fight wars – whether this change will be limited to the types of conflicts engaged, will entail revisions in the upper and lower boundaries for tolerating conflict, or will involve some combination of both remains to be determined. Despite these predictive vagaries, it is important to attempt to anticipate how such changes in socio-economic postures toward warfare can - and should - be leveraged to affect the development of the types of advanced military hardware that will ultimately lead to forms of robotic aircraft, land craft, and naval vessels. Clearly, the role and value of UAVs to the United States’ defense tactics and foreign policy is evident, and we posit that this portends the relative perceived value of next-generation robotic military hardware. The suggestion that the United States’ development of advanced robotic militaries (in any form) would perpetuate warfare on all scales, and advocacy for the purely peaceful use of advanced robotics and artificial intelligence, ignore the historicity and realities of employing S/T in warfare. Moreover, it is probable that other countries will pursue robotic S/T. These factors, when taken together, might place the United States and its allies at a considerable tactical, if not strategic, disadvantage.
Scientific Progress, Future Forecasting and Paths Forward
Countering such contingencies compels a stance of S/T preparation through ongoing research, development, testing, and evaluation, together with ethico-legal, social, and economic prudence in any and all efforts dedicated to these goals; such prudence may be gained through fine-grained inter-disciplinary discourse and the articulation of pragmatically well-informed guidelines and policies for the research, development, and employment of state-of-the-art S/T, including advanced robotics. We assert that in these pursuits, current priorities must focus upon: 1) acknowledgement that robotic military devices can affect the conduct of, and amenability to, conflict; 2) recognition of the potential trajectories and valences of such effects; and 3) the development of predictive models that can be used to elucidate end-state scenarios useful for informing and planning current and future guidelines and policies. This necessitates technology forecasting and mapping, and the use of exploratory and normative methods to identify and assess technology, information, and capability gaps, model possible futures, and propose actions to mitigate possible negative effects and aspects of S/T advancement in the social sphere – inclusive of the changing nature of warfare, and altering thresholds and tolerances for war.
A number of methods have been used in technology futures prediction, including group idea building and program planning, informed forecasting, and multi-criteria decision-making. Each has certain value but, when used singularly, remains somewhat limited in scope and application. This is particularly true when addressing rapidly developing S/T with possibilities for multiple manifest effects within and across a range of social contexts and conditions. Linstone, Simmonds, and Bäckstrand, Giordano, and Giordano and Benedikter have suggested that such complex scenarios require forecasting, analytic, and problem-solving methods that are more multi-dimensional, in order to more accurately account for and plot the interactions of scientific, technological, social, economic, and political variables. An integrative multi-disciplinary approach known as advanced integrative scientific convergence (AISC) has been proposed to achieve this type of combinatory analysis and prediction. Although a complete discussion of the constituent methods (and relative merits and criticisms) of AISC is beyond the scope of this essay, in the main the approach entails three over-lapping practices: 1) foresight - the identification and realistic elucidation of the possible development of a particular S/T (e.g., military robotics); 2) assessment of the potential effects of the S/T upon key domains of use (in this case, military operations; geo- and socio-political and economic stability; tolerances and thresholds for conflict); and 3) prediction, which provides definitions and descriptions of the emergent features, performance, effects, and manifestations of projected trajectories of S/T progress at defined points in the proximate, intermediate, and more distal future.
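For readers unfamiliar with the forecasting methods named above, the simplest variant of multi-criteria decision-making can be illustrated with a minimal weighted-sum sketch. The criteria, weights, and scenario scores below are purely hypothetical illustrations supplied for this example; they are not drawn from the essay, from AISC, or from any cited work.

```python
# Minimal weighted-sum multi-criteria decision sketch (illustrative only).
# Each alternative is scored 0-1 on each criterion; criterion weights sum to 1.

def weighted_score(scores, weights):
    """Return the weighted-sum score for one alternative."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical criteria: tactical value, affordability, escalation restraint.
weights = [0.5, 0.3, 0.2]

# Hypothetical alternatives to rank against those criteria.
scenarios = {
    "precision_reconnaissance": [0.9, 0.6, 0.8],
    "massed_robotic_forces":    [0.7, 0.2, 0.3],
}

# Rank alternatives from highest to lowest aggregate score.
ranked = sorted(scenarios.items(),
                key=lambda kv: weighted_score(kv[1], weights),
                reverse=True)
for name, scores in ranked:
    print(name, round(weighted_score(scores, weights), 2))
```

As the essay notes, a single aggregate score of this kind is limited precisely because it collapses interacting scientific, social, economic, and political variables into one number; multi-dimensional approaches such as AISC are proposed to avoid that collapse.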
One of the more promising aspects of the AISC approach is the ability to use multiple assessments, together with mapping and modeling methods to define current patterns of S/T development and use in real-world scenarios as a basis from which to describe and predict extant and future problems.
In light of this, the AISC approach to advanced military robotics could be used to monitor new and emerging international S/T developments and plot current trends in military robotic development; relate these developments to advancements in other domains of S/T (e.g., neuroscience, artificial intelligence, nanoscience, cybertechnology); and co-plot S/T developments with current and projected socio-political factors (i.e., in multiple perspectives assessments; MPA) to model near- and intermediate-range future uses and effects within specific and general domains of engagement on local, regional, and international scales.
At the very least, this may enable more detailed descriptions of both the current state of the science and technology of military robotics, and the various socio-economic and geo-political variables that exert pushing and/or pulling forces upon the use and misuse of this S/T. More optimistically, AISC has been proposed to afford means to develop possible solutions for problems that may arise in fields of S/T (such as military robotics) in which radical innovations would incur profound effects upon social attitudes, conduct, and conditions. However, despite claims about such descriptive or resolution capabilities, and the anticipated promise of the methods used and outcomes achieved, it is important to recognize that AISC, like the advanced military robotics and other emerging iterations of high-impact S/T upon which its analyses focus, is an incipient enterprise. Thus, while shown to be viable in modeling and addressing the trajectories and multi-valent effects of certain technologies (e.g., nanoscience and technology; neuroscience and technology), further work is needed to assess the value of this approach (and any others) to the complex interactions of S/T, social, political, and economic variables that shape the uses, applications, issues, and problems of advanced military robotics.
While the possibility exists that the development of robotic militaries may lead to a permanent lowering of thresholds, and elevated tolerances, for engaging in war (which would suggest that the widespread adoption of decisionally-capable military machines would be inappropriate), we believe that this is the least likely situation. More likely, at least in the short to medium term, is that the high cost of robotics production will lead to a far lower acceptability level for warfare, and this will raise thresholds for war. Of course, research and development of such technology could escalate into large-scaled robotic wars. However, we believe it is more probable that robotically-enabled precision reconnaissance and/or tactical operations will become the norm, thereby 1) replacing older Napoleonic constructs of battles between massed armies, and 2) mitigating current forms of irregular warfare.
Given these outcomes, it is also possible that robotic technology may lead to a greater number of such operations over an extended period; this could incur a net increase in military conflicts over time, and remains a concern. Thus, we opine that ongoing research is necessary, both to keep pace with robotic technologies that can (and most probably will) be used in combat operations, and to evaluate the validity, viability, and value of various methodological approaches to S/T forecasting, modeling, and problem-resolution. Additionally, we assert the necessity of dedicated financial support and administrative resources to ensure the development and sustenance of ethico-legal and social guidelines and policies to appraise, direct, and govern potential outcomes and effects generated by military robotics on the world stage. Our ongoing work remains committed to such studies.
This work was supported in part by grants from the J.W. Fulbright Foundation, Office of Naval Research, funding from the Division of Integrative Physiology and Pellegrino Center for Clinical Bioethics of Georgetown University, the Department of Electrical and Computational Engineering, University of New Mexico, the William H. and Ruth Crane Schaefer Endowment of Gallaudet University, Washington, DC, USA (JG), and the Center for Neurotechnology Studies of the Potomac Institute for Policy Studies (DH, JG). The authors thank Sherry Loveless for assistance on final preparation of the manuscript, and Meghan Casey for contribution to an earlier version of this manuscript.
About the Authors
Daniel Howlader, MPP, is a Visiting Researcher in the Neuroethics Studies Program of the Pellegrino Center for Clinical Bioethics, Georgetown University Medical Center, Washington, DC, USA, and was formerly at the Center for Neurotechnology Studies at the Potomac Institute for Policy Studies, Arlington, VA. He graduated from the Public Policy Graduate Program at George Mason University, Fairfax, VA.
James Giordano, PhD, is Chief of the Neuroethics Studies Program of the Pellegrino Center for Clinical Bioethics, and is on the faculty of the Division of Integrative Physiology in the Department of Biochemistry, the Inter-disciplinary Program in Neurosciences, and the Graduate Liberal Studies Program at Georgetown University, and is Clark Fellow in Neurosciences and Ethics at the Human Science Center, Ludwig-Maximilians-Universität, Munich, Germany. Dr. Giordano is the corresponding author and can be contacted at email@example.com or firstname.lastname@example.org.
References
Arkin, Ronald C. “Ethical Robots in Warfare.” IEEE Technology and Society Magazine 28, no.1 (2009): 30-33.
Arkin, Ronald C. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.
Asaro, Peter M. “What should we want from a robot ethic?” International Review of Information Ethics 6 (2006): 9-16.
Asaro, Peter M. “How just could a robot war be?” In Current Issues in Computing And Philosophy, edited by Adam Briggle, Katinka Waelbers, and Philip A. E. Brey, 50-64. Amsterdam: IOS Press, 2008.
Bataoel, Vincent. “On the use of drones in military operations in Libya: Ethical, legal, and social issues.” Synesis: A Journal of Science, Technology, Ethics, and Policy 2, no.1 (2011): G69-G76.
Benedikter, Roland and James Giordano. “The outer and inner transformation of the global sphere through technology: The state of two fields in transition.” New Global Studies 5, no.2 (2011).
Bergen, Peter and Katherine Tiedemann. “The Year of the Drone.” New America Foundation, February 24, 2010. http://www.newamerica.net/publications/policy/the_year_of_the_drone.
Bone, Elizabeth and Christopher Bolkcom. Unmanned Aerial Vehicles: Background and Issues for Congress. Washington, DC: Congressional Research Service, 2003.
Bowie, Christopher J., Robert P. Haffa Jr., and Robert E. Mullins. Future War: What Trends in America’s Post-Cold War Military Conflicts Tell Us About Early 21st Century Warfare. Washington, DC: Northrop Grumman, 2003.
Cañadas, Alejandro and James Giordano. “A philosophically-based bio-psychosocial model of economics: Evolutionary perspectives of human resource utilization and the need for an integrative, multi-disciplinary approach to economics.” International Journal Interdisciplinary Social Sciences 5, no.8 (2010): 53-68.
Canning, John S. “A concept of operations for armed autonomous systems.” Presented at the 3rd Annual Disruptive Technology Conference. Washington, DC, September 6-7, 2006.
Danielson, Peter. “Engaging the Public in the Ethics of Robots for War and Peace.” Philosophy & Technology 24, no.3 (2011): 239-249.
Delbecq, André L., Andrew H. Van de Ven, and David H. Gustafson. Group techniques for program planning: A guide to nominal group and Delphi processes. Glenview, Illinois: Scott Foresman, 1975.
Duffy, Brian R. “Anthropomorphism and the social robot.” Robotics and Autonomous Systems 42, no.3-4 (2003): 177-190.
Forsythe, Chris and James Giordano. “On the need for neurotechnology in the national intelligence and defense agenda: Scope and trajectory.” Synesis: A Journal of Science, Technology, Ethics, and Policy 2, no.1 (2011): T5-T8.
Foust, Joshua and Ashley S. Boyle. “The Strategic Context of Lethal Drones: A framework for discussion.” American Security Project, August 16, 2012. http://americansecurityproject.org/featured-items/2012/the-strategic-context-of-lethal-drones-a-framework-for-discussion/.
Garamone, Jim. “From U.S. Civil War to Afghanistan: A Short History of UAVs.” American Forces Press Service, April 16, 2002. http://www.defense.gov/news/newsarticle.aspx?id=44164.
Giordano, James. “Integrative convergence in neuroscience: trajectories, problems and the need for a progressive neurobioethics.” In Technological Innovation in Sensing and Detecting Chemical, Biological, Radiological, Nuclear Threats and Ecological Terrorism, edited by Ashok Vaseashta, Eric Braman, and Philip Susmann. New York: Springer, 2012.
Giordano, James and Roland Benedikter. “An early - and necessary - flight of the Owl of Minerva: Neuroscience, neurotechnology, human socio-cultural boundaries, and the importance of neuroethics.” Journal of Evolution and Technology 22, no.1 (2013): 14-25.
Giordano, James, Chris Forsythe, and James Olds. “Neuroscience, neurotechnology and national security: The need for preparedness and an ethics of responsible action.” American Journal of Bioethics-Neuroscience 1, no.2 (2010): 1-3.
Giordano James and Rachel Wurzman. “Neurotechnology as weapons in national intelligence and defense.” Synesis: A Journal of Science, Technology, Ethics, and Policy 2, no.1 (2011): T55-T71.
Hassler, Donald M. and Clyde M. Wilcox. New Boundaries in Political Science Fiction. Columbia, South Carolina: University of South Carolina Press, 2008.
Hoffman, Frank G., and Chris Brown. Counter-Insurgency: Past, Present and Future. Arlington, Virginia: Potomac Institute Press, 2009.
Lebow, Richard Ned. “Deterrence and reassurance: lessons from the Cold War.” Global Dialogue 3, no.4 (2001): 119-132.
Lebow, Richard Ned and Janice Gross Stein. “Rational deterrence theory: I think, therefore I deter.” World Politics 41, no.2 (1989): 208-224.
Lin, Patrick, George Bekey, and Keith Abney. Autonomous military robotics: risk, ethics, and design. San Luis Obispo, California: Ethics + Emerging Sciences Group at California Polytechnic State University, 2007.
Linstone, Harold A., Walter Henry Clive Simmonds, and Göran Bäckstrand. Futures Research: New Directions. London, England: Addison-Wesley, 1977.
Martino, Joseph P. Technological Forecasting for Decision-Making. New York: Elsevier, 1972.
Moore, Carl M. Group Techniques for Idea Building. New York: Sage, 1987.
Moreno, Jonathan. Mind Wars: Brain Research and National Defense. New York: Dana Press, 2006.
Mudry, Pierre-André, Sarah Dégallier, and Aude G. Billard. “On the influence of symbols and myths in the responsibility ascription problem in roboethics - A roboticist’s perspective.” Paper presented at the 17th IEEE International Symposium on Robot and Human Interactive Communication, Berlin, August 1-3, 2008.
Porter, Alan L. and Scott W. Cunningham. “Technology futures analysis: Toward integration of the field and new methods.” Technological Forecasting & Social Change 71, no.3 (2004): 287-303.
Rees, Jeanine, and John Skeen. Recent Development Efforts for Military Airships. Washington, DC: Congressional Budget Office, 2011.
Sakamoto, Norman S. UAV Development and History at Northrop Grumman Corporation Ryan Aeronautical Center. Monterey, California: Wayne E. Meyer Institute of Systems Engineering, 2004.
Shachtman, Noah. “CIA Chief: Drones ‘Only Game in Town’ for Stopping Al Qaeda.” Wired, May 19, 2009. http://www.wired.com/dangerroom/2009/05/cia-chief-drones-only-game-in-town-for-stopping-al-qaeda/.
Sharkey, Noel. “The Automation and Proliferation of Military Drones and the Protection of Civilians.” Law, Innovation and Technology 3, no.2 (2011): 229-240.
Sharkey, Noel. “The evitability of autonomous robot warfare.” International Review of the Red Cross 94, no. 866 (2012): 1-13.
Singer, Peter W. “Robots at War: The New Battlefield.” The Wilson Quarterly 33, no.1 (2009): 30-48.
Singer, Peter W. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York: The Penguin Press, 2009.
Squadron Signal Publications. Ju 88 In Action Pt. I. Carrollton, Texas: Squadron Signal Publications, 2010.
Strawser, Bradley Jay. “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.” Journal of Military Ethics 9, no.4 (2010): 342-368.
Triantaphyllou, Evangelos. Multi-criteria Decision-Making: A Comparative Study. Dordrecht, Netherlands: Kluwer, 2000.
United States Army UAS Center of Excellence Staff. Eyes of the Army: U.S. Army Roadmap for Unmanned Aircraft Systems 2010-2035. Fort Rucker, Alabama: US Army UAS Center of Excellence, 2011.
United States Department of Defense. FY2011-2036 Unmanned Systems Integrated Roadmap. Washington, DC: United States Department of Defense, 2011.
Van Creveld, Martin. The Age of Airpower. New York: PublicAffairs, 2011.
Van Der Meulen, Jan and Joseph Soeters. “Considering Casualties: Risk and Loss during Peacekeeping and Warmaking.” Armed Forces & Society 31, no. 4 (2005): 483-486.
Vaseashta, Ashok. “Nanotechnology: Challenges of convergence, heterogeneity, and hierarchical integration.” NATO Science Series 222 (2006): 229-230.
Vaseashta, Ashok. “Use of advanced scientific convergence in neurocognitive science and engineering.” In Advances in Neurotechnology: Premises, Potential and Problems, edited by James Giordano, 15-36. Boca Raton, Florida: CRC Press, 2012.
Vaseashta, Ashok and Ion N. Mihailescu. Functionalized Nanoscale Materials, Devices and Systems. Berlin: Springer, 2008.
Wallach, Wendell and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009.
Wallach, Wendell, Colin Allen, and Stan Franklin. “Consciousness and Ethics: Artificially Conscious Moral Agents.” International Journal of Machine Consciousness 3, no.1 (2011): 177-192.
White, Thomas. In Defense of Dolphins. Oxford: Blackwell, 2007.