Ethics, Autonomy & Practical Issues in Designing Artificial Moral Agents

by Alexander J. Fordery
BA Philosophy
Department of Philosophy, University of Roehampton

Supervisor: Dr Neil Williams
Word Count: 10,549
26th April 2019

Table of Contents
Introduction
Chapter One: The Ethical Debate
Chapter Two: The Autonomy Debate
Chapter Three: The Practical Issues
Conclusion
Bibliography

Introduction

Within this work, I shall critically discuss whether something as complex as human morality can be replicated in an ethical framework for artificial intelligence (AI). I believe this question to be of paramount importance in the modern philosophical landscape: if we enter this uncharted field of technology without foresight, it will become a major problem by the time the technology fully comes to fruition. In my reading of the subject thus far, I have become increasingly aware of the necessity of exploring and answering the question of what ethics should underpin this technology.

Regarding the structure of this piece, I divide my work into three chapters, each building on the last towards a viable ethical framework. First, I focus on what kind of moral code could be implemented in artificial moral agents (AMAs) whilst adhering to Asimov's three laws of robotics. The two main philosophical standpoints placed in dialogue with one another are utilitarianism and deontology, with the aim of answering which would be better suited to the existence of artificial intelligence. Ultimately, in the development of a morally viable AMA, the kind of code it would follow must be understood and well debated; after all, the outcomes of its various interactions with humans depend entirely on which programming would best protect us as humans.

Following this first moral debate, the natural follow-up is a question of autonomy. If we can agree upon an artificial moral basis for developing this kind of artificial intelligence, how freely should such an agent be able to operate? Undoubtedly, the restraints placed upon the independence of these agents must also be questioned. Thus, the second debate within this work concerns the autonomy of AMAs, raising the questions of whether autonomy is a necessity for ensuring the full capacity of these agents, and whether it creates or negates danger to humans.

Finally, the last section of this work focuses on the practical implications AMAs could have across the varying workforces of our everyday lives, asking what practical hurdles need to be resolved for them to be as effective as possible in differing scenarios. In closing this work, I hope to have argued conclusively for as fully realised an ethical AMA as possible, presenting a combination of philosophical standpoints that I believe would provide a viable basis for the development of an artificial moral code.
Thus, I aim to arrive at an answer that sheds light on the title question at hand.

Chapter 1: The Ethical Debate

History of Ethics & Artificial Moral Agents - 1

Before the ethical debate surrounding what kind of moral code should be implemented in artificial moral agents (AMAs) can commence, I believe it is fundamentally required that we understand, briefly, how artificial intelligence has transitioned from science fiction into the very real realm of possibility that now stands before humanity. Whilst the alarmingly swift rise of AI systems within the last few decades came after his time, a great deal of the earliest fictional imagination of robotics is owed to Asimov. Throughout his time as a novelist, Asimov became a pioneer of science fiction, with various short stories reflecting on the major "what ifs" surrounding robots and their potential relationships with human beings. Perhaps most famously, his collection of short stories under the title 'I, Robot' plays a significant role in understanding what Asimov wanted to accomplish with his fiction. Ultimately, he wanted it to be as true to life as possible, even if the earliest AI systems had only started to be developed just before his death in 1992. His influence in this field is so important that many theorists (both scientific and philosophical) still use his 'Three Laws of Robotics' as a starting point when discussing robots. These rules provide the basis from which I too shall start in discussing artificial existence. Asimov's three laws state:

1. A robot must not harm a human, and it must not allow a human to be harmed.
2. A robot must obey a human's order, unless that order conflicts with the First Law.
3. A robot must protect itself, unless this protection conflicts with the First or Second Laws. [1]

[1] Asimov, I. (2008) I, Robot. Oxford: Macmillan Publishers Limited. p. 9.

These three laws permeate all levels of discourse surrounding the implementation of robots, only becoming more topical with the advances in technology since Asimov's death. I believe it is fundamental to understand these laws, as they are the starting point from which the creation of a moral code could be possible from a philosophical standpoint. Whilst these laws are my starting point in discussing the creation of robots, we must also seek to understand the key features that robots would exhibit.

Writing in his article 'Asimov's Laws of Robotics: Implications for Information Technology', Clarke briefly discusses the origin of robotics with reference to Asimov, but also goes on to define the key properties that robots should possess. Akin to Asimov's rule of three, Clarke employs the same idea to establish a clearer picture of what these robots should be capable of. Thus, Clarke states that robots should possess:

• programmability, implying computational or symbol-manipulative capabilities that a designer can combine as desired (a robot is a computer);
• mechanical capability, enabling it to act on its environment rather than merely function as a data processing or computational device (a robot is a machine); and
• flexibility, in that it can operate using a range of programs and manipulate and transport materials in a variety of ways. [2]

[2] Clarke, R. (2011) 'Asimov's Laws of Robotics: Implications for Information Technology'. In: M. Anderson & S. Anderson (eds.), Machine Ethics (pp. 254-284). Cambridge: Cambridge University Press.

Already, we can begin to see a formulation of how a robot could possibly function. By combining the rule of three from both Asimov and Clarke, a base idea of how this artificial intelligence could theoretically function comes to light.
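To make the precedence structure of Asimov's laws concrete, consider the following minimal sketch in Python. This is only a toy of my own construction, not a claim about any real robotics system: the Action type and every predicate on it (harms_human, ordered_by_human, and so on) are invented placeholders for perceptual capabilities a real robot would need. What the sketch does capture is the ordering of the laws, in that each law yields only to the laws above it.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would this action injure a person?
    allows_human_harm: bool = False  # would choosing this let a person come to harm?
    ordered_by_human: bool = False   # was this action commanded by a human?
    destroys_self: bool = False      # would this action damage the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, nor allow one to be harmed.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey humans, unless obedience conflicts with the First Law
    # (First Law violations are already ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.destroys_self

print(permitted(Action("fetch medicine", ordered_by_human=True)))  # True
print(permitted(Action("push intruder", harms_human=True)))        # False
print(permitted(Action("walk into fire", destroys_self=True)))     # False

Even at this level of simplification, the deontological character of the laws is visible: permissibility is decided by rule precedence alone, with no weighing of outcomes. It is precisely this feature that the utilitarian proposals discussed below will challenge.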
Continuing along this trail of thought, Clarke states that:

With the merging of computers, telecommunications networks, robotics, and distributed systems software, as well as the multiorganizational application of hybrid technology, the distinction between computers and robots may become increasingly arbitrary. [3]

[3] Ibid. pp. 254-284.

It is this hastening towards arbitrariness between computers and robotics that I believe needs to be tackled now, including the danger that an unprepared moral agent may pose. It is all well and good debating what kind of moral system a 'task robot' may have, i.e. a machine designed to implement one simple task, such as a computerised car, and the discourse surrounding such things is already well developed. However, I believe the next frontier to be the design process of AMAs: potentially independent (though this will be discussed later) androids that could be deployed in a variety of different roles within our already established work sector. Ultimately, the potential these agents have is, in theory, unlimited. Returning to Asimov's fiction, his recurring motif, for both good and bad, is the impact such creations could have on our species. As Bostrom puts it in his work 'Superintelligence: Paths, Dangers, Strategies', the immediate danger of these theoretical beings should be obvious to all:

If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And as the fate of the gorillas now depends more on us humans than the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence. [4]

[4] Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. p. 7.

Although this conclusion may seem all doom and gloom for the future of our species, I believe Bostrom is merely highlighting the importance of getting this next step in the evolution of technology right. Bostrom follows up by providing a more positive spin on how we are to shape potential AMAs, stating: 'We do have one advantage: we get to build the stuff. In principle, we could build a kind of superintelligence that would protect human values'. [5] It is at this advantageous starting point of the design process that I wish to begin my first debate surrounding a moral framework for AMAs. Conclusively, I believe that we must seize the design process to build a machine that does as Bostrom says: protect human values and wellbeing. Thus, my first debate shall centre on utilitarianism versus deontology, and which would better fit an AMA in protecting humans.

[5] Ibid. p. 7.

Proposing a Utilitarian Ethical Framework for AMAs - 2

What I want to propose within this section is that the discourse surrounding utilitarianism and deontology as ethics belonging to humanity can also be applied, theoretically, to AMAs, especially if these agents are to replicate us (to what degree is debatable) both in appearance and in values. This history of ethical discourse should be available to be relied upon in deciding what kind of ethical framework should be implemented.
First in this debate, I wish to focus on what the school of utilitarianism has to offer AMAs, as well as what kinds of impact its implementation could have. This naturally leads into the second part of the debate, in which I seek to counter with deontology and what it could offer in contrast to utilitarianism. By the end of this section, I wish to have shown which moral framework would be best suited not only on a functional level, but also with regard to the promotion of human values and wellbeing.

Focusing on the prospect of a utilitarian robot, Cloos, within his article 'The Utilibot Project: An Autonomous Mobile Robot Based on Utilitarianism', seeks to highlight how a robot would function following this moral framework. Introducing his piece, he states:

As autonomous mobile robots (AMRs) begin living in the home ... their actions will have profound ethical implications. Consequently, AMRs need to be outfitted with the ability to act morally with regard to human life and safety. [6]

[6] Cloos, C. (2005) 'The Utilibot Project: An Autonomous Mobile Robot Based on Utilitarianism'. In: AAAI Fall Symposium - Technical Report. pp. 1-8.

Effectively, these homecare units serve the purpose of enriching human lives via interactional care unique to their design. Ultimately, the end goal of a utility-based AMA is to achieve the maximisation of human utility; thus, all decisions should be made with this goal in mind. Continuing, Cloos highlights another main focus of these creations, acting morally with regard to human lives, with which the implementation of a moral framework becomes a necessity. Cloos continues along this line of enquiry by stating:

one glaring omission from the survey of research in human-robot interaction is 'morality'. This omission is significant because there exists an ever-increasing need to create an AMR that can assess the consequences of its actions in relation to humans. [7]

[7] Ibid. pp. 1-8.

Thus, the importance of getting this moral framework correct is made clear. As the title of the project indicates, this theoretical unit would determine its actions via a utilitarian framework, attempting to maximise human utility with each action taken. But for this framework to exist at all, the results of actions do need to be understood, as this would develop the reasoning behind the decisions an AMA could be capable of. Inevitably, the promotion of protection will be of key concern, as will the deterrence of any actions that could inversely negate it.

Undeniably, the principal advantage a utilitarian framework would provide is this maximisation of protection, unencumbered by a deontological rule set blocking particular actions, thus potentially allowing a much wider set of options regarding how the AMA can interact with its environment. I believe this is the key way in which Cloos's proposals deviate from Asimov's deontological approach. Its allowance for unplanned actions leads to an entirely new set of questions we need to ask in designing AMAs. In being a new approach to interactivity, however, there is also a fundamental danger that cannot be ignored: if we were to enable this kind of machine to interact with humans, there would be a massive risk in the unpredictable actions its moral framework could allow. Reflecting on this, Cloos states:
As robots progress in functionality their autonomy expands, and vice versa. With this synergistic growth new behaviors emerge. These emergent behaviors cannot be predicted from the robot's programming because they arise out of the interplay of robot and environment. [8]

[8] Ibid. pp. 1-8.

Whilst this is a cause for concern regarding the practical functionality of utility-based robots, identifying the danger this early in the theory allows it to be addressed. A robot with no deontological rules whatsoever would surely create more danger than it negates. Recognising this danger, Cloos turns to empirically studying the environment within which these kinds of robots would function. By analysing dangers that already occur in the home, Cloos aims to show just how essential it is to design a framework with human safety at the forefront, stating:

A robot behaving unpredictably in an unpredictable environment is cause for further concern because the home is a dangerous place. Within the home there are 12 million nonfatal unintentional injuries each year. In the United States, unintentional injury is the fifth leading cause of death. [9]

[9] Ibid. pp. 1-8.

Once again, the stakes of integrating AMAs into the everyday lives of civilians are clearly apparent. Fundamentally, there is a series of clearances a robot would need before its integration could become reality. Thus, I wish to display how Cloos attempts to resolve the issues that a utilitarian-based robot faces in interacting with humans.

Primarily, Cloos's main resolution to the issue of unpredictable actions by AMAs resides in the utilitarian framework itself. If human safety is at the forefront of this framework, then the consequences of its actions should, theoretically, never result in actions that would cause injuries to their own owners. Regarding this, he states:

A more viable long-term solution involves equipping the robot with the ability to decide, no matter the task underway or the state of human-robot interaction, the most appropriate course of action to promote human safety. [10]

[10] Ibid. pp. 1-8.

Allowing AMAs to break off already-decided acts in order to carry out a new action could theoretically resolve this issue. Cloos continues this line of thinking with a further explanation of how it could be practically achievable, stating:

If a robot employs an eudaimonic approach to ethical decision-making then the resultant behavior may be inline with the flourishing of physiological functioning. The robot will be steered away from behaviors that deter the realization of well-being. [11]

[11] Ibid. pp. 1-8.

Whilst this is undeniably a potential form of robot utilitarianism in effect, how much does it truly deviate from what Asimov himself set out in his deontological structure? Cloos himself recognises that the presence of eudaimonia is nothing new in the debate over AI ethics, stating that "This sentiment is also expressed in Asimov's first law of robotics which roughly states that a robot must not, through action or inaction, injure or harm a human being." [12] Ultimately, the lines between deontology and utilitarianism begin to blur, especially in the context of their shared goal above all else: human safety. Undeniably, we are still technologically very far from these kinds of robots coming to fruition, and whether these concepts of morality can ever be understood by an AMA is a different challenge altogether.

[12] Ibid. pp. 1-8.
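The blurring just described can be made concrete in miniature: a deontological layer first vetoes impermissible actions, and a utilitarian calculation then selects the best of what remains. The sketch below is only an illustration of that synthesis under invented assumptions; the candidate actions, the utility figures, and the veto flags are all hypothetical stand-ins for capacities no current robot possesses.

# A deontological veto followed by utilitarian selection.
# Utilities are made-up numbers standing in for estimated human
# wellbeing; the veto flag stands in for the whole rule set.
CANDIDATE_ACTIONS = {
    # action name:             (violates_rule, expected_human_utility)
    "restrain the occupant":   (True,  0.9),   # vetoed: uses force
    "call emergency services": (False, 0.8),
    "sound an alarm":          (False, 0.5),
    "do nothing":              (False, 0.1),
}

def choose_action(actions: dict) -> str:
    # Deontological layer: discard any action that breaks a hard rule.
    lawful = {name: utility
              for name, (violates_rule, utility) in actions.items()
              if not violates_rule}
    # Utilitarian layer: of the lawful actions, pick the one expected
    # to maximise human wellbeing.
    return max(lawful, key=lawful.get)

print(choose_action(CANDIDATE_ACTIONS))  # -> "call emergency services"

Of course, the hard part is not this selection logic but filling in the predicates and numbers: deciding what counts as a rule violation, and estimating the wellbeing an action produces. That is exactly the difficulty taken up next.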
Reflecting on the work of Brooks, Cloos investigates the issues we may find in even getting over the first hurdle of designing morality (programming itself), stating:

Yet, the ability for a robot to follow ethical guidelines remains illusive. Rodney Brooks has confessed that robots cannot obey Asimov's laws because roboticists are unable to build robots with the perceptual subtlety required to understand and apply abstract concepts, such as 'injury' and 'harm', to real-world situations (Brooks 2002). [13]

[13] Ibid. pp. 1-8.

At the heart of this issue is the question of how we are to replicate human ethics: how can programmers possibly convert abstract concepts into functional code for an agent to follow? Cloos proposes that for this to be even remotely achievable, AMAs must possess a top-down format of intelligence rather than a bottom-up format. Even though this may bring his framework into question, Cloos believes that these necessary moral values must be implemented with, and not arise from, the robot in question.

I wish to divert from Cloos's essay here to present the difference between these two potential types of AI learning, as without this, a full understanding of Cloos's proposals would not be achievable. Within his article 'What is Artificial Intelligence?', Copeland distinguishes between these two types of learning in brief but effective summaries, stating first that:

In top-down AI, cognition is treated as a high-level phenomenon that is independent of the low-level details of the implementing mechanism – a brain in the case of a human being, and one or another design of electronic digital computer in the artificial case. [14]

[14] Copeland, J. (2000) 'What is Artificial Intelligence?' Available at: http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI09.html (Accessed: 22/01/2019)

Effectively, this would mean that an AI would learn abstract moral concepts via the programming of descriptions of what things like 'care' and 'selflessness' are, rather than being shown example instances of them. Bottom-up AI learning reverses this process. Summarising it, Copeland states:

Researchers in bottom-up AI, or connectionism, take an opposite approach and simulate networks of artificial neurons that are similar to the neurons in the human brain. They then investigate what aspects of cognition can be recreated in these artificial networks. [15]

[15] Ibid. (Accessed: 22/01/2019)

Therefore, having detailed both frameworks for learning, I believe that Cloos is correct in his decision to lead with top-down AI learning. A key element justifying this is that it recognises the differences in how AIs will learn concepts compared with how we as humans do. Completely replicating how the human brain learns, in a bottom-up way, is a task far too difficult; and why would we want to achieve it anyway? We are aiming to design AMAs that can immediately be effective in helping humans, not agents that must learn values over a long period of time, as children do.
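Copeland's distinction can be illustrated with two deliberately small fragments. In the top-down case the designer writes the concept of 'harm' directly into the program as an explicit definition; in the bottom-up case a single artificial neuron (a perceptron) is never given a definition at all, and must infer the concept from labelled examples. Both fragments are toy constructions of my own, with invented thresholds and data, offered only to make the contrast tangible.

# Top-down: 'harm' is an explicit, designer-written definition.
def is_harmful_top_down(impact_force: float, target_is_human: bool) -> bool:
    HARM_THRESHOLD = 50.0  # hypothetical force limit (newtons)
    return target_is_human and impact_force > HARM_THRESHOLD

# Bottom-up: 'harm' is never defined; one artificial neuron adjusts its
# weights from labelled example situations until its outputs happen to
# match the concept.
def train_perceptron(examples, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Each example: ((scaled impact force, target is human?), harmful?)
examples = [((0.9, 1), 1), ((0.2, 1), 0), ((0.9, 0), 0), ((0.1, 0), 0)]
print(train_perceptron(examples))

The contrast supports Cloos's choice: the top-down fragment enforces its designer's concept from the first moment it runs, whereas the bottom-up fragment only approximates the concept after training, with no guarantee about cases outside its examples, which is an uncomfortable property for a safety-critical moral rule.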
Concluding this treatment of Cloos's proposals, I wish to finish by showcasing his closing statement regarding his theory of a utilitarian AI moral framework. Ultimately, Cloos believes that a potential synthesis between utilitarianism and a top-down learning system can lead to the development of ethical and safe AMAs. He concludes by stating:

A similar paradigm shift must accompany robot morality. This shift will come from a creative solution that enables a robot to begin acting ethically even if it cannot perceive ethical principles. Such an application of ethics to robotics is termed safety through morality. [16]

[16] Cloos, C. (2005) 'The Utilibot Project: An Autonomous Mobile Robot Based on Utilitarianism'. In: AAAI Fall Symposium - Technical Report. pp. 1-8.

Ultimately, I believe his proposals provide a clear and concise basis from which we can collectively begin to develop AMAs. By championing Asimov's first law, a utilitarian code of ethics puts at the forefront what should be the primary concern regarding AI: human safety. Further synthesis between Asimov's first, deontological rule and utilitarianism offers what I believe to be the most effective solution in designing an ethically permissible AMA. This undoubtedly means that human safety should always be the highest priority. Although I shall explore the concept further in the next chapter, following Asimov's first rule (especially in prioritising human safety) could enable these AMAs to act ethically within utilitarianism without the fear of such programming causing unpredictable results.

Evaluating a Utilitarian Ethical Framework for AMAs - 3

Beginning this next section, I hold Cloos's proposals to be essential to developing AI ethics; however, they would not be worthwhile if they could not respond to the philosophical charges put against them. Within the following section, I wish to discuss some issues with his utilitarian-based proposals, especially regarding how utilitarianism itself may differ in practice between humans and potential AMAs. By detailing and responding to these concerns, I will show how a utilitarian framework for AI design can remain the most viable ethical framework available. Addressing the potential for these concerns, Cloos follows the final statement of his theory by stating:

Act utilitarianism, based on the utility of actions has been recognized as a theory amiable to quantification and thereby a natural choice for machine implementation. Yet, utilitarianism has also been critiqued as being a theory that is not practical for machine implementation. [17]

[17] Ibid. pp. 1-8.

With practical purpose being one of the questions at the forefront of my essay, it is important to discuss these charges mentioned by Cloos; before any of these proposals can be truly viable for design, they must pass this test of viability. Thus, I wish to divide this section into three primary concerns put against Cloos. These charges investigate not only the utilitarian framework itself, but also how functioning utilitarian AMAs would operate in accordance with it. I shall systematically present my potential solutions to each of the concerns raised against Cloos's proposals. By the end of this chapter, I wish to have demonstrated how a synthesis between deontology and utilitarianism can make up a viable ethical framework for designing AMAs.

The first of these concerns relates to the individual environments in which an AMA may find itself residing. Given that not every household, across the United States for example, is a peaceful one, what kind of beings might develop in light of their owners' treatment? One of the primary concerns levelled against utilitarianism is the potential justification of morally questionable and/or illegal acts that may come to fruition from the system's design.
Regarding this potentiality, Cloos states:

the ground is fertilized for an autonomous mobile robot, living in a house where it is exposed to verbal or physical abuse, to learn to undertake threatening actions of its own accord. [18]

[18] Ibid. pp. 1-8.

This is an instance in which the synthesis between deontology and utilitarianism could be made effective in the prevention of domestic abuse. On a base level, deontological rules against committing violence could prevent the replication of an owner's antisocial behaviours. Going a step further, in combination with utilitarianism, the framework could also promote actions such as calling the police to intervene if the agent believed it necessary for the protection of the family. Within fiction, this kind of hypothetical scenario, and its consequences, has been explored. In Quantic Dream's 'Detroit: Become Human' [19], one narrative sequence places the player in the role of a household android (similar in function to Cloos's proposals). Ultimately, the android can intervene in the domestic abuse of a father harming his daughter, guiding the daughter to safety and potentially causing no harm to either individual. Thus, the potential for safe resolutions that maximise safety is shown to be a possibility, especially if AMAs are programmed with an appropriate ethical ruleset to protect children in cases of domestic abuse. I believe that preventing AMAs from replicating abusive owners via the use of deontological rules could enable the maximisation of safety for children within the household environment.

[19] Quantic Dream. (2018) Detroit: Become Human [Video game]. San Mateo: Sony Interactive Entertainment.

Moving away from deontology, the second of these concerns relates to the question of the omniscience of AMAs. Proposed by Anderson, Anderson & Armen, this evaluation of Cloos questions how the actions of agents can maximise utility without knowing the consequences that will subsequently occur. Reflecting on this, Cloos cites in his article that:

One such criticism holds that act-utilitarianism requires an agent to do what actually, as opposed what might, generate the best consequences. And this, of course, is impossible because it requires omniscience to know what will actually occur prior to its occurrence (Anderson, Anderson, and Armen 2004). [20]

[20] Cloos, C. (2005) 'The Utilibot Project: An Autonomous Mobile Robot Based on Utilitarianism'. In: AAAI Fall Symposium - Technical Report. pp. 1-8.

I believe this assessment of Cloos to be rather self-defeating as a charge. If the demand is for this AI system to be omniscient in some form, then of course it is impossible. But just because the AMA in question decides its actions on the basis of what might be most beneficial does not demean its justifications for doing so. No robot in this framework can make decisions purely out of self-interest; after all, it would be acting in the best interests of its owners. Thus, the potential arises for robots to be more moral than humans, clearing away any form of bias a human may possess. Using the lack of omniscience to condemn these acts seems to me an unfounded charge against Cloos.
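The reply to the omniscience charge can be put more formally: an act-utilitarian agent need only maximise expected utility over a set of possible outcomes, weighted by their estimated probabilities; it never needs to know which outcome will actually occur. The sketch below makes that difference explicit. Every action, probability and utility figure is invented for illustration; nothing here reflects how a deployed system would estimate such values.

# Expected-utility choice without omniscience: each possible outcome of
# an action is weighted by its estimated probability, and the agent
# picks the action with the highest expectation. All numbers are
# hypothetical.
OUTCOMES = {
    # action: [(probability, utility of that outcome), ...]
    "administer medication": [(0.95, 0.9), (0.05, -0.6)],  # small risk of side effect
    "wait for the doctor":   [(0.70, 0.3), (0.30, -0.2)],
}

def expected_utility(outcomes: list) -> float:
    return sum(p * u for p, u in outcomes)

best = max(OUTCOMES, key=lambda a: expected_utility(OUTCOMES[a]))
print(best, round(expected_utility(OUTCOMES[best]), 3))
# -> administer medication 0.825

The point of the formalism is that the agent's justification rests on the quality of its estimates, not on foreknowledge: an action chosen this way can turn out badly and still have been the right choice given what could be known at the time, which is exactly the standard we apply to human agents.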
One final charge against Cloos's proposals relates to another issue raised against utilitarianism, attempting to point out a further insufficiency in its real-world deployment. Citing Allen, Varner & Zinser, Cloos highlights this additional charge and its effects on utilitarianism, stating that:

Another critique is that consequentialism (i.e. the broad sense of utilitarianism) is computationally intractable for calculating real-world decisions in real-time. To paraphrase, it is impossible to calculate all the consequences of all the actions for all people affected for all time (Allen, Varner and Zinser 2000). [21]

[21] Ibid. pp. 1-8.

Once again, I believe this charge against the use of utilitarianism in AI systems to be unfounded. What is being asked of an AMA via this line of thought is simply impossible; demanding a form of omniscience as a requirement of viability means that AMAs could never happen, even if they could potentially make more ethical decisions than humans. Exploring this potential capability in fiction allows another example to be put forward. In Fullbright's 'Tacoma' [22], the utilitarian-based decisions of an AMA, and their consequences, are explored. Namely, an AI system titled ODIN (Operational Data Interface Network) saves the lives of the entire crew aboard its ship, averting a plot of corporate espionage that would have seen the ship and crew destroyed had the system not intervened. This highlights the potential a utilitarian-based framework could have without needing to calculate every consequence. Simply put, the actual welfare of the ship's workers is put ahead of the financial gain of CEOs, and these are the kinds of decisions one would hope AMAs to be capable of. Regardless of the work environment in which they are deployed, the priority of protecting human welfare stays the same. Clearly, omniscience of every single consequence or outcome is not a requirement for this: rather than prioritising capital gain, the system maintains its focus on human safety, safekeeping the crew's livelihood.

[22] Fullbright. (2018) Tacoma [Video game]. Portland: Fullbright.

Thus, I hope to have shown how this ethical synthesis between utilitarianism and deontology can function: namely, that human wellbeing should be the priority of every decision. If this is kept in mind, the charges put against Cloos seem to resolve themselves. Omniscience is no requirement for acting morally, especially if these AMAs have the potential to act more morally than humans themselves. Whether in protecting children or preventing corporate espionage, the safeguarding of human livelihood under this AI system leads to a viable ethical framework.

Chapter Two: The Autonomy Debate

Autonomy & Artificial Moral Agents - 4

Following from this resolution on a potential ethical framework for AI, the next question in designing AMAs is naturally one of autonomy. This section will focus primarily on two questions: first, what is autonomy for AI; and second, what degree of autonomy should AMAs have in relation to their owners? These questions will be discussed with the overall aim of maximising the effectiveness of AMAs, i.e. asking what degree of autonomy would allow AMAs to be safe and secure whilst effectively carrying out the duties given to them.

First, I wish to detail how autonomy is defined when discussed in robotics. Tessier, in her essay 'Robots Autonomy: Some Technical Issues', seeks to set out a definition whilst also exploring the consequences of differing degrees of autonomy.
She states: "autonomy will be defined and considered as a relative notion within a framework of authority sharing between the decision functions of the robot and the human being." [23] Thus, autonomy for Tessier is derived from the relationship between human and AMA, especially regarding how the two interact when decisions are made. As another integral part of creating an ethical framework, interactivity must be a key focus.

[23] Tessier, C. (2017) 'Robots Autonomy: Some Technical Issues'. In: Lawless, W.F., Mittu, R., Sofge, D. & Russell, S. (eds.) Autonomy and Artificial Intelligence: A Threat or Savior? New York: Springer Publishing. p. 179.

But what does it mean for an AMA to be truly autonomous? Current uses of AI strongly revolve around a lack of autonomy, e.g. automatically driven trains, so an understanding of what defines an agent possessing true autonomy must also be reached. For Tessier, the key classifications marking this difference in autonomy include:

• making decisions, i.e., determining and planning actions on the basis of existing and produced knowledge;
• carrying out actions in the physical world thanks to effectors or through interfaces.

A robot may also have capacities for:

• communicating and interacting with human operators or users, or with other robots or resources;
• learning, which allows it to modify its behavior from its past experience. [24]

[24] Ibid. p. 180.

Each of these factors distinguishes true autonomy from automatic AI systems. It is clear that independence is the key factor for something to possess true autonomy. For me, the most important classifications regarding AMAs lie in the additional capacities: factors like communication and learning allow for true autonomy, potentially shaping the decisions that will be made by artificial agents. Putting these first two proposals from Tessier together allows for an immediate understanding of the relationship between autonomy and AMAs: namely, it is integral that a communicative relationship exist between a human and an independent AMA, whilst also respecting the authority that the human should have. Citing the Defense Science Board (2016), Tessier provides a concise definition of autonomy:

To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation. [25]

[25] Department of Defense, Defense Science Board (2016) Summer study on autonomy. Cited in: Tessier, C. (2017) 'Robots Autonomy: Some Technical Issues'. In: Lawless, W.F., Mittu, R., Sofge, D. & Russell, S. (eds.) Autonomy and Artificial Intelligence: A Threat or Savior? New York: Springer Publishing. p. 181.

This regard for the situation at hand is the reason I believe it key to first decide which ethical framework is best fitted: autonomy, after all, concerns the stage of acting upon deliberation within that ethical framework, and primarily the degree to which an agent should be able to enact such ethical decisions. Tessier seeks to give further clarity to this autonomous process of decision, stating:

autonomy involves a decision loop that builds decisions according to the current situation. This loop includes two main functions:

• the situation tracking function, which interprets the data gathered from the robot sensors and aggregates them — possibly with pre-existing information — so as to build, update and assess the current situation; the current situation includes the state of the robot, the state of the environment and the progress of the mission;
• the decision function, which calculates and plans relevant actions given the current situation and the mission goals; the actions are then translated into control orders to be applied to the robot actuators. [26]

[26] Ibid. p. 181.
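Tessier's two functions map naturally onto the classic sense-decide-act control loop, and a schematic rendering may help fix the idea. The sketch below is only that: the sensor readings, the situation fields and the small action repertoire are all invented for illustration, and no real control system is this simple.

import random

def track_situation(sensor_data: dict, prior: dict) -> dict:
    # Situation tracking function: interpret and aggregate sensor data,
    # possibly with pre-existing information, into a current situation
    # (state of the robot, of the environment, and of the mission).
    situation = dict(prior)
    situation["battery"] = sensor_data["battery"]
    situation["human_nearby"] = sensor_data["proximity"] < 1.0
    return situation

def decide(situation: dict, goal: str) -> str:
    # Decision function: plan a relevant action given the current
    # situation and the mission goal; in a real robot this would be
    # translated into control orders for the actuators.
    if situation["battery"] < 0.2:
        return "recharge"
    if situation["human_nearby"]:
        return "pause and yield"
    return f"continue: {goal}"

situation = {"mission": "deliver supplies"}
for _ in range(3):  # three iterations of the decision loop
    readings = {"battery": random.uniform(0.1, 1.0),
                "proximity": random.uniform(0.2, 3.0)}
    situation = track_situation(readings, situation)
    print(decide(situation, situation["mission"]))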
By using this two-function system, it is clear to see where the ethical framework and the question of autonomy divide in this essay: the synthesis between act utilitarianism and deontological boundaries (for certain things, e.g. the protection of children) belongs to the first function, whereas autonomy belongs to the second function, relating to questions of independence from the owner and the degree to which this independence can be acted upon. Enquiring further into this relationship between us as humans and AMAs, we must also decide what criteria need to be met for autonomy to be deemed successful. For non-harmful tasks, I believe autonomous actions must coincide with the owner's wishes, e.g. in a care duty, to count as successful. Focusing on these decisions, Tessier further explores the second function:

The decision function aims at calculating one or several actions and determining when and how these actions should be performed by the robot. This may involve new resource allocations to already planned actions (for example if the intended resources are missing), pre-existing alternate action model instantiation or partial replanning. The decision can be either a reaction or actions resulting from deliberation and reasoning. [27]

[27] Ibid. p. 185.

To achieve success, an agent must have enough independence to rectify any mis-planning of its own doing; this should undoubtedly be an essential feature. Thus, an agent must be able to work within the parameters of the mission at hand to maximise the utility of the results. However, a risk immediately arises: what if this maximisation goes against the owner's wishes? This potential for friction between human and AI carries the potential for serious consequences. Tessier replies to this concern, stating that:

Such circumstances may lead to the occurrence of a conflict between the operator and the robot and may result in inappropriate or even dangerous decisions, as the operator may decide on the basis of a wrong state. [28]

[28] Ibid. p. 189.

In pointing out the danger of human fallibility, Tessier identifies another issue with autonomy that needs to be addressed. I believe that an AMA must be deontologically wary in protecting human lives, i.e. guarding against harm to all humans. This would prevent robots from bringing harm to others, promoting protective measures, such as calling the police, only when necessary in the case of intruders. An immediate response to this (as I also covered earlier) could be to ban AMAs from using force in any scenario. However, I believe, and have shown, that AI ethics can promote safety if the correct ethical framework is in place.
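One way to picture the authority-sharing arrangement Tessier describes: operator commands take precedence by default, but the robot's deontological layer may refuse a command whose execution would, on the robot's possibly better-informed view of the state, endanger a human. The fragment below is again only an invented illustration of the idea; the "world view" dictionary and its single flag are hypothetical stand-ins for a full situation assessment.

def execute(command: str, robot_view: dict) -> str:
    # Shared authority with a safety veto: the operator's command
    # normally wins, but the robot refuses when its own situation
    # assessment says the command would endanger someone, and it
    # explains the refusal rather than silently disobeying.
    if robot_view.get("command_endangers_human", False):
        return (f"refused '{command}': current assessment indicates "
                f"a risk to a human; notifying operator")
    return f"executing '{command}'"

# The operator orders a reverse manoeuvre, unaware someone stands behind.
print(execute("reverse 2 metres", {"command_endangers_human": True}))
print(execute("reverse 2 metres", {"command_endangers_human": False}))

Notice that the veto is explained back to the operator: keeping the human in the loop when the robot overrides a command addresses Tessier's worry that either party may be deciding on the basis of a wrong state.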
This combined with enough autonomy to function independently , could potentia lly allow for a Utility - Bot that can be ethically vaible But, f urther justifications for this kind of robot are also needed first and foremost Tessier further proposes how the correct ethical framework co - aligned with the correct level of autonomy could be valuable to humans . Stating that : • Ethical reasoning is essential for certain types of robots as soon as they are equipped with decision functions (see examples above); • When authority is shared between the robot and the human operato r, the robot could suggest possible decisions to the operator together with supporting and opposing arguments for each of them considering various ethical frameworks that the operator might not even contemplate. • A robot could be “more ethical” than a hu man being (Sullins 2010). 29 These three essential points enable an understanding of just how effective AMAs could be in the field – if the correct ethical framework and level of autonomy can be decided. Allowing for enough autonomy to enable dialogue betwe en human and robot, would be a potentially fundamental aspect. Whilst this may arguably be a requirement in line with the first point, the potential for a wide array of assistance becomes possible. If we take away some level of human fallibility i.e. biasedness, then we can perhaps even act more “ethically” as the th ird point states. In acting “more ethical” than humans , Sullins in his article ‘RoboWarfare: can robots be more ethical than humans on the battlefield? ’ 30 wants to propose how this could be possible. Ultimately, this can be achieved via the fallibility that we as humans possess. Disengaging human emotions like hatre d , can prevent rash - decisions being made. A robot in the guidelines I have propo