Studies in Systems, Decision and Control, Volume 117

Hussein A. Abbass, Jason Scholz, Darryn J. Reid (Editors)

Foundations of Trusted Autonomy

Series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland (e-mail: kacprzyk@ibspan.waw.pl)

The series "Studies in Systems, Decision and Control" (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control: quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output.

More information about this series at http://www.springer.com/series/13304

Editors:
Hussein A. Abbass, School of Engineering and IT, University of New South Wales, Canberra, ACT, Australia
Jason Scholz, Defence Science and Technology Group, Joint and Operations Analysis Division, Edinburgh, SA, Australia
Darryn J. Reid, Defence Science and Technology Group, Joint and Operations Analysis Division, Edinburgh, SA, Australia

ISSN 2198-4182 / ISSN 2198-4190 (electronic)
Studies in Systems, Decision and Control
ISBN 978-3-319-64815-6 / ISBN 978-3-319-64816-3 (eBook)
https://doi.org/10.1007/978-3-319-64816-3
Library of Congress Control Number: 2017949139

© The Editor(s) (if applicable) and The Author(s) 2018. This book is an open access publication.

Open Access: This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper.

This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

To a future where humans and machines live together in harmony.

Foreword

Technology-dependent industries and agencies, such as Defence, are keenly seeking game-changing capability in trusted autonomous systems. However, behind the research and development of these technologies is the story of the people, the collaboration and the potential of the technology. The motivation for Defence in sponsoring the open publication of this exciting new book is to accelerate Australia's Defence science and technology in Trusted Autonomous Systems to a world-class standard. This journey began in July 2015 with a first invitational symposium hosted in Australia, with some of the world-class researchers featured in this book in attendance. Since that time, engagement across the academic sector, both nationally and internationally, has grown steadily. In the near future in Australia, we look forward to establishing a Defence Cooperative Research Centre that will further develop our national research talent and sow the seeds of a new generation of systems for Defence.

Looking back over the last century at the predictions made about general-purpose robotics and AI in particular, it seems appropriate to ask "so where are all the robots?" Why don't we see them more embedded in society? Is it because they cannot deal with the inevitable unpredictability of open environments (in the case of the military, situations that are contested)? Is it because these machines are simply not smart enough? Or is it because humans cannot trust them? For the military, these problems may well be the hardest challenges of all, as failure may come with high consequences. This book, then, appropriately in the spirit of foundations, examines the topic with an open and enquiring flavour, teasing apart critical philosophical, scientific, mathematical, application and ethical issues, rather than assuming a stance of advocacy.

The full story has not yet been written, but it has begun, and I believe this contribution will take us forward. My thanks in particular go to the authors and the editors: Prof. Hussein A. Abbass at the University of New South Wales, for his sustained effort and art of gentle persuasion, and my own Defence scientists, Research Leader Dr. Jason Scholz and Principal Scientist Dr. Darryn J. Reid.

Canberra, Australia
April 2017

Dr. Alex Zelinsky
Chief Defence Scientist of Australia

Preface

Targeting scientists, researchers, practitioners and technologists, this book brings together contributions from like-minded authors to offer the basics, the challenges and the state of the art of trusted autonomous systems in a single volume. On the one hand, the field of autonomous systems has been focusing on technologies including robotics and artificial intelligence.
On the other hand, the trust dimension has been studied by social scientists, philosophers, human factors specialists and human-computer interaction researchers. This book draws threads from these diverse communities to blend the technical, social and practical foundations of the emerging field of trusted autonomous systems.

The book is structured in three parts. Each part contains chapters written by eminent researchers, supplemented with short chapters written by high-calibre and outstanding practitioners and users in this field. The first part covers foundational artificial intelligence technologies. The second part focuses on the trust dimension and covers philosophical, practical and technological perspectives on trust. The third part brings together advanced topics necessary to create future trusted autonomous systems.

The book is written by researchers and practitioners to cover different types of readership. It contains chapters that showcase scenarios to bring to practitioners the opportunities and challenges that autonomous systems may impose on society; examples of these perspectives include challenges in Cyber Security, Defence and Space Operations. But it is also a useful reference for graduate students in engineering, computer science, cognitive science and philosophy; examples of topics covered include Universal Artificial Intelligence, Goal Reasoning, Human-Robot Interaction, Computational Motivation and Swarm Intelligence.

Canberra, Australia
Hussein A. Abbass
Edinburgh, Australia
Jason Scholz
Edinburgh, Australia
Darryn J. Reid
March 2017

Acknowledgements

The editors wish to thank all authors for their contributions to this book and for their patience during the development of the book. Special thanks go to the Defence Science and Technology Group, Department of Defence, Australia, for funding this project to make the book open access. Thanks are also due to the University of New South Wales in Canberra (UNSW Canberra) for the time taken by the first editor on this book project.

Contents

1 Foundations of Trusted Autonomy: An Introduction (Hussein A. Abbass, Jason Scholz and Darryn J. Reid)

Part I Autonomy
2 Universal Artificial Intelligence (Tom Everitt and Marcus Hutter)
3 Goal Reasoning and Trusted Autonomy (Benjamin Johnson, Michael W. Floyd, Alexandra Coman, Mark A. Wilson and David W. Aha)
4 Social Planning for Trusted Autonomy (Tim Miller, Adrian R. Pearce and Liz Sonenberg)
5 A Neuroevolutionary Approach to Adaptive Multi-agent Teams (Bobby D. Bryant and Risto Miikkulainen)
6 The Blessing and Curse of Emergence in Swarm Intelligence Systems (John Harvey)
7 Trusted Autonomous Game Play (Michael Barlow)

Part II Trust
8 The Role of Trust in Human-Robot Interaction (Michael Lewis, Katia Sycara and Phillip Walker)
9 Trustworthiness of Autonomous Systems (S. Kate Devitt)
10 Trusted Autonomy Under Uncertainty (Michael Smithson)
11 The Need for Trusted Autonomy in Military Cyber Security (Andrew Dowse)
12 Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach (Peter D. Bruza and Eduard C. Hoenkamp)
13 Learning to Shape Errors with a Confusion Objective (Jason Scholz)
14 Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI (Justin W. Hart, Sara Sheikholeslami, Brian Gleeson, Elizabeth Croft, Karon MacLean, Frank P. Ferrie, Clément Gosselin and Denis Laurandeau)

Part III Trusted Autonomy
15 Intrinsic Motivation for Truly Autonomous Agents (Ron Sun)
16 Computational Motivation, Autonomy and Trustworthiness: Can We Have It All? (Kathryn Merrick, Adam Klyne and Medria Hardhienata)
17 Are Autonomous-and-Creative Machines Intrinsically Untrustworthy? (Selmer Bringsjord and Naveen Sundar Govindarajulu)
18 Trusted Autonomous Command and Control (Noel Derwort)
19 Trusted Autonomy in Training: A Future Scenario (Leon D. Young)
20 Future Trusted Autonomous Space Scenarios (Russell Boyce and Douglas Griffin)
21 An Autonomy Interrogative (Darryn J. Reid)

Index

Contributors

Hussein A. Abbass, School of Engineering and Information Technology, University of New South Wales, Canberra, ACT, Australia
David W. Aha, Navy Center for Applied Research in AI, US Naval Research Laboratory, Washington DC, USA
Michael Barlow, School of Engineering and IT, UNSW, Canberra, Australia
Russell Boyce, University of New South Wales, Canberra, Australia
Selmer Bringsjord, Rensselaer AI & Reasoning (RAIR) Lab, Department of Cognitive Science, Department of Computer Science, Rensselaer Polytechnic Institute (RPI), Troy, NY, USA
Peter D. Bruza, Information Systems School, Queensland University of Technology (QUT), Brisbane, Australia
Bobby D. Bryant, Department of Computer Sciences, University of Texas at Austin, Austin, USA
Alexandra Coman, NRC Research Associate at the US Naval Research Laboratory, Washington DC, USA
Elizabeth Croft, Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
Noel Derwort, Department of Defence, Canberra, Australia
Andrew Dowse, Department of Defence, Canberra, Australia
Tom Everitt, Australian National University, Canberra, Australia
Frank P. Ferrie, Department of Electrical and Computer Engineering, McGill University, Montreal, Canada
Michael W. Floyd, Knexus Research Corporation, Springfield, VA, USA
Brian Gleeson, Department of Computer Science, University of British Columbia, Vancouver, Canada
Clément Gosselin, Department of Mechanical Engineering, Laval University, Quebec City, Canada
Naveen Sundar Govindarajulu, Rensselaer AI & Reasoning (RAIR) Lab, Department of Cognitive Science, Department of Computer Science, Rensselaer Polytechnic Institute (RPI), Troy, NY, USA
Douglas Griffin, University of New South Wales, Canberra, Australia
Medria Hardhienata, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Justin W. Hart, Department of Computer Science, University of Texas at Austin, Austin, USA; Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
John Harvey, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Eduard C. Hoenkamp, Information Systems School, Queensland University of Technology (QUT), Brisbane, Australia; Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
Marcus Hutter, Australian National University, Canberra, Australia
Benjamin Johnson, NRC Research Associate at the US Naval Research Laboratory, Washington DC, USA
S. Kate Devitt, Robotics and Autonomous Systems, School of Electrical Engineering and Computer Science, Faculty of Science and Engineering, Institute for Future Environments, Faculty of Law, Queensland University of Technology, Brisbane, Australia
Adam Klyne, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Denis Laurandeau, Department of Electrical Engineering, Laval University, Quebec City, Canada
Michael Lewis, Department of Information Sciences, University of Pittsburgh, Pittsburgh, PA, USA
Karon MacLean, Department of Computer Science, University of British Columbia, Vancouver, Canada
Kathryn Merrick, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
Risto Miikkulainen, Department of Computer Sciences, University of Texas at Austin, Austin, USA
Tim Miller, Department of Computing and Information Systems, University of Melbourne, Melbourne, VIC, Australia
Adrian R. Pearce, Department of Computing and Information Systems, University of Melbourne, Melbourne, VIC, Australia
Darryn J. Reid, Defence Science and Technology Group, Joint and Operations Analysis Division, Edinburgh, SA, Australia
Jason Scholz, Defence Science and Technology Group, Joint and Operations Analysis Division, Edinburgh, SA, Australia
Sara Sheikholeslami, Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
Michael Smithson, Research School of Psychology, The Australian National University, Canberra, Australia
Liz Sonenberg, Department of Computing and Information Systems, University of Melbourne, Melbourne, VIC, Australia
Ron Sun, Cognitive Sciences Department, Rensselaer Polytechnic Institute, Troy, NY, USA
Katia Sycara, Robotics Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
Phillip Walker, Department of Information Sciences, University of Pittsburgh, Pittsburgh, PA, USA
Mark A. Wilson, Navy Center for Applied Research in AI, US Naval Research Laboratory, Washington DC, USA
Leon D. Young, Department of Defence, War Research Centre, Canberra, Australia

Chapter 1
Foundations of Trusted Autonomy: An Introduction
Hussein A. Abbass, Jason Scholz and Darryn J. Reid

1.1 Autonomy

To aid in understanding the chapters to follow, a general conceptualisation of autonomy may be useful. Foundationally, autonomy is concerned with an agent that acts in an environment. This definition is insufficient on its own, however, because autonomy also requires persistence (or resilience) against the hardships that the environment imposes on the agent; an agent whose first action ends in its demise would not demonstrate autonomy. The themes of autonomy then include agency, persistence and action. Action may be understood as the utilisation of capability to achieve intent, given awareness.[1]

[Figure: the action trinity of intent, capability and awareness in mutual tension.]

The action trinity of intent, capability and awareness is founded on a mutual tension, illustrated in the figure above. If "capability" is defined as anything that changes the agent's awareness of the world (usually by changing the world), then the error between the agent's awareness and intent drives the choice of capability so as to reduce that error. Or, expressed compactly, an agent seeks achievable intent.
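Read operationally, that compact statement behaves like an error-driven feedback loop. The sketch below is a minimal illustration of this reading, not code from any chapter: the class and method names are invented, and intent and awareness are reduced to single numbers so that the error is simply their difference.

```python
# Minimal sketch (invented names): the intent-awareness error drives
# capability choice, and acting changes the world and hence awareness.

class Agent:
    def __init__(self, intent: float):
        self.intent = intent        # desired state of the world
        self.awareness = 0.0        # current belief about the world

    def perceive(self, world: float) -> None:
        self.awareness = world      # awareness updated from the world

    def choose_capability(self) -> float:
        # Capability choice is driven by the error between intent and awareness.
        return self.intent - self.awareness

    def act(self, world: float) -> float:
        # Exercising capability changes the world (and so future awareness).
        return world + 0.5 * self.choose_capability()

world, agent = 0.0, Agent(intent=10.0)
for _ in range(20):
    agent.perceive(world)
    world = agent.act(world)

print(round(world, 2))  # converges towards 10.0: "an agent seeks achievable intent"
```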
The embodiment of this action trinity in an entity, itself separated from the environment but existing within it and interacting with it, is what we term an agent, an autonomy, or an intelligence.

[1] D.A. Lambert and J.B. Scholz, "Ubiquitous Command and Control", Intelligent Decision Technologies, vol. 1, no. 3, July 2007, pp. 157–173, IOS Press, Amsterdam, The Netherlands.

So it is fitting that Chapter 2, by Tom Everitt and Marcus Hutter, opens with the topic Universal Artificial Intelligence (UAI): Practical Agents and Fundamental Challenges. Their definition of UAI involves two computational models, Turing machines: one representing the agent and one the environment, with actions by the agent on the environment (capability), signals from the environment to the agent (awareness), and, among those signals, a reward (intent achievement), all subject to uncertainty. The "will" that underpins the intent of this agent is the maximisation of reward. This machine intelligence is expressible, astoundingly, as a single equation. Named AIXI, it defines a theoretically optimal agent in terms of reward maximisation.
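For reference, that equation, in the standard form given by Hutter (quoted here from the wider UAI literature rather than from the chapter itself), selects the action a_t that maximises expected future reward under a Solomonoff-style mixture over all environment programs q consistent with the interaction history, where U is a universal Turing machine, l(q) is the length of program q, the o_k and r_k are observations and rewards, and m is the horizon:

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_t + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```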
Though uncomputable, the construct provides a principled approach to considering a practical artificial intelligence and its theoretical limitations. Everitt and Hutter guide us through the development of this theory and the approximations necessary. They then examine the critical question of whether we can trust this machine, given the possibility of self-modification and of reward counterfeiting, and they consider possible means to manage both. They also consider agent death and self-preservation. Death for this agent involves the cessation of action, and might be represented as an absorbing zero-reward state. They define both death and suicide in order to assess the agent's self-preservation drive, which has implications for the safety of autonomous systems. UAI provides a fascinating theoretical foundation for an autonomous machine and indicates other definitional paths for future research.

In this action trinity of intent, capability and awareness, it is intent that is in some sense foremost. Driven by an underlying will to seek utility, survival or some other motivation, intent establishes future goals. In Chapter 3, Benjamin Johnson, Michael Floyd, Alexandra Coman, Mark Wilson and David Aha consider Goal Reasoning and Trusted Autonomy. Goal Reasoning allows an autonomous system to respond more successfully to unexpected events or changes in the environment. In relation to UAI, the formation of goals offers the massive benefit of exponential improvement in comparison with random exploration, so goals are important computationally for achieving practical systems. The authors present two different models of Goal Reasoning: Goal-Driven Autonomy and the Goal Lifecycle. They also describe the Situated Decision Process (SDP), which manages and executes goals for a team of autonomous vehicles. The articulation of goals also matters for human trust: behaviours can be complex and hard to explain, but goals may be easier to explain, because behaviour (capability acting on the environment) is driven by goals (and their difference from awareness). Machine reasoning about goals also provides a basis for the "mission command" of machines: the expression of intent from one agent to another, with the expression of a capability (e.g. a plan) in return, provides for a higher level of control, with a "human-on-the-loop" overseeing more machines than would be possible with a "human-in-the-loop". Here the authors touch on "rebellion", the refusal of an autonomous system to accept a goal expressed to it. This is an important trust requirement when accepting the goal would violate critical conditions that the machine is aware of, such as the legality of the action.

The ability to reason with and explain goals (intent) is complemented in Chapter 4 by consideration of reasoning about and explanation of plans (capability). Tim Miller, Adrian R. Pearce and Liz Sonenberg examine social planning for trusted autonomy. Social planning is machine planning in which the planning agent maintains and reasons with an explicit model of the humans with which it interacts, including the humans' goals (intent), intentions (in effect their plans, or in general their capability to act), beliefs (awareness), and potential behaviours. The authors combine recent advances that allow an agent to act in a multi-agent world while considering the other agents' actions, together with a Theory of Mind about the other agents' beliefs, to provide a tool for social planning. They present a formal model for multi-agent epistemic planning, and they avoid the significant processing that would have been required if each agent's perspective were a mode in modal logic by casting the problem as a non-deterministic planning task for a single agent: essentially, treating the actions of other agents in the environment as non-deterministic outcomes (with probabilities that are not resolved until after the action) of the single agent's own actions. This approach looks very promising for computable cooperative and competitive planning in human and machine groups.
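The following toy sketch illustrates that reduction in spirit only; the domain, names and search procedure are invented here, not taken from the chapter. Another agent moves on a line at the same time as ours, its move is treated as a non-deterministic outcome of our own action, and we search for a first action that still succeeds in every branch (a "strong" plan in the non-deterministic planning sense).

```python
# Toy sketch (invented domain): the other agent's simultaneous move is folded
# into the non-deterministic outcomes of the planning agent's own action.

MY_MOVES = {"left": -1, "stay": 0, "right": +1}
OTHER_MOVES = [-1, 0, +1]   # the other agent's move, unresolved until after we act

def outcomes(state, my_move):
    """All joint successor states for one of our actions."""
    me, other = state
    return [(me + MY_MOVES[my_move], other + d) for d in OTHER_MOVES]

def strong_first_action(state, goal, depth):
    """First action of a plan that reaches `goal` for us in *every* branch,
    or None; the continuation depends on which branch is then observed."""
    if state[0] == goal:
        return []                       # already at the goal: empty plan
    if depth == 0:
        return None
    for move in MY_MOVES:
        branches = [strong_first_action(s, goal, depth - 1)
                    for s in outcomes(state, move)]
        if all(b is not None for b in branches):
            return [move]               # succeeds whatever the other agent does
    return None

print(strong_first_action((0, 5), goal=2, depth=2))  # ['right']
```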
Considering autonomy as will-driven (e.g. for reward or survival) from Chapter 2, and autonomy as goal-directed and plan-achieving (simplifying computation and explanation) from Chapters 3 and 4, what does autonomy mean in a social context? The US Defense Science Board[2] signals the need for a social perspective:

"it should be made clear that all autonomous systems are supervised by human operators at some level, and autonomous systems' software embodies the designed limits on the actions and decisions delegated to the computer. Instead of viewing autonomy as an intrinsic property of an unmanned vehicle in isolation, the design and operation of autonomous systems needs to be considered in terms of human-system collaboration."

[2] U.S. Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems, July 2012, pp. 3–5.

The Defense Science Board report goes on to recommend "that the DoD abandon the use of 'levels of autonomy' and replace them with an autonomous systems reference framework". Given this need for supervision and eventual human-system collaboration, perhaps a useful conceptualisation of autonomy might borrow from psychology, as illustrated in the following figure.

[Figure: autonomy as self-sufficiency and self-directedness, situated in a progression of social maturity.]

Here, a popular definition[3] of autonomy as "self-sufficient and self-directed" is situated in a setting of social maturity and extended to include awareness of self. Covey[4] popularises a maturity progression from dependence (e.g. on parents) via independence to interdependence. The maladjusted path is the progression from dependence to co-dependence. Co-dependent agents may function, but they lack resilience: a compromise to one agent affects the other(s), and thus directly affects each agent's own survival or utility. For the interdependent agent cut off from communication, there remains the fall-back state of independence.

[3] J.M. Bradshaw, "The Seven Deadly Myths of Autonomous Systems", IEEE, 2013.
[4] S.R. Covey, The Seven Habits of Highly Effective People, Free Press, 1989.

So, if this might be a preferred trajectory for machine autonomy, what are the implications of a strong and independent autonomy? In Chapter 5, Bobby D. Bryant and Risto Miikkulainen consider a neuroevolutionary approach to adaptive multi-agent teams. In their formulation, a similar and significant capability for every agent is posed: they propose a collective in which each agent has a sufficient breadth of skills to allow a self-organized division of labour, so that the collective behaves as if it were a heterogeneous team. This division is dynamic in response to conditions and, being composed of autonomous agents, occurs without direction from a human operator; indeed, in general, humans might be members of the team. This potentially allows for massively scalable, resilient autonomous systems with graceful degradation: losing any agent entails the loss of its role(s), which might be taken up by any other agent(s), all of which have the requisite skills (capability). Artificial neural networks are used to learn such teams, with examples given in the setting of strategy games.

Furthering the theme of social autonomy, in Chapter 6 John Harvey examines both the blessing and curse of emergence in swarm intelligence systems. We might consider the agents composing a swarm intelligence to have "similar" (ranging to identical), but not necessarily "significant", capabilities, with the implication that resilience is a property of the collective rather than of the individual. Harvey notes that swarm intelligence may fall within the category of the complexity and self-organisation spectrum of emergence characterised as weakly predictable. Swarms do not require centralised control and may be formed from simple agent interactions, offering the potential for graceful degradation: the loss of some individuals may only weakly degrade the effect of the collective. These and other "blessings" of swarm intelligence presented by the author are tempered by the shortcomings of weak predictability and controllability. Indeed, if the agents are identical, systematic failure is also possible, as any design fault in an individual is replicated across the swarm.
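Both sides of that trade-off show up even in a toy swarm, sketched below with invented parameters and no connection to Harvey's own examples: twenty agents each follow the identical local rule "move a little toward the average position of the others", and cohesion emerges with no central controller; equally, a single fault in that shared rule would affect every agent in the same way.

```python
# Toy 1-D swarm (illustrative only): identical simple rules, no central control.
import random

positions = [random.uniform(0, 100) for _ in range(20)]

def step(positions, gain=0.2):
    """Each agent nudges toward the mean position of the other agents."""
    new = []
    for i, p in enumerate(positions):
        others = positions[:i] + positions[i + 1:]
        centre = sum(others) / len(others)
        new.append(p + gain * (centre - p))   # the same rule in every agent
    return new

for _ in range(50):
    positions = step(positions)

print(f"spread after 50 steps: {max(positions) - min(positions):.4f}")
# Cohesion emerges (spread is near 0); losing a few agents degrades it only
# weakly, but a replicated fault (e.g. gain=-0.2) would disperse the whole swarm.
```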
The author suggests that a future direction for research, related to the specification of trust properties, might follow from the intersection of liveness properties based on formal methods and safety properties based on Lyapunov measures. Swarm intelligence also brings into question the nature of intelligence itself: perhaps intelligence arises as an emergent property of interacting simpler cognitive elements.

If a social goal for autonomy is collaboration, then cooperation and competition (e.g. for resources) are important. Furthermore, interdependent autonomy must include machines capable of social conflict. Conflict exists where there is mutually exclusive intent; that is, where the intent of one agent can be achieved only if the intent of the other is not. Machine agents need to recognise and operate under these conditions. A structured approach to framing competition and conflict is found in games. In Chapter 7, Michael Barlow examines trusted autonomous game play. Barlow explains four defining traits of games: a goal (intent), rules (bounds on action), a feedback system (awareness), and voluntary participation. Voluntary participation is an exercise of agency, in which an agreement to act within those conditions is accepted. Barlow examines both perspectives: autonomy for games and games for autonomy. Autonomous entities in games are usually termed AIs, and may serve a training purpose or simply provide an engaging user experience; improving AIs may thus improve human capabilities. Autonomous systems can also benefit from games, as games provide a closed-world construct for machine reasoning and learning about scenarios.

These chapters take us on a brief journey through some unique perspectives, from autonomy as individual computational intelligence through to collective machine diversity.

1.2 Trust

Trust is a ubiquitous concept. We have all experienced it in one way or another, yet it holds so many components that we may never converge on a single, precise and concise definition of the concept. Yet the massive amount of literature on trust is evidence that it is an important topic for scientific inquiry.