RELATED WORKSHOPS >

  • WAISE - Held at SAFECOMP 2018, 2019

  • SafeAI - Held at AAAI 2019, 2020

  • AISafety

CONFERENCE >

IJCAI-19

August 10-16, 2019 | Macao, China

https://www.ijcai19.org/ 


© 2019 by AISafety.

Prof. Shlomo Zilberstein

Shlomo Zilberstein is Professor of Computer Science and Associate Dean for Research and Engagement at the University of Massachusetts, Amherst. He received his Ph.D. in Computer Science from the University of California, Berkeley. His research focuses on the foundations and applications of resource-bounded reasoning techniques, which allow complex systems to make decisions while coping with uncertainty, missing information, and limited computational resources. He is a Fellow of AAAI and recipient of the University of Massachusetts Chancellor's Medal, IFAAMAS Influential Paper Award, AAAI Distinguished Service Award, NSF CAREER Award, and best paper awards from ECAI (1998), AAMAS (2003), IAT (2005), MSDM (2008), ICAPS (2010), and AAAI (2017 Computational Sustainability Track). He is a former Editor-in-Chief of JAIR, President of ICAPS, and Councilor of AAAI.

INVITED TALK: AI Safety Based on Competency Models

AI safety is particularly critical for deployed autonomous systems that may ultimately operate without human supervision, such as autonomous vehicles or drones. Establishing the safety of such systems is particularly challenging because they are designed to operate in highly unstructured environments. No matter how much effort goes into the system design and how much data is available for training and testing, we must acknowledge the inherent need for such autonomous systems to operate based on partial, inaccurate models of the environment in which they are situated. There are no complete models of the real world. How can a system be safe when its model of the environment is imperfect?

 

We propose an approach to create competency models of the autonomous system that define how reliable it is in performing its assigned tasks under various conditions. The competency model of an AI system provides objective measures of the system's efficiency, failure rate, and the risks associated with failure. These measures could be conditioned on state features. For example, an autonomous vehicle could slide when driving on snow-covered roads with some probability that depends on its velocity. An AI system is considered conditionally competent when its human supervisor is satisfied that it should be allowed to operate with no human supervision under some stated conditions. While competency models are objective, the required level of competence for unsupervised operation is ultimately based on human judgment, which is inherently subjective. Delegating full autonomy to a system based on its established competency may therefore require the human supervisor to assume some risk.
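The idea of a conditional competency model can be sketched in code. This is a minimal illustration only, not the speaker's implementation: the class names, base failure rates, velocity scaling, and tolerance threshold below are all assumptions chosen to mirror the snow-covered-road example above.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """Operating conditions (state features) the competency is conditioned on."""
    road: str        # e.g. "dry" or "snow" (illustrative feature)
    velocity: float  # speed in m/s (illustrative feature)

class CompetencyModel:
    """Maps operating conditions to an estimated failure probability."""

    def __init__(self):
        # Hypothetical base failure rates per road condition (assumed values).
        self.base_failure = {"dry": 0.001, "snow": 0.02}

    def failure_prob(self, c: Condition) -> float:
        # Failure risk grows with velocity; unknown conditions get a
        # pessimistic default. Capped at 1.0.
        base = self.base_failure.get(c.road, 0.05)
        return min(1.0, base * (1.0 + c.velocity / 10.0))

    def competent(self, c: Condition, tolerance: float) -> bool:
        # Conditionally competent: estimated failure probability is
        # within the supervisor's stated risk tolerance.
        return self.failure_prob(c) <= tolerance

model = CompetencyModel()
print(model.competent(Condition("dry", 15.0), tolerance=0.01))   # True
print(model.competent(Condition("snow", 15.0), tolerance=0.01))  # False
```

Under this sketch, the same vehicle is competent for unsupervised driving on a dry road at 15 m/s but not on snow, matching the intuition that competency is conditional on state features rather than a single global property.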

 

The key technical questions associated with this approach are: (1) How to identify the conditions for AI safety and acquire the conditional competency models? (2) How to be sufficiently confident that the right conditions are satisfied during autonomous operation? (3) How to involve humans in the operation of an autonomous system when the conditions are not met, and how to maintain a safe state in the interim? And (4) How to adjust the conditions necessary for safe operation based on human feedback and operation logs?

 

In this talk, I will discuss ongoing work on developing competency models and a range of feedback mechanisms designed to adjust the boundary of safe autonomy. According to this approach, safety is rooted in human approval, allowing a system to operate autonomously—with the human's implied willingness to assume the risk that something unexpected may happen due to model inaccuracy, sensor limitations, or other factors. Inevitably, human judgment about the system's safety may not be perfect. Consequently, we complement human authorization with self-monitoring, allowing the AI system to reduce the boundary of autonomous operation when failures are detected in practice or anticipated by its planning and reasoning algorithms. Hence, the feedback that is used to adjust the conditions for safety goes both ways: from the human to the AI system and back. Our ultimate goal is to reduce the reliance on humans over time, but only after establishing the necessary level of competency and safety.
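The two-way feedback loop described above can be sketched as follows. This is an assumed design for illustration, not the implementation presented in the talk: the human widens the set of conditions approved for unsupervised operation, while self-monitoring narrows it when a failure is detected, so authorization flows from the human to the system and revocation flows back.

```python
class AutonomyBoundary:
    """Tracks which operating conditions are approved for unsupervised operation."""

    def __init__(self):
        self.approved = set()  # conditions currently within the boundary

    def human_authorize(self, condition: str):
        # Feedback from the human to the system: extend the boundary.
        self.approved.add(condition)

    def record_failure(self, condition: str):
        # Feedback from the system back to the human: a detected (or
        # anticipated) failure revokes the condition, which then
        # requires supervision until re-approved.
        self.approved.discard(condition)

    def may_operate(self, condition: str) -> bool:
        return condition in self.approved

boundary = AutonomyBoundary()
boundary.human_authorize("dry_road")
boundary.human_authorize("snow_road")
boundary.record_failure("snow_road")   # self-monitoring detects a failure
print(boundary.may_operate("dry_road"))   # True
print(boundary.may_operate("snow_road"))  # False
```

The asymmetry is deliberate: only human approval can grow the boundary, while the system itself can only shrink it, reflecting the talk's framing that safety is rooted in human approval.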
