RELATED WORKSHOPS >

  • WAISE - Held at SAFECOMP 2018, 2019

  • SafeAI - Held at AAAI 2019, 2020

  • AISafety

CONFERENCE >

IJCAI-19

August 10-16, 2019 | Macao, China

https://www.ijcai19.org/ 

Register here

© 2019 by AISafety.

Prof. Raja Chatila

Raja Chatila, IEEE Fellow, is Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne Université in Paris. He is director of the SMART Laboratory of Excellence on Human-Machine Interactions and former director of the Institute of Intelligent Systems and Robotics.


He has contributed to several areas of Artificial Intelligence and autonomous and interactive Robotics throughout his career and is the author of about 160 publications. He is a recipient of the IEEE Robotics and Automation Society Pioneer Award.


He is chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a member of the European Commission's High-Level Expert Group on AI, and a member of the Commission on the Ethics of Research in Digital Science and Technology (CERNA) in France.

Invited Talk: Towards Trustworthy Autonomous and Intelligent Systems

Computerized technical systems, used in critical applications such as aviation or power grid management, must be trustworthy to reliably deliver the expected correct service. The academic and industrial communities developing software-based systems have produced several techniques to achieve dependability and resilience, of which safety is a major attribute. Software validation and verification techniques, such as error detection and recovery mechanisms, model checking, detection of incorrect or incomplete system knowledge, and resilience to unexpected changes due to environment or system dynamics, have been developed and used.


However, as decisions usually devoted to humans are increasingly delegated to machines, sometimes running computational algorithms based on learning techniques using data and operating in complex and evolving environments, new issues have to be considered. Should the AI “black box” justify moving away from procedures that guarantee the trusted operation of the system? This is both an ethical and a technical question. Key features such as transparency, explainability, and accountability take on greater importance. What new technical and non-technical measures should then be taken in the design process and in the governance of these systems?
