ABOUT US

This initiative aims to define an AI Safety landscape that provides a "view" of the field's current needs, challenges, and state of the art and practice, as a key step towards developing an AI Safety body of knowledge.

CONTEXT

In the last decade, there has been growing concern about the risks of Artificial Intelligence (AI). Safety is becoming increasingly relevant as humans are progressively removed from the decision and control loop of intelligent, learning-enabled machines. In particular, the technical foundations and assumptions on which traditional safety engineering principles are based are inadequate for systems in which AI algorithms, in particular Machine Learning (ML) algorithms, interact with the physical world at increasingly high levels of autonomy. We must also consider the connection between the safety challenges posed by present-day AI systems and more forward-looking research on more capable future AI systems, up to and including Artificial General Intelligence (AGI).

MISSION

The main goal of the AI Safety Landscape is to bring together the most relevant initiatives and leaders interested in developing a map of AI Safety knowledge, and to seek consensus on the structure and outline of a generally acceptable landscape for AI Safety. An important ambition of this initiative is to align and synchronize the proposed activities and outcomes with other related initiatives. Together with these initiatives, we expect to evolve the landscape into a more formal form, such as a body of knowledge.

The Consortium on the Landscape of AI Safety (CLAIS) is a global not-for-profit organisation that oversees the production and use of the AI Safety Landscape.

Why do we need an AI Safety Landscape?

Despite the increasing number of researchers and practitioners worldwide working on AI Safety, and the ubiquitous need for safe intelligent autonomous systems in our society, the field has only recently been recognized as a legitimate domain, one that stretches the limits of the broader and more traditional discipline of safety engineering.

Recognizing the need for an AI Safety Landscape is pivotal for the following reasons:

  • More consensus is crucial: Achieving greater consensus on terminology and meaning is a key step towards aligning the understanding of engineering and socio-technical concepts, of existing theory and technical solutions, and of the gaps across the diverse subfields of AI Safety. Greater conceptual consensus can accelerate mutual understanding among the many disciplines working on how to create, test, deploy, operate and evolve safe AI-based systems, while also ensuring awareness of broader strategic, ethical and policy issues. Any consensus, of course, involves trade-offs and compromises.

  • Focus on generally accepted knowledge: "Generally accepted" means that the knowledge described applies to most AI Safety problems, while acknowledging that some considerations will be more relevant to certain applications or algorithms. We also aim to be somewhat forward-looking, taking into consideration not only what is generally accepted today but also what we expect to be generally accepted over a longer timeframe, with the dawn of systems whose cognitive capabilities approach those of humans.

CONTACT
  • Twitter
  • LinkedIn

© 2020 by CLAIS (Consortium on the Landscape of AI Safety)