WHY DO WE NEED AN AI SAFETY LANDSCAPE?
Despite the increasing number of researchers and practitioners worldwide working on AI Safety, and the ubiquitous need for safe intelligent autonomous systems in our society, this field was only recently recognized as a legitimate domain, one that stretches the limits of the broader and more traditional discipline of safety engineering.
This initiative aims to define an AI Safety landscape that provides a “view” of the current needs, challenges, and state of the art and practice in this field, as a key step towards developing an AI Safety body of knowledge. Recognizing the need for an AI Safety landscape is pivotal for the following reasons:
More consensus is crucial: Achieving greater consensus on terminology and meaning is a key step towards aligning the understanding of engineering and socio-technical concepts, of existing and available theory and technical solutions, and of the gaps across the diverse subfields of AI Safety. Greater conceptual consensus can accelerate mutual understanding among the multiple disciplines working on how to actually create, test, deploy, operate, and evolve safe AI-based systems, while also ensuring awareness of broader strategic, ethical, and policy issues. Any consensus, of course, involves trade-offs and compromises that we must make.
Focus on generally accepted knowledge: "Generally accepted" means that the knowledge described applies to most AI Safety problems, while still expecting that some considerations will be more relevant to certain applications or algorithms. We also aim to be somewhat forward-looking in the different interpretations, taking into consideration not only what is generally accepted today but also what we expect to be generally accepted over a longer timeframe, with the dawn of systems whose cognitive capabilities approach those of humans.
WHAT CONCRETE OUTCOMES DO WE TARGET?
The main goal of this initiative is to bring together the most relevant initiatives and leaders interested in developing a map of AI Safety knowledge, seeking consensus on structuring and outlining a generally accepted landscape for AI Safety.
The core expected outcome is a single document identifying and describing a landscape of AI Safety: the set of subfields in which knowledge is required, not only within the engineering discipline but also in other socio-technical disciplines, including an outline of needs, challenges, practices, and gaps. The goal of this initiative is not to inventory everything related to AI Safety, but to capture the core knowledge.
One important ambition of this initiative is to align and synchronize the proposed activities and outcomes with other related initiatives. Together with them, we aim to evolve this landscape towards a more formal form, such as a body of knowledge.
As a starting point for an efficient discussion, we propose a preliminary set of landscape categories, which could be refined during the process of consensus-building and development of an AI Safety landscape.