ALAIT

Austrian Lab for AI Trust

The Mission

The Austrian Lab for AI Trust (ALAIT) seeks to strengthen society's trust in AI through transparency and information. ALAIT strives to enable key societal groups to adopt AI technologies responsibly, jointly setting ethical and quality standards for their use.

ALAIT is an R&D project commissioned by the Austrian Federal Ministry for Innovation, Mobility and Infrastructure, running until 2027.

Publications from the project can be found on the project homepage: https://science.apa.at/project/alait/

Impact and Benefits

ALAIT empowers key societal groups to engage with AI technologies in an informed manner and to develop socio-technical norms for the ethical, high-quality use of AI in their respective industries and fields. The project thereby contributes to building trust in AI along three dimensions:

Transparency: access to compact yet comprehensive information about AI technologies and their effects

Governance: standards for the use of AI, directly embedded in and translated into societal practice

AI literacy: the knowledge and skills needed to engage with current technologies

ALAIT stands on two central pillars:

  1. Developing a New AI Technology Assessment Method
    ALAIT introduces a novel, scientifically developed method for evaluating AI technologies in real-world contexts. The so-called ALAIT Technology Impact Assessment is an opportunity-risk assessment for AI technologies that expands on traditional Technology Assessment (TA). This new approach addresses the unique and complex challenge of assessing AI systems in concrete fields of application that are highly relevant to society.

  2. Initiating and promoting dialogue with relevant societal groups
    In line with participatory design principles, ALAIT is establishing a workshop format called the ALAIT Laboratory. Its goal is to foster dialogue, feedback loops, and collaborative evaluation processes with key stakeholders. The setting is designed to build public trust in AI through inclusive, transparent engagement. The ALAIT Laboratory is open to all interested industries.

A Quick Rundown of What We’re Building

  • ALAIT is developing a new scientific method for the technology assessment of AI-based technologies, testing it through implementation. The defining feature of this opportunity-risk assessment is its direct application to specific areas such as the health sector, social media, and public administration.

  • AI Trust Dossiers present analyses of selected AI technologies in specific areas of application, produced with the ALAIT Technology Impact Assessment. Each dossier summarizes the findings in a scientifically reliable and easy-to-understand format.

    Published dossiers can be found on the ALAIT homepage: https://science.apa.at/project/alait/

  • The ALAIT Laboratory is a workshop format designed to initiate and foster dialogue and knowledge sharing on artificial intelligence with key societal stakeholders. It focuses on discussing the opportunities and risks of AI in specific industries and their respective areas of application, building trust, and developing insights into socio-technical evaluation.

    Two pilot workshops will be held. The knowledge and materials from these workshops will be disseminated further via the Train-the-Trainer Network (see the following point).

  • To ensure the ALAIT Laboratory workshop format reaches a wide audience, a train-the-trainer program is being developed. It will equip multipliers with the necessary skills and materials, which they will then share across a network of interested organizations.

Led by Experts

The project, led by winnovation consulting gmbh, is a collaborative effort with leiwand.ai, Technische Universität Wien, and APA – Austria Presse Agentur, supported by an advisory board of high-profile experts.

The Project Advisory Board brings together a stellar group of experts from science, business, society, and politics to support ALAIT’s mission of fostering public trust in artificial intelligence.

We are thrilled to have the following distinguished members on board:
🔹 Peter Biegelbauer (AIT Austrian Institute of Technology)
🔹 Katja Bühler (VRVis GmbH)
🔹 Leonhard Dobusch (Universität Innsbruck)
🔹 Laura Drechsler (KU Leuven)
🔹 Philipp Kellmeyer (Universität Mannheim)
🔹 Marta Sabou (WU Wirtschaftsuniversität Wien)
🔹 Karin Sommer (Wirtschaftskammer Österreich)

Their combined expertise will guide us in tackling the challenges of AI governance, transparency, and literacy while promoting dialogue and responsible use of AI in diverse industries.

© Photos from Unsplash and ALAIT
