The mission of NoLeFa is to support the implementation of the EU AI Act.

What is the EU AI Act?

The European Union Artificial Intelligence Act aims to make AI safer through risk-based requirements on governance, testing, and transparency.

The EU’s AI Act was adopted in June 2024 and establishes a phased regulatory framework for artificial intelligence. This framework requires AI providers (and in some cases deployers) within the EU to comply with rules that aim to safeguard health, safety, and fundamental rights.

Implementation of the AI Act will begin in February 2025, first with the rules on prohibited applications, and will culminate in August 2027, when high-risk AI applications (in regulated sectors such as health, finance, and public administration) that could harm human health and fundamental rights will be subject to its standards.

You can find out more about the AI Act here, where we explain it in more detail.

leiwand.ai is part of the two-year NoLeFa-84 Project, which aims to support the rollout of the EU AI Act by laying the groundwork for AI testing facilities on behalf of the EU.

We are doing so as part of a consortium led by the French national research institute Inria, together with our expert partners CAIRNE, LNE, Piccadilly Labs, and Numalis.

NoLeFa

These “Union Testing Facilities”, mandated by Article 84 of the AI Act, will play a crucial role in upholding AI safety standards, in part by supporting market surveillance authorities. Tasked with monitoring AI Act compliance, these authorities report annually to the Commission and to national competition authorities on potential issues or prohibited practices, and can propose joint measures to ensure compliance and identify violations.

The Roadmap

AI Act Analysis and Harmonised Standards

The project will examine the technical implications of the EU AI Act and translate its principles into practical technical standards. Backed by legal experts and societal stakeholders, it will work with CEN-CENELEC JTC 21 and ISO/IEC to ensure global interoperability, focusing on areas like robustness, transparency, and logging for high-risk AI systems. The project aims to raise awareness, expand Europe’s pool of technical experts, and guide them through AI standardisation, while also contributing to drafting codes of practice for general-purpose AI models during the transition period.
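As a purely illustrative aside, the sketch below shows one way the logging requirement for high-risk AI systems could look in practice: a small hook that records timestamped, machine-readable traces of each inference for later audit. Everything here (the function name, the event fields, the model identifier) is our own assumption for illustration, not a draft standard or the project's tooling.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_inference_event(model_id: str, input_digest: str, output_summary: str) -> None:
    """Record one timestamped, machine-readable trace of a single inference."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "input_digest": input_digest,      # e.g. a hash of the input, not the raw data
        "output_summary": output_summary,  # e.g. predicted label and confidence
    }
    logger.info(json.dumps(event))

# Hypothetical usage: names and values are illustrative only.
log_inference_event("credit-scoring-v3", "sha256:ab12...", "label=approve score=0.91")
```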

Coordination, advice, and training for authorities

The project will create an online collaboration network with national authorities and the AI Office to coordinate efforts for market surveillance. It will organize workshops in 2025 and 2026 to align stakeholders, gather feedback on testing facilities, and share AI Act implementation updates. Additionally, the project will provide independent technical advice, world-class expertise, and specialized training sessions on AI testing, standards, and methods during its second year.

R&D and testing services

The project aims to create a unified suite of testing tools for AI, covering evaluation metrics for NLP, computer vision, bias and robustness assessment, dataset quality checks, AI-specific vulnerabilities, and logging/monitoring functions. These tools will support systematic and reproducible testing for market surveillance, with quarterly GitHub releases starting January 2025 and a stable version expected by fall 2025. The framework will be validated through case studies involving high-risk AI systems provided by volunteer contributors.
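To give a feel for what one entry in such a suite could look like, here is a minimal sketch of a single bias metric, the demographic parity difference, computed from predictions and group labels. The function name and data layout are assumptions made for illustration; they are not the project's actual API.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups (0 = parity)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: group "a" receives positives at rate 0.75, group "b" at 0.25.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```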

Next: The Algorithmic Bias Risk Radar (ABRRA)