The AI landscape

“…although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI.”, OECD AI Principles, 2019.

“As digital technology becomes an ever more central part of every aspect of people’s lives, people should be able to trust it. Trustworthiness is also a prerequisite for its uptake.”, European Commission White Paper on Artificial Intelligence, 2020.

“Without this trust, organisations can be reluctant to invest in AI, because of concerns over whether these systems will work as intended (i.e. whether they are effective, accurate, reliable, or safe).”, CDEI roadmap to an effective AI assurance ecosystem, 2021.

“Cultivating trust and communication about how to understand and manage the risks of AI systems will help create opportunities for innovation and realize the full potential of this technology.”, NIST Draft AI Risk Management Framework, 2022.

“For the AI assurance system to support a risk-based framework, it needs tools and services catering to different levels of assurance needs.”, TechUK Governance for an AI future: Enabling innovation and securing trust, 2022.

DEVELOPMENTS IN AI ASSURANCE SINCE 2017

THE CURRENT LANDSCAPE

The publication of the European Commission’s proposed Artificial Intelligence Act (AI Act) in April 2021 marked a key milestone in AI governance. Once it enters into force, the Act will be the first general legal framework on AI anywhere in the world. The Commission has expressed hopes that the Act will set the global agenda as a gold standard for future AI regulation. Although the Act uses the language of auditing rather than assurance, it introduces several requirements that businesses will need to operationalise through assurance tools and services. 

Meanwhile, the UK government is soon expected to publish its much-anticipated White Paper, which will set out its own regulatory approach. This follows the recent publication of its roadmap for an effective AI assurance ecosystem. In the United States, regulatory efforts have so far been more hands-off, taking an agency-by-agency approach rather than pursuing general regulation. In Japan, the Ministry of Economy, Trade and Industry (METI) has recently updated its voluntary guidelines on AI governance.

THE EU AI ACT 

The draft Act lays down rules for AI systems which are developed or used in the EU. It takes a risk-based regulatory approach which distinguishes between AI systems deemed to present unacceptable risks – which are prohibited altogether – and systems carrying different levels of acceptable risk which are permitted. For “high-risk” systems, a range of mandatory requirements (including a conformity assessment) must be met before a system can be placed on the market. 

When it comes to assurance, systems categorised as high-risk will carry significant obligations. Unless already covered by existing rules, a high-risk AI system will need to undergo conformity assessment and be registered in an EU-wide database. After this, providers must establish post-market monitoring plans to document the performance of their systems throughout their lifetimes.

As stated in the draft Act itself, standards will play a key role in providing the technical solutions needed to ensure compliance, and will be a means for providers to demonstrate conformity. Although many relevant standards already exist, new harmonised EU standards will be developed. The Act will also complement existing and forthcoming EU safety regulation. 

At present, the AI Act remains a proposal. It will be the subject of complex legislative negotiations, and is not expected to be finalised before 2023. The text is now being discussed and amended by the Council and by the Parliament, which recently published a draft report by the two MEPs leading on the legislation. Once the Act is finalised, there will be a transition period of 24 months.

AI STANDARDISATION 

While regulation sets out what businesses must do to meet legal requirements, standards provide technical specifications detailing how they can comply and demonstrate best practice. Unlike legislation, they are voluntary and developed by consensus in independent organisations. As well as product standards, international standards can take the form of test methods, codes of practice, guideline standards and management systems standards. They will play an increasingly crucial role in the adoption and effective governance of AI.

The first international standards committee to look at the entire AI ecosystem (ISO/IEC JTC 1/SC 42) was established in early 2018, and has so far published 11 standards, with a further 26 under development. The committee’s work is organised into five working groups covering different themes. Some of these standards will be foundational, setting agreed terminology and good practice.

Other bodies have also been developing standards around AI. These include the IEEE’s P7000 working group on ethical issues in AI, as well as the European Standards Organisations ETSI, CEN and CENELEC. The latter two have established a joint technical committee which will identify and adopt international standards already available or under development from other organisations, including ISO/IEC JTC 1/SC 42.

At the national level, standardisation initiatives have been developed in countries including China, the United States, and the UK. The UK has recently launched an AI Standards Hub which will aim to bring the AI community together and create tools and materials to help organisations to develop and benefit from standards. 

THE FUTURE OF AI ASSURANCE

With further developments on the horizon, organisations will need to keep up with new rules and standards and be prepared to operationalise them. As well as fulfilling legal requirements, assurance enables businesses to demonstrate trustworthiness to stakeholders. Building this trust is essential to promoting the adoption of AI and unleashing its benefits across the economy.

In the UK, the EU and elsewhere, AI assurance ecosystems remain underdeveloped. As part of its AI assurance roadmap, the UK government’s Centre for Data Ethics and Innovation (CDEI) has developed a guide to AI assurance that sets out the techniques and methods that could be used.

The AI Assurance Club brings together the organisations and people best equipped to respond to these challenges. Our members have broad expertise and share our goal of shaping and understanding the emerging AI assurance ecosystem.