As the World Economic Forum starts in Davos this week, the British Standards Institution (BSI) has published a new international standard for safely managing AI. BS ISO/IEC 42001 is intended to guide organisations looking to develop and deploy AI technology. The publication aims to close the AI confidence gap.
The standard provides a framework within which AI products can be developed responsibly. It aligns with the work of ISO/IEC JTC 1/SC 42, the committee in which BSI represents the UK’s interests.
Why create the AI Management standard?
BSI states that the overarching purpose of BS ISO/IEC 42001 is to establish competence and confidence so that both businesses and society at large can maximise the benefits derived from AI technology. It cites several benefits that companies can gain by using the standard within their operations:
- Helps build trust in the AI systems an organisation develops
- Improves the quality, security, traceability, transparency and reliability of AI applications
- Accelerates the development of AI applications
- Reduces the cost of development
- Simplifies compliance challenges and helps with adherence to regulations
- Helps organisations meet customer, staff and stakeholder expectations
Closing the trust gap is important. A recent BSI Trust in AI Poll, which surveyed 10,000 adults across nine countries, highlighted that gap. Within the UK, 62% of respondents wanted international guidelines to enable the safe use of AI.
The stable door is already open, with 38% of people already using AI every day. However, with 62% expecting to do so by 2030, there is still time to act.
Susan Taylor Martin, CEO of BSI, said, “AI is a transformational technology. For it to be a powerful force for good, trust is critical. The publication of the first international AI management system standard is an important step in empowering organizations to responsibly manage the technology, which in turn offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”
Creating Guardrails for the UK with BS ISO/IEC 42001
The guidance sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards. The impact-based framework provides the requirements to facilitate context-based AI risk assessments, with details on risk treatments and controls for internal and external AI products and services.
The guidance provides information in areas such as:
- Context of the organisation
- Performance evaluation
The standard provides guidance, guardrails and process information that help organisations to better manage their AI developments. The standard is also referred to in the UK Government’s National AI Strategy, where the report states, “Domestically, the government has established a strategic coordination initiative with the British Standards Institution (BSI) and the National Physical Laboratory to explore ways to step up the UK’s engagement in global standards developing organisations.”
Scott Steedman, Director General, Standards at BSI, said, “AI technologies are being widely used by organizations in the UK despite the lack of an established regulatory framework. While government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them. In this fast-moving space, BSI is pleased to announce publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.
“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI. Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy. The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”
The standard is available from BSI and costs £187.00 to download, or £93.50 for members.
Enterprise Times: What does this mean?
The announcement is timely, with Artificial Intelligence one of the four key themes at the World Economic Forum this year. Understanding the standard would seem a sensible approach, even if it is not rigorously adhered to. However, with an increasing amount of AI regulation coming into force, such as the EU AI Act, organisations must seriously consider how they can navigate the sometimes complex road to compliance.
Adhering to the BSI standard would be a good first step on that journey. If adhering to the standard also reduces development costs, it may be time and money well spent.