top of page

AI Risk Management

  • sujosutech
  • Apr 2
  • 6 min read

The proliferation of Artificial Intelligence (AI) enabled systems is giving rise to newer challenges for enterprises. The threat landscape is evolving and threat actors are using AI to launch innovative attacks on enterprise infrastructure, applications and networks. Darktrace stated that 74% of IT security professionals have reported that their organizations are suffering significantly from AI-powered threats. Such threats are causing severe cybersecurity and data privacy risks for enterprises. Hence, a comprehensive strategy needs to be formulated to manage the risks of AI-enabled systems.

In this article, we look at the various types of risks of AI-enabled systems and the important risk management and legal frameworks that have been formulated to counter those risks.



AI Risks

AI-powered risks and the risks of using AI systems can be categorized as follows:

  • Cybersecurity risks – AI can be used to launch sophisticated and targeted cyberattacks, posing significant threats to enterprises. AI systems are themselves vulnerable to adversarial attacks, where threat actors can manipulate them to take incorrect decisions. Besides, AI systems can be targets for data breaches that may divulge confidential information and cause serious harm. Examples include model poisoning, hallucination, prompt injection and prompt denial of service attacks.

  • Privacy risks AI systems usually rely on vast amounts of personal data, raising concerns about violations of personally identifiable information (PII). AI-powered surveillance systems can be used to track and monitor the activities of individuals; this leads to serious concerns about privacy and freedom.

  • Ethical risks An ethical risk for an AI system is any risk associated with it that may cause stakeholders in the system to fail one or more of their ethical responsibilities towards other stakeholders. AI systems can take decisions that are unfair or discriminatory, leading to negative consequences for individuals and groups. AI systems can propagate existing societal biases that may be present in their training data set, leading to discriminatory outcomes in areas like hiring, loan applications and even criminal justice.

  • Societal risks AI algorithms can be used to target individuals with personalized content and messaging, potentially leading to manipulation and polarization. AI can be used to create realistic but fake videos and audio, which can be used for manipulation and misinformation campaigns.

  • Legal risks  Use of AI-enabled systems can lead to several legal risks. These include litigations arising out of data privacy breaches, algorithmic bias and discrimination, intellectual property issues, AI-generated errors, and lack of transparency and accountability. The rapid development of AI technology outpaces the development of regulations, creating uncertainty and potential legal risks for businesses.

  • Existential risks  The potential for AI systems to become uncontrollable could pose an existential threat to mankind. The development of autonomous weapons systems also raises serious survival concerns.



Standardization organizations like ISO and NIST have published risk management standards and frameworks that can be adopted by organizations to counter the risks posed by AI-enabled systems. Besides, several countries have formulated AI-specific laws and regulations that encompass the legal and ethical guidelines that organizations and developers must follow when working with AI systems.

 

NIST AI RMF (AI 100-1)

In January 2023, the National Institute of Standards and Technology (NIST) published an AI Risk Management Framework (AI RMF) to improve the ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems. It comprises of the AI RMF Core that provides outcomes and actions to enable dialogue, understanding and activities to manage AI risks. The Core is composed of four functions, namely GOVERN, MAP, MEASURE, and MANAGE. These functions seek to ensure the following:

  • Govern  requisite systems, processes and tools are developed across organizational contexts to cultivate and sustain a culture of risk management;

  • Map – risk profiles of AI systems are identified and contextualised to the use-cases they are deployed in;

  • Measure  risks are effectively assessed, measured and tracked; and

  • Manage  risks are prioritised and addressed proactively.

 

ISO/IEC 23894

ISO/IEC 23894 was published in 2023 and offers strategic guidance to organizations across all sectors for managing risks connected to the development and use of AI. It also provides guidance on how organizations can integrate risk management into their AI-driven activities and business functions. The document comprises of three main parts:

  • Principles  This describes the underlying principles of risk management. The use of AI requires specific considerations with regard to some of these principles as described in Clause 4 of ISO 31000:2018.

  • Framework – The risk management framework assists an organization in integrating risk management into its activities and functions. Aspects specific to the development, provisioning or offering, or use of AI systems are described in Clause 5 of ISO 31000:2018.

  • Processes  Risk management processes involve the systematic application of policies, procedures and practices to the activities of communicating and consulting, establishing the context, and assessing, treating, monitoring, reviewing, recording and reporting risk. A specialization of such processes to AI is described in Clause 6 of ISO 31000:2018.


MITRE’s Sensible Regulatory Framework for AI Security

MITRE has provided a deep technical view of AI risks, focusing on specific attack tactics and proposing AI regulations to mitigate these threats. The framework is especially relevant for heavily regulated industries like finance and healthcare. It explores potential options for AI regulations and makes recommendations on how to establish guardrails to shape the development and use of AI.


EU AI Act

The EU AI Act was implemented in August 2024. It is a comprehensive regulation that establishes a unified framework for AI across the European Union, aiming to ensure safe, transparent, and trustworthy AI systems while fostering innovation and respecting fundamental rights. It prohibits some usage of AI and implements strict governance, risk management and transparency requirements for others. The Act uses a risk-based approach to regulate AI, categorizing AI systems based on their potential impact and applying different requirements accordingly.


Executive Order 14110

This was signed by former U.S. President Joe Biden on October 30, 2023. It outlines a comprehensive framework for the safe, secure and trustworthy development and use of AI within federal agencies and across US.


AI Bill of Rights (Blueprint)

The US government has proposed a Blueprint for an AI Bill of Rights, outlining principles for responsible AI development and use. It is a non-binding framework published by the White House Office of Science and Technology Policy (OSTP) outlining five principles that focus on protecting the American public’s rights and promoting democratic values. The principles are as follows:

Safe and effective systems

Algorithmic discrimination protections

Data privacy

Notice and explanation

Human alternatives, consideration and fall-back


China’s AI Regulations

China has implemented technology-specific AI regulations, focusing on recommender systems, Deepfakes, and generative AI. The regulations aim to address risks related to AI and introduce compliance obligations on entities engaged in AI-related business.


India’s AI Initiatives

India emphasizes the concept of “AI for All” and is developing a Draft National Data Governance Framework Policy. The policy aims to ensure that non-personal data and anonymized data from both Government and private entities are safely accessible by research and innovation eco-system. The policy aims to provide an institutional framework for data / datasets / metadata rules, standards, guidelines and protocols for sharing of non-personal data sets while ensuring privacy, security and trust.


How Sujosu Technology Can Help

Sujosu Technology helps organizations design and implement systems that prioritize cyber security, data privacy and compliance. Our services include:

  • Risk Assessments: Identifying cyber security and privacy requirements and vulnerabilities in applications and infrastructure. Our AI risk assessment constitutes a thorough and rigorous process where all AI models, systems and capabilities deployed within an organization are evaluated to identify and mitigate any potential risks across different domains, such as security, privacy, fairness and accountability.

  • Countermeasures and Solutions: Providing tailored strategies to prevent, detect, and recover from potential attacks.

  • Compliance Documentation: Helping you comply with the requirements of specific standards and regulations by compiling policies, procedures, and other relevant manuals.

  • Training and Awareness: Equipping your team with the knowledge to address cyber security and privacy challenges effectively.

With Sujosu Technology’s expertise, your organization can build systems that are secure and resilient against security and privacy breaches. We can also help you achieve compliance with relevant standards and legislations.

 

Partner with Sujosu Technology

Protect your data and ensure compliance with Sujosu Technology’s state-of-the-art cyber security and privacy services. Stay ahead of challenges and foster trust with your stakeholders.

 

Comments


bottom of page