
Navigating the Artificial Intelligence (AI) Risk Landscape: A Guide to AI Risks and AI Risk Mitigation Strategies

Updated: May 28

The Importance of Effective AI Risk Management

Introduction

As artificial intelligence (AI) rapidly integrates into various business operations, its expansive adoption brings significant potential benefits, such as increased efficiency, innovation, and competitive advantages. However, alongside these opportunities, AI introduces considerable risks that organizations must diligently manage to avoid financial losses, regulatory penalties, reputational harm, and ethical concerns (European Commission, 2023a). These risks include biases embedded in algorithms, privacy violations, transparency issues, and operational disruptions, each potentially leading to severe real-world consequences (NIST, 2023).


Organizations can maintain trust, compliance, and ethical integrity in their AI deployments by proactively addressing these challenges through effective risk management frameworks, such as ISO/IEC 42001:2023 and the NIST AI Risk Management Framework (ISO, 2023; NIST, 2023). In the following sections, we delve deeply into each category of AI risks, examining real-world examples and presenting robust mitigation techniques to guide strategic planning and operational practices effectively.


This article addresses the critical importance of recognizing, evaluating, and proactively managing AI-related risks. It provides readers with a comprehensive overview of the significant categories of AI risks, detailed through practical insights and guidance on established and emerging risk mitigation strategies. Understanding these risks is essential for privacy professionals and decision-makers who aim to leverage AI's capabilities safely and responsibly.


Key Terms:

  • Algorithmic Bias: Systematic errors or discriminatory outcomes resulting from biases embedded within AI algorithms or training data.

  • Artificial Intelligence (AI): Technology enabling machines or software to perform tasks that require human intelligence.

  • Ethical Integrity: Ensuring AI systems and processes align with ethical principles and standards.

  • ISO/IEC 42001: An international standard specifying requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system (AIMS).

  • National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF): A voluntary framework from NIST that provides a structured approach to identifying, measuring, and managing AI-related risks.

  • Operational Disruptions: Interruptions or failures in AI systems that significantly impact organizational processes.

  • Privacy Violations: Unauthorized access, misuse, or exposure of personal and sensitive data managed by AI systems.

  • Regulatory Penalties: Legal or financial penalties imposed due to non-compliance with AI technology regulations.

  • Risk Mitigation: Strategies and actions taken to reduce or eliminate risks associated with AI technologies.

  • Transparency: The clarity and openness regarding how AI systems make decisions and operate.


Global Laws and Regulations Governing AI Risk Management

As AI technology rapidly transforms economies and governance models, countries are racing to implement legislative and regulatory frameworks that promote innovation while managing risk. The following outlines notable national and international initiatives to ensure ethical, secure, and transparent use of AI systems.


  • Brazil: Artificial Intelligence Legal Framework (Bill No. 2338/2023): In December 2024, Brazil’s Federal Senate approved Bill No. 2338/2023, establishing a risk-based regulatory framework for AI systems. As of May 2025, the bill is under deliberation in the Chamber of Deputies. The bill:

    • Classifies AI applications into risk categories,

    • Imposes stricter obligations on high-risk systems,

    • Requires transparency, human oversight, and algorithmic audits, and

    • Aligns with international AI ethics principles.

    The legislation also emphasizes the protection of personal data and individual rights under AI influence (Data PrivacyBR Research, 2024).


  • China: Interim Measures for Generative AI Services (2023): Effective August 2023, China’s Cyberspace Administration implemented the “Interim Measures for the Management of Generative AI Services.” These rules:

    • Require AI providers to align content with socialist values,

    • Mandate labelling of synthetic content,

    • Prohibit harmful or discriminatory outputs, and

    • Enforce algorithmic audits and data security protocols.

    These efforts reflect China’s goal of tightly coupling AI governance with national values and digital sovereignty (China Law Translate, 2023).


  • European Union: AI Act (2024): Adopted in 2024, the EU AI Act is the most comprehensive AI regulation enacted to date. It:

    • Classifies AI into four risk tiers: unacceptable, high, limited, and minimal,

    • Bans social scoring and real-time facial recognition in public,

    • Imposes stringent requirements on high-risk systems, such as transparency, human oversight, and conformity assessments, and

    • Includes obligations for importers and distributors of AI systems within the EU.

    This landmark law sets a global benchmark for responsible AI development (European Commission, 2023a; European Parliament, 2023). Other jurisdictions are following suit with their own AI laws, and governance obligations of this kind will remain central to AI regulation for years to come.


  • United States: State AI Laws: While the U.S. lacks a comprehensive national AI law, key state actions provide a foundation for AI risk governance:

    • California: Enacted AI accountability and data privacy legislation focused on transparency and discrimination audits,

    • Colorado: Passed one of the nation’s most detailed AI laws addressing bias in high-impact decision systems, and

    • Utah: Introduced laws requiring AI disclosures in automated interactions and algorithmic audits in government use.

    • Note: These initiatives reflect a decentralized but growing trend toward AI regulation in the U.S. (White & Case, 2025).


AI Governance and Risk Management Frameworks

As artificial intelligence technologies evolve, organizations must adopt structured, accountable frameworks to manage associated risks, ensure legal compliance, and uphold public trust. The following internationally recognized frameworks offer comprehensive guidance for ethical and responsible AI governance:


  • AI Verify Framework: Developed by Singapore’s Infocomm Media Development Authority (IMDA), the AI Verify Framework is a modular toolkit that allows organizations to assess and validate the trustworthiness of AI systems. It integrates:

    • Technical Testing: Assesses fairness, robustness, and data integrity,

    • Process Checks: Evaluates governance structures and operational controls, and

    • Conformance Metrics: Standardizes indicators for transparency and accountability.

    • Note: AI Verify supports internal assessments and third-party evaluations, contributing to global interoperability in AI governance and aligning with national policies like the Model AI Governance Framework and the AI Ethics & Governance Body of Knowledge (BoK) 2.0 (Smart Nation Singapore, 2023).


  • ISO/IEC 42001:2023 – Artificial Intelligence Management System: ISO/IEC 42001:2023 is the first international management system standard for artificial intelligence. It guides organizations through establishing, implementing, and continuously improving an artificial intelligence management system (AIMS). Core elements include:

    • Leadership & Planning: Emphasizes management commitment and proactive planning.

    • Operational Oversight: Mandates internal audits, performance monitoring, and documented accountability.

    • Plan-Do-Check-Act (PDCA) Structure: Aligns with the PDCA cycle for systematic risk control.

    • Note: The framework addresses AI-specific concerns such as bias, data misuse, transparency gaps, and ethical design. Adoption supports global regulatory alignment (e.g., EU AI Act), enhances trust, and can provide a competitive edge through certification (ISO, 2023).


  • Model AI Governance Framework: Published by the Personal Data Protection Commission (PDPC), Singapore’s Model AI Governance Framework offers industry-agnostic guidance for deploying AI responsibly. Key pillars include:

    • Explainability: Ensures users understand decisions made by AI,

    • Transparency: Requires openness about system design and function,

    • Fairness: Advocates measures to detect and reduce algorithmic bias, and

    • Human-Centricity: Prioritizes user autonomy and ethical alignment.

    • Note: Supplemented by AI Ethics & Governance BoK 2.0, the framework provides practical implementation roadmaps and complements technical standards like AI Verify (PDPC, 2025).


  • National AI Strategy 2.0 and Model AI Framework: Launched in December 2023, Singapore’s National AI Strategy 2.0 outlines fifteen actions to drive systemic AI integration and global collaboration. These include:

    • AI innovation in healthcare, education, and logistics,

    • Investment exceeding SGD 1 billion in AI infrastructure, and

    • A national testing and assurance framework via AI Verify.

    • Note: Singapore’s Model AI Governance Framework, supported by the AI Ethics & Governance Body of Knowledge (BoK) 2.0, guides organizations in applying AI responsibly through principles of transparency, fairness, and human centricity (PDPC, 2025; Smart Nation Singapore, 2023).


  • National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) – United States: The NIST AI RMF presents a flexible, lifecycle-based framework for managing AI risks across four core functions:

    • Govern: Establishes policies, roles, and governance structure,

    • Map: Defines context, intended use, and potential impacts of AI systems,

    • Measure: Quantifies risks, biases, and performance gaps, and

    • Manage: Guides risk mitigation, monitoring, and improvement.

    • Note: The NIST AI RMF helps organizations foster trustworthy AI systems and is especially useful in regulated environments such as finance, healthcare, and national infrastructure (NIST, 2023).


  • OECD AI Principles: The OECD AI Principles, endorsed by over forty countries, offer high-level, non-binding guidance for the responsible development and use of AI technologies. They emphasize:

    • Human Rights & Fairness: Ensuring AI respects fundamental freedoms and dignity,

    • Robustness & Safety: Promoting technically resilient and secure systems, and

    • Transparency & Accountability: Encouraging clarity in AI processes and responsibility for outcomes.

    • Note: These principles are the ethical foundation for legislative efforts like the EU AI Act and UNESCO's AI ethics recommendations (European Commission, 2023a; European Parliament, 2023; UNESCO, 2024).


  • UNESCO Recommendation on the Ethics of Artificial Intelligence: Adopted in 2021 by all 193 UNESCO member states, the Recommendation is the first global normative framework for AI ethics. It promotes the ethical development and use of AI systems by embedding values-based governance across the AI lifecycle through four key pillars:

    • Accountability and Transparency: Promotes human oversight, explainability, and precise mechanisms for redress, enabling trust and responsible AI governance,

    • Environmental and Societal Well-being: Advocates for sustainable AI practices, emphasizing environmental protection, energy efficiency, and the social good of communities,

    • Human Rights and Dignity: Embeds international human rights standards in AI design and deployment, ensuring that AI technologies respect individual autonomy and freedoms, and

    • Inclusiveness and Non-Discrimination: Ensures fairness, gender equality, cultural and linguistic diversity, and the prevention of algorithmic bias in AI systems.

    • Note: The UNESCO AI Ethics Recommendation supports national policy development, ethical impact assessments, and international cooperation, providing a globally aligned framework to guide responsible AI practices in public and private sectors (UNESCO, 2024).

These frameworks give organizations adaptable, scalable pathways to govern AI systems responsibly and transparently. Each offers strengths suited to different organizational sizes, sectors, and jurisdictional requirements.


Categories of AI Risks:

Understanding the various categories of artificial intelligence (AI) risks is essential for organizations seeking to integrate AI responsibly and effectively into their operational frameworks. Each category addresses areas where AI implementations can negatively impact stakeholders, organizational integrity, and regulatory compliance. A comprehensive awareness of these risks enables data privacy professionals, business leaders, and policymakers to adopt targeted risk management strategies and frameworks (NIST, 2023).


  • Bias and Fairness Risks: Bias in AI occurs when unintended prejudices are embedded into algorithms or training data. This can result in unfair or discriminatory outcomes, disproportionately affecting specific groups based on race, gender, or socioeconomic status. Addressing bias is crucial to ensuring fairness and ethical AI deployment, preserving public trust, and complying with anti-discrimination regulations (European Commission, 2023a). For instance, facial recognition systems have repeatedly demonstrated biases that disproportionately misidentify minorities, resulting in societal harm and legal scrutiny (Buolamwini & Gebru, 2018).


  • Operational Risks: Operational risks associated with AI systems include technical failures, system outages, and performance disruptions. Such risks can severely affect organizational operations, causing financial losses and diminishing consumer confidence. Proactive management of operational risks through rigorous testing, regular system audits, and robust contingency planning can significantly reduce potential disruptions (ISO, 2023).


  • Privacy and Data Security: AI systems often process vast amounts of personal and sensitive information, posing significant privacy and data security risks. Unauthorized data access, breaches, or misuse can lead to severe financial penalties, loss of consumer trust, and regulatory challenges, particularly under stringent data protection regulations like the European Union’s General Data Protection Regulation (EU GDPR) and the California Consumer Privacy Act (CCPA) (Bonta, 2024; Intersoft Consulting, n.d.). Implementing robust privacy-enhancing technologies and stringent data governance policies is vital for mitigating these risks.


  • Transparency and Explainability Risks: A lack of transparency and explainability in AI decisions can undermine trust, complicate compliance with regulatory requirements, and limit the effective oversight of AI systems. Transparent and explainable AI (XAI) enables stakeholders to understand how decisions are made, enhancing accountability and enabling effective risk management. This is particularly critical in high-stakes sectors like healthcare and finance, where understanding the rationale behind AI-driven decisions is essential for ethical, legal, and operational reasons (OECD.AI, 2024).

    The following sections outline practical mitigation techniques and then examine real-world examples of these risks, providing further context on their impacts and management.


Risk Mitigation Techniques

Effectively mitigating AI risks requires a multifaceted strategy combining technical solutions, organizational policies, and ethical oversight. The following key techniques form the cornerstone of a comprehensive AI risk management program:


  • Bias Detection and Correction: Mitigating algorithmic bias involves preemptive and responsive strategies. Techniques such as fairness-aware machine learning, counterfactual analysis, and model audits detect and correct biases in datasets and algorithms. Widely adopted tools like IBM's AI Fairness 360 and Google's Fairness Indicators enable systematic bias evaluation and remediation during the model development and deployment phases (Google AI, 2023; Varshney, 2018). Organizations must also implement fairness-aware metrics (e.g., demographic parity, equalized odds) to ensure consistent and equitable treatment across groups, as illustrated in the sketch below.
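To make these metrics concrete, here is a minimal sketch, assuming only NumPy, that computes a demographic parity difference and an equalized odds gap from model predictions. The arrays and group labels are hypothetical stand-ins, not output from AI Fairness 360 or Fairness Indicators.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates across the groups."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR comparison, label 0 -> FPR comparison
        mask = y_true == label
        rate_0 = y_pred[mask & (group == 0)].mean()
        rate_1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)

# Hypothetical predictions and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))     # 0.0 -> equal positive rates
print(equalized_odds_gap(y_true, y_pred, group))  # larger values signal bias
```

In practice, thresholds for acceptable gaps are a policy decision; dedicated toolkits add many more metrics and mitigation algorithms on top of this basic idea.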


  • Explainable AI (XAI): Explainable AI is vital for ensuring AI systems are transparent, trustworthy, and legally compliant. Tools and methodologies such as LIME (Local Interpretable Model-Agnostic Explanations), SHAP (SHapley Additive exPlanations), and interpretability dashboards facilitate understanding model behavior and decision logic. In sectors like healthcare and finance, where explainability is critical for ethical and legal accountability, adopting XAI helps meet obligations under frameworks like the EU AI Act and the NIST AI RMF (Molnar, 2023; NIST, 2023). A brief SHAP sketch follows.
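As a hedged illustration of the SHAP workflow, the sketch below trains a placeholder scikit-learn model on a bundled public dataset and computes Shapley values with shap.TreeExplainer; the model choice, dataset, and sample size are illustrative assumptions, not recommendations.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a bundled public dataset (a stand-in for a real system).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Each row attributes one prediction across the input features; the summary
# plot shows which features drive the model's predictions overall.
shap.summary_plot(shap_values, X.iloc[:100])
```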


  • Privacy-Enhancing Technologies (PETs): To address the inherent privacy challenges in AI, Privacy-Enhancing Technologies (PETs) such as homomorphic encryption, federated learning, and differential privacy offer technical safeguards:

    • Differential privacy introduces statistical noise into datasets or query results to protect individual privacy while preserving overall analytical value (Bonta, 2024; European Commission, 2023a). A minimal sketch follows this list.

    • Homomorphic encryption allows computations on encrypted data without decrypting it, maintaining confidentiality throughout the processing lifecycle.

    • Federated learning enables collaborative AI training across decentralized devices while keeping raw data localized, which is crucial for compliance with the EU GDPR and the US Health Insurance Portability and Accountability Act (HIPAA) (European Commission, 2023b; HHS, 2021).

    • Note: These technologies empower organizations to uphold data protection standards and reduce exposure to regulatory fines.
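As a minimal sketch of the differential privacy mechanism described above, assuming only NumPy, the classic Laplace mechanism adds calibrated noise to a counting query; the count, sensitivity, and epsilon values are illustrative assumptions rather than recommended settings.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    The noise scale is sensitivity / epsilon: a smaller epsilon means
    stronger privacy guarantees but a noisier released answer.
    """
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative counting query: how many records match some condition?
true_count = 1_284   # hypothetical exact answer
sensitivity = 1      # adding/removing one person changes a count by at most 1

for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity, epsilon)
    print(f"epsilon={epsilon}: released count ~ {noisy:.1f}")
```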


Table 1 provides a structured summary of key techniques for mitigating common risks associated with artificial intelligence systems. Each row pairs a technique with a brief description, commonly used tools or methods, and relevant standards or frameworks. The goal is to support responsible AI development and deployment by aligning practical solutions with established ethical and regulatory benchmarks.

Table 1: Overview of Common AI Risk Mitigation Techniques

| Mitigation Technique | Description | Key Tools/Methods | Relevant Standards/Frameworks |
| --- | --- | --- | --- |
| Bias Detection and Correction | Identifies and mitigates discriminatory patterns in data and models | AI Fairness 360, Fairness Indicators, demographic parity, model audits | EU AI Act, NIST AI RMF |
| Privacy-Enhancing Technologies | Preserves data confidentiality during AI model training and inference | Homomorphic encryption, federated learning, differential privacy | EU GDPR, CCPA, ISO/IEC 42001 |
| Explainable AI (XAI) | Improves transparency and interpretability of AI decisions | LIME, SHAP, interpretable ML dashboards | NIST AI RMF, OECD AI Principles |
| AI Governance Frameworks | Establishes ethical, accountable, and policy-driven management of AI systems | Policy guidelines, role definitions, risk registers, audit trails | ISO/IEC 42001, NIST AI RMF, OECD AI Principles |
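The governance row in Table 1 mentions risk registers. As a purely illustrative sketch using only Python's standard library, one entry of such a register can be modeled as a small data structure; the field names, rating scale, and example values are assumptions, not drawn from ISO/IEC 42001 or the NIST AI RMF.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    """One illustrative row in an AI risk register (fields are assumptions)."""
    system: str
    risk_category: str       # e.g. "bias", "privacy", "operational"
    description: str
    likelihood: int          # 1 (rare) to 5 (almost certain)
    impact: int              # 1 (negligible) to 5 (severe)
    mitigation: str
    owner: str
    review_date: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        """Simple likelihood x impact rating used for triage."""
        return self.likelihood * self.impact

entry = AIRiskEntry(
    system="loan-approval-model-v3",
    risk_category="bias",
    description="Approval rates differ across protected groups",
    likelihood=3,
    impact=4,
    mitigation="Quarterly fairness audit with demographic parity checks",
    owner="Model Risk Committee",
)
print(entry.system, entry.score)  # score 12 -> prioritize at the next review
```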

 

Real-World Examples of AI Controversies and Incidents:

Examining recent real-world examples of artificial intelligence (AI) failures and controversies provides critical insights into the practical implications of AI risks. These case studies highlight the necessity of implementing rigorous and proactive risk management strategies across multiple sectors. By understanding these incidents, organizations can better anticipate potential risks, adopt preventive measures, and foster trust among stakeholders and users.


Significant issues continue to arise from AI implementations in healthcare. For instance, a 2025 report by Kodiak Solutions identified generative AI and cybersecurity risks as critical threats to healthcare providers. The integration of AI in healthcare systems has created challenges related to data accuracy, transparency of decision-making, and regulatory compliance, emphasizing the urgent need for robust governance and oversight (Galloro, 2024).


Financial services have also faced evolving AI risks. A notable 2025 incident demonstrated significant vulnerabilities when a journalist used an AI-generated clone of her own voice during a call with her bank and gained access to sensitive account information. This case underscores the rising threat of AI-assisted voice fraud and the pressing need for improved authentication protocols (Galloro, 2024).


Social media and technology platforms also experienced notable controversies. Meta's AI chatbot drew severe criticism in 2025 after engaging in inappropriate interactions involving an AI persona voiced by John Cena. This event raised ethical concerns and questioned the effectiveness of current AI moderation and safeguarding practices (Galloro, 2024).

Additionally, Google disclosed to Australia's online safety regulator that it had received numerous complaints globally about its AI software being misused to generate deepfake terrorism content, highlighting critical gaps in the safeguards surrounding AI-generated content (Kaye, 2025).


AI-induced controversies have been profound in the legal and political sectors. In 2024, U.S. Senator Ben Cardin was targeted by a sophisticated deepfake operation involving an AI-generated video call impersonation, highlighting AI's potential role in political deception and espionage (Merica, 2024). Moreover, a lawyer representing MyPillow CEO Mike Lindell faced criticism in 2025 after submitting an AI-generated legal brief containing significant inaccuracies and fictitious case citations. This underscores the risk of inadequately supervised AI applications in critical legal contexts (Drapkin, 2025).


These contemporary examples vividly illustrate AI’s tangible risks and reinforce the necessity of robust risk management strategies. The following section explores how organizations can future-proof their AI implementations against such evolving risks.


Future-Proofing AI Implementations

Organizations must move beyond reactive risk controls and adopt proactive, adaptive strategies to sustain trust, performance, and compliance in an increasingly dynamic AI ecosystem. Future-proofing AI means designing effective systems and governance practices amid emerging technologies, evolving regulations, and shifting societal expectations. Below are comprehensive, alphabetized considerations for achieving resilient, long-term AI risk management:


  • Agile Governance Integration: Embedding AI risk oversight into enterprise-wide governance frameworks ensures adaptability and responsiveness. Leading practices include:

    • Aligning AI governance with existing GRC (Governance, Risk, and Compliance) functions,

    • Conducting regular risk re-evaluations based on emerging technologies, and

    • Creating cross-functional AI risk committees.

    • Note: This approach aligns with the ISO/IEC 42001:2023 principles, emphasizing continuous improvement and contextual alignment in AI risk management (ISO, 2023).


  • Cybersecurity and Privacy by Design: Given the convergence of AI and sensitive data, future-proofed AI systems must integrate:

    • Incident response plans tailored for AI misuse or adversarial attacks,

    • PETs such as differential privacy, federated learning, and homomorphic encryption, and

    • Secure model architecture with encrypted inference pipelines and model monitoring.

These practices are reinforced by GDPR’s Article 25 (data protection by design and by default) and the NIST AI RMF’s emphasis on secure, privacy-preserving design (Intersoft Consulting, n.d.; NIST, 2023). A minimal model-monitoring sketch follows.
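As one hedged illustration of the model monitoring mentioned above, the sketch below computes a population stability index (PSI) between training-time and live score distributions, a common drift check; the bin count, alert threshold, and simulated data are assumptions, not values prescribed by the cited frameworks.

```python
import numpy as np

def population_stability_index(train, live, bins=10):
    """PSI between two samples of one feature; values above ~0.2 often flag drift."""
    # Bin edges come from the training distribution; live values outside
    # this range fall out of the comparison in this simplified sketch.
    edges = np.histogram_bin_edges(train, bins=bins)
    p_train, _ = np.histogram(train, bins=edges)
    p_live, _ = np.histogram(live, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    p_train = np.clip(p_train / p_train.sum(), 1e-6, None)
    p_live = np.clip(p_live / p_live.sum(), 1e-6, None)
    return float(np.sum((p_live - p_train) * np.log(p_live / p_train)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.4, 1.0, 10_000)   # hypothetical shifted live data
print(round(population_stability_index(train_scores, live_scores), 3))
```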


  • Ethical Impact Forecasting: Organizations must assess long-term ethical and societal implications of AI systems before and during deployment. Forward-looking ethical risk assessments should cover:

    • Automation bias and loss of human agency,

    • Disparate impacts on vulnerable populations, and

    • Emerging risks in synthetic media and misinformation.

    • Note: Frameworks like the OECD AI Principles and UNESCO’s AI Ethics Recommendations emphasize these foresight mechanisms to promote trustworthy AI (OECD.AI, 2024; UNESCO, 2024).


  • Human-Centered Resilience Design: AI systems should be built with human resilience in mind to remain adaptive. This includes:

    • Incorporating human-in-the-loop (HITL) controls for high-risk decisions,

    • Designing fallback protocols for system failure or drift, and

    • Ensuring explainability for non-technical users via XAI tools like SHAP and LIME.

    • Note: Such features support long-term operability and regulatory compliance in high-stakes fields like healthcare and finance (Molnar, 2023; European Commission, 2023a). A minimal human-in-the-loop routing sketch follows.
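As a hedged illustration of HITL routing, the sketch below auto-decides only high-confidence predictions and escalates the rest to a human reviewer; the threshold value and field names are hypothetical design choices rather than requirements from the cited frameworks.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # hypothetical policy value; tune per use case

@dataclass
class Decision:
    outcome: str        # "approve", "deny", or "needs_human_review"
    confidence: float
    escalated: bool = False

def route_decision(prediction: str, confidence: float) -> Decision:
    """Auto-decide only when the model is confident; otherwise escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(outcome=prediction, confidence=confidence)
    # Fallback protocol: queue the case for review by a human analyst.
    return Decision(outcome="needs_human_review", confidence=confidence,
                    escalated=True)

print(route_decision("approve", 0.97))  # auto-decided
print(route_decision("deny", 0.62))     # routed to a person
```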


  • Regulatory Horizon Scanning: Future-proofing requires ongoing tracking of global AI regulations and standards. Organizations should:

    • Maintain a regulatory radar with updates on evolving laws (e.g., EU AI Act, China’s Generative AI Measures),

    • Participate in policy consultations or industry alliances, and

    • Update compliance protocols in anticipation of new enforcement measures.


This anticipatory approach is advocated in the NIST AI RMF and is a core element of Singapore’s National AI Strategy 2.0 (NIST, 2023; Smart Nation Singapore, 2023).

These forward-looking practices enable organizations to build adaptive, secure, and ethically sound AI systems that remain effective despite regulatory shifts, technological change, and public scrutiny.


Conclusion: Governing AI Responsibly in a Rapidly Evolving World

As artificial intelligence continues to reshape global economies, industries, and governance structures, its transformative potential must be balanced with a commitment to risk mitigation, ethical integrity, and regulatory compliance. Deploying AI without robust safeguards has resulted in high-profile incidents involving bias, privacy breaches, and misinformation. This underscores the urgency of proactive governance strategies (Buolamwini & Gebru, 2018; NIST, 2023).


This article provides a comprehensive roadmap for organizations aiming to integrate AI technologies responsibly. It underscores the importance of:

  • Implementing structured risk management frameworks such as ISO/IEC 42001, NIST AI RMF, and Singapore’s Model AI Governance Framework.

  • Complying with emerging laws like the EU AI Act, China’s Generative AI Measures, and Brazil’s AI Bill No. 2338/2023.

  • Utilizing cutting-edge tools for fairness, explainability, privacy protection, and ethical alignment.


Understanding these frameworks is no longer optional for data privacy professionals, AI risk officers, compliance managers, and executives. It is a competitive and legal imperative. The ability to anticipate regulatory shifts, embed ethical foresight, and respond to stakeholder expectations will define market leaders in the age of AI.

As AI technologies evolve, so must the strategies that govern them. Organizations that invest in adaptive, transparent, and accountable AI systems will be best positioned to foster innovation, maintain public trust, and thrive in a world where data-driven intelligence defines success.


Key Questions to Ask When Implementing AI Risk Management Strategies

Implementing responsible and effective AI risk management is not a one-size-fits-all exercise. Whether you are a policymaker crafting legislation, a corporate leader deploying AI solutions, or a data scientist designing models, asking the right questions at the outset, and continuously, can dramatically enhance trust, resilience, and compliance. These guiding questions not only illuminate blind spots but also strengthen internal accountability, align stakeholders, and prepare AI deployments for regulatory scrutiny and public trust.


Table 2 outlines core questions tailored to key stakeholder groups involved in designing, deploying, regulating, and using artificial intelligence systems. These guiding questions support the implementation of trustworthy, ethical, and accountable AI by aligning technical practices, user rights, organizational governance, and policy frameworks with recognized standards and risk management principles.

Table 2: Core Questions for Effective AI Risk Management Across Stakeholders

| Stakeholder | Category | Key Questions |
| --- | --- | --- |
| Developers/Tech Teams | Documentation | Do we maintain detailed documentation on the model's purpose, risks, and limitations? |
| | Fairness & Ethics | Can we detect bias using model auditing tools like SHAP or AI Fairness 360? |
| | Feedback Loops | Is user feedback actively integrated into system updates and retraining cycles? |
| | Privacy & Security | Have we implemented privacy-enhancing technologies (e.g., differential privacy, secure model architecture)? |
| | Safety Engineering | Are fallback mechanisms in place to handle model failure, drift, or adversarial attacks? |
| Individuals/Users | Accountability & Redress | Do I have options to challenge or appeal an AI-driven decision that affects me? |
| | Consent & Rights | Can I access information on how decisions were made and understand my rights under the EU GDPR, CCPA, or the EU AI Act? |
| | Transparency | Is it clear when I am interacting with an AI system, and what data is collected? |
| | Understanding AI Use | Is the organization transparent about its use of AI and how it affects outcomes in my services? |
| Organizations | Accountability | Do we have assigned roles and incident escalation processes for AI errors or bias? |
| | Compliance & Oversight | Are our AI systems regularly audited and updated? |
| | Governance Frameworks | Are we aligned with a structured AI governance framework like ISO/IEC 42001 or NIST AI RMF? |
| | Human Oversight | Have we embedded human-in-the-loop decision-making for high-impact use cases? |
| | Strategic Planning | Have we conducted a holistic AI risk assessment (technical, legal, ethical)? |
| Policymakers | Capacity Building | Do regulators have tools and training to assess and audit AI systems? |
| | Enforcement | Are enforcement, appeal, and redress mechanisms in place for affected citizens? |
| | Global Alignment | Are we harmonizing with global norms like the EU AI Act and OECD AI Principles? |
| | Regulatory Design | Does our legal framework address risks in generative AI, bias, and algorithmic accountability? |
| | Stakeholder Engagement | Are civil society, industry, and academic voices involved in shaping AI laws? |

References:

1. AI Verify Foundation. (2020). Model AI governance for generative AI. https://aiverifyfoundation.sg/resources/mgf-gen-ai/

2. Bonta, R. (2024). California Consumer Privacy Act (CCPA). https://oag.ca.gov/privacy/ccpa

3. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research. https://proceedings.mlr.press/v81/buolamwini18a.html

4. China Law Translate. (2023, July 13). Interim measures for the management of generative artificial intelligence services. https://www.chinalawtranslate.com/en/generative-ai-interim/

5. Council of Europe. (2024). The framework convention on artificial intelligence. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence

6. Data PrivacyBR Research. (2024). Artificial intelligence legislation: Technical analysis of the text to be voted on in the federal senate plenary. https://www.dataprivacybr.org/en/the-artificial-intelligence-legislation-in-brazil-technical-analysis-of-the-text-to-be-voted-on-in-the-federal-senate-plenary/

7. Drapkin, A. (2025, April 29). AI gone wrong: An updated list of errors, mistakes, and failures. Tech.co. https://tech.co/news/list-ai-failures-mistakes-errors

8. European Commission. (2023a). The Artificial Intelligence Act. https://artificialintelligenceact.eu/

9. European Commission. (2023b). Legal framework of EU data protection. https://commission.europa.eu/law/law-topic/data-protection/legal-framework-eu-data-protection_en

10. European Parliament. (2023). EU AI Act: First regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

11. Galloro, V. (2024, December 12). Generative AI and cybersecurity are among the top risks for healthcare provider organizations in 2025, Kodiak Solutions finds. Businesswire.com. https://www.businesswire.com/news/home/20241212024344/en/Generative-AI-Cybersecurity-Among-Top-Risks-for-Healthcare-Provider-Organizations-in-2025-Kodiak-Solutions-Finds

12. Google AI. (2023). Responsible AI progress report. https://ai.google/responsibilities/responsible-ai-practices/fairness

13. Intersoft Consulting. (n.d.). General Data Protection Regulation. https://gdpr-info.eu/

14. ISO. (2023). ISO/IEC 42001:2023: Information technology—Artificial intelligence—Management system. https://www.iso.org/standard/81230.html

15. Kaye, B. (2025, March 5). Google reports scale of complaints about AI deepfake terrorism content to Australian regulator. Reuters. https://www.reuters.com/technology/cybersecurity/google-reports-scale-complaints-about-ai-deepfake-terrorism-content-australian-2025-03-05/

16. Merica, D. (2024). Sophistication of AI-backed operation targeting senator points to future of deepfake schemes. AP News. https://apnews.com/article/deepfake-cardin-ai-artificial-intelligence-879a6c2ca816c71d9af52a101dedb7ff

17. Molnar, C. (2023). Interpretable machine learning. GitHub. https://christophm.github.io/interpretable-ml-book/

18. National Institute of Standards and Technology. (2023). AI risk management framework. https://www.nist.gov/itl/ai-risk-management-framework

19. OECD.AI. (2024). OECD AI principles overview. https://www.oecd.ai/en/ai-principles

20. Personal Data Protection Commission (PDPC) – Singapore. (2025). Singapore’s approach to AI governance. https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework/

21. Smart Nation Singapore. (2023). National AI strategy 2.0. https://www.smartnation.gov.sg/nais/

22. US Department of Health and Human Services (HHS). (2021). HIPAA & your rights. https://www.hhs.gov/programs/hipaa/index.html

23. UNESCO. (2024). Recommendation on the ethics of artificial intelligence. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

24. Varshney, K. R. (2018, September 19). Introducing AI fairness 360. IBM Research. https://research.ibm.com/blog/ai-fairness-360

25. White & Case. (2025). AI watch: Global regulatory tracker – United States. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states