
Artificial Intelligence, Data Privacy, and Data Protection: Can They Co-Exist?

Overview

As artificial intelligence (AI) transforms organizations worldwide, the landscape of data privacy and data protection laws and regulations is rapidly evolving to address the new challenges and risks that AI poses. Governments and regulatory bodies are working to balance innovation with individual freedoms and rights, ensuring that AI-driven data processing remains transparent, secure, and ethical.

 

This article examines whether artificial intelligence (AI), data privacy, and data protection can co-exist globally. It reviews the data privacy and data protection laws that govern how organizations process personal information while respecting individual freedoms and rights; the AI guidance, governance frameworks, laws, and regulations that require organizations to develop explainable, responsible, and trustworthy AI systems that do not pose risks to individuals; and the industry, legal, and regulatory instruments that promote AI innovation and deployment while requiring compliance with global data privacy and data protection requirements.

Two questions frame the discussion:

  • Are these guidance documents, laws, and regulations in conflict?

  • Can we harmonize them so that organizations can develop, produce, and use AI technologies while complying with data privacy and data protection laws and regulations?




The Advent of AI and Growing Data Privacy and Data Protection Concerns

AI relies on vast amounts of data to function effectively, often processing personal and

sensitive information. These activities raise concerns about accountability, bias, consent,

data ownership, and transparency. The increasing deployment of AI in decision-making

processes—ranging from hiring and lending to healthcare and law enforcement—has

intensified legal and regulatory scrutiny over how personal data is collected, used, and

safeguarded.


The Role of EU GDPR and Its Influence on AI Regulation

The European Union’s General Data Protection Regulation (EU GDPR) has changed the way

in which organizations must demonstrate due care and due diligence when collecting,

using, disclosing, retaining, and disposing of personal data. It provided individuals with

expanded individuals’ freedoms and rights. Enacted in 2016 and enforced in 2018, the EU

GDPR has influenced the way in which supranational organizations and nations view data

privacy and data protection today. Collectively, they espouse similar data privacy and data

protection principles like:

  • Lawfulness, fairness, and transparency in data processing.

  • Purpose limitation and data minimization.

  • Explicit consent for data collection.

  • The right to be informed and to access, rectify, and erase data.

  • Accountability for data controllers and processors.

Under the EU GDPR, Article 22 subjects AI-driven automated decision-making, including profiling, to strict rules, among them individuals’ right to request human review of decisions made without human participation (“the human in the loop”).
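To make the safeguard concrete, below is a minimal sketch of how an organization might gate automated decisions behind human review. The model score, threshold, and function names are hypothetical illustrations, not a mechanism prescribed by the GDPR:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    outcome: str       # "approved" or "pending human review"
    automated: bool

def decide(applicant_id: str, model_score: float,
           human_review_requested: bool) -> Decision:
    """Issue a decision, routing Article 22-style cases to a human.

    Adverse outcomes and explicit review requests are never decided by
    the model alone; the 0.6 threshold is purely illustrative.
    """
    if human_review_requested or model_score < 0.6:
        return queue_for_human_review(applicant_id, model_score)
    return Decision(applicant_id, outcome="approved", automated=True)

def queue_for_human_review(applicant_id: str, model_score: float) -> Decision:
    # Placeholder: a real system would open a case for a trained reviewer
    # ("the human in the loop") with access to the model's inputs.
    print(f"Queued {applicant_id} (score={model_score:.2f}) for human review")
    return Decision(applicant_id, outcome="pending human review", automated=False)

print(decide("A-1001", model_score=0.82, human_review_requested=False))
print(decide("A-1002", model_score=0.41, human_review_requested=False))
```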


The EU AI Act, Data Privacy, and Data Protection

The EU AI Act is the first comprehensive legal framework designed to regulate AI

technologies across the EU. It establishes a risk-based approach, categorizing AI systems

into four levels of risk: unacceptable, high, limited, and minimal. AI systems deemed

unacceptable, such as those used for social scoring or subliminal manipulation, are

outright banned. Enforcement of Article 5, which governs prohibited systems that pose unacceptable risks, began on February 2, 2025. High-risk AI

systems, including those used in critical sectors like healthcare, law enforcement, and

hiring, are subject to strict compliance requirements, such as transparency, human

oversight, and robust risk management. The Act also mandates obligations for general-

purpose AI models, including transparency in training data and safety measures.

Companies that violate the Act face substantial penalties, reaching up to €35 million or 7%

of global annual turnover, ensuring strict enforcement.
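The four-tier structure can be pictured as a simple classification, sketched below with illustrative example systems. The mapping is indicative only; actual classification depends on the Act's annexes and the specific use case:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "strict compliance: transparency, human oversight, risk management"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative examples only, not legal determinations.
example_systems = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "subliminal manipulation techniques": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,
    "diagnostic-support system in healthcare": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in example_systems.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```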

 

The EU AI Act has a strong nexus with data privacy and data protection, aligning closely

with the EU GDPR to safeguard individuals' freedoms and rights. The Act’s central concerns are bias, discrimination, and transparency, particularly in high-risk AI applications that process personal data. The Act requires AI developers to

mitigate bias in their decisions and recommendations. It ensures AI models do not

disproportionately disadvantage individuals based on protected attributes such as race,

gender, or disability. The Act’s transparency requirements demand that AI decision-making

processes be explainable, reducing the risks of algorithmic discrimination. Additionally, the

Act reinforces the EU GDPR’s data protection principles by mandating lawful data

processing, informed consent, and robust security for AI systems processing personal

data. This interplay between the EU AI Act and EU GDPR strengthens accountability and

fairness, ensuring AI technologies align with European fundamental rights and ethical

standards.
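As one concrete illustration of what bias mitigation can involve, the sketch below computes per-group selection rates and a disparate-impact ratio for a hypothetical hiring model. The "four-fifths" threshold is a U.S. enforcement heuristic rather than a test prescribed by the EU AI Act, and the data is invented:

```python
from collections import defaultdict

# Hypothetical hiring-model outcomes: (protected_group, selected) pairs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in outcomes:
    tallies[group][0] += int(selected)
    tallies[group][1] += 1

rates = {group: sel / total for group, (sel, total) in tallies.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.33; values below 0.8 flag risk
```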


Data Privacy, Data Protection, and AI: A Geographical Overview

The following table highlights the conventions, guidance, laws, and regulations that are

shaping the global landscape for data privacy, data protection, and AI:

| Region | Key Data Privacy and Data Protection Laws & Regulations | AI Frameworks, Guidance, Laws, and Regulations |
| --- | --- | --- |
| Africa | Kenya: Data Protection Act, 2019; Nigeria: Nigeria Data Protection Act, 2023; South Africa: Protection of Personal Information Act (POPIA); Other nations: Ghana, Egypt, Uganda, Rwanda | Kenya: AI governance framework under development; Nigeria: AI ethical guidelines; South Africa: AI subject to data minimization and accountability |
| Asia | China: Personal Information Protection Law (PIPL); India: Digital Personal Data Protection Act, 2023; Japan: Act on the Protection of Personal Information (APPI); South Korea: Personal Information Protection Act (PIPA) | China: Interim Measures for Generative AI; India: AI regulations in draft focusing on responsible AI; Japan: privacy-preserving AI technologies; South Korea: AI policies incorporating human oversight |
| Asia-Pacific | Australia: Privacy Act 1988 (under reform); Singapore: Personal Data Protection Act (PDPA); Vietnam: Personal Data Protection Decree (PDPD); Thailand: Personal Data Protection Act (PDPA) | Australia: AI Ethics Principles; Singapore: Model AI Governance Framework; Vietnam: early AI regulations focused on ethical use; Thailand: AI guidelines in development |
| Central America | Costa Rica, Panama: GDPR-inspired privacy laws | Early discussions on AI governance |
| European Union (EU) | General Data Protection Regulation (GDPR); Law Enforcement Directive (LED) | EU AI Act (in force; obligations phasing in), classifying AI by risk and aligning with the GDPR |
| Europe (Non-EU) | EEA: GDPR standards adopted; Switzerland: Federal Act on Data Protection (revFADP); UK: UK GDPR, DPA 2018, UK PECR | EEA: ethical and human-centric AI policies; Switzerland: AI guidelines emphasizing accountability; UK: AI regulatory roadmap advocating sectoral oversight |
| Europe-Asia (Türkiye) | KVKK (Law No. 6698) (amended for stronger oversight) | AI governance discussions aligning with EU standards |
| Middle East | UAE: Federal Data Protection Law, DIFC and ADGM sectoral laws; Saudi Arabia: PDPL (amended); Israel: Privacy Protection Law | UAE: responsible AI initiatives; Saudi Arabia: National Strategy for Data & AI (NSDAI); Israel: AI regulations embedded in data privacy principles |
| North America | Canada: PIPEDA; Mexico: Federal Law on Protection of Personal Data Held by Private Parties | Canada: Artificial Intelligence and Data Act (AIDA, proposed); Mexico: draft AI ethics guidelines |
| OECD & Council of Europe | OECD Privacy Guidelines; Convention 108+ | OECD: Principles for AI governance; Convention 108+: AI discussions on fundamental rights |
| South America | Brazil: LGPD; Chile, Uruguay, Argentina: GDPR-influenced laws | Brazil: AI strategy for ethical AI and accountability; other nations: developing AI privacy frameworks |
| United States | California: CCPA/CPRA; Other states: Virginia, Colorado, Utah, Texas, etc.; Federal: proposed American Privacy Rights Act (APRA) | AI Bill of Rights Blueprint; state laws addressing AI transparency in data processing |

The International Organization for Standardization’s AI Guidance and Its Nexus with Data Privacy and Data Protection

The International Organization for Standardization (ISO) has developed a structured

framework for artificial intelligence (AI) governance, risk management, and security. These

standards are designed to ensure that AI systems are deployed responsibly, ethically, and

securely while aligning with broader data privacy and data protection requirements. ISO has published several AI standards relevant to the discussion of AI, data privacy, and data protection:

  • ISO/IEC 23894:2023 – AI Risk Management

    • Overview: This standard focuses on identifying, assessing, and mitigating risks throughout the AI lifecycle, including ethical, security, and privacy risks.

    • Data Privacy and Data Protection Nexus: AI models often involve algorithmic decision-making that impacts individuals. This standard guides organizations in mitigating risks such as bias, discrimination, unauthorized data processing, and non-compliance with privacy regulations. It promotes transparency and fairness in AI applications.

  • ISO/IEC CD 27090 – Cybersecurity for AI Systems (In Development)

    • Overview: This forthcoming standard will address security threats and vulnerabilities specific to AI models, including adversarial attacks, model poisoning, and unauthorized access to training data.

    • Data Privacy and Data Protection Nexus: Strong AI cybersecurity directly impacts data protection. The standard will help prevent data breaches and leaks from AI models, ensuring compliance with global cybersecurity and data protection laws.

  • ISO/IEC 42001:2023 – AI Management System Standard

    • Overview: This is the first global standard for AI management systems, offering a structured approach for organizations to govern AI technologies responsibly. It provides guidelines for AI policies, risk assessments, compliance strategies, and continuous monitoring.

    • Data Privacy and Data Protection Nexus: AI systems process vast amounts of data, often including personal and sensitive information. ISO/IEC 42001 ensures that AI governance frameworks integrate privacy principles such as data minimization, lawful processing, and accountability—aligning with laws like the EU GDPR, CCPA, and China's Personal Information Protection Law.


ISO AI Standards and Their Alignment with Global Data Privacy and Data Protection Frameworks

ISO’s AI guidance aligns with major international privacy and data protection frameworks,

including:

  • EU GDPR: Supports principles of transparency, accountability, and lawful processing of AI-driven personal data.

  • CCPA as amended by the CPRA (U.S.): Ensures AI systems respect consumer rights and data protection obligations.

  • Personal Information Protection Law (PIPL) (China): Provides a structured framework for ensuring AI compliance with China's stringent data localization and processing rules.

  • OECD AI Principles: Emphasizes human-centric AI that is fair, secure, and privacy-conscious.

ISO’s AI guidance serves as a critical bridge between AI innovation and regulatory compliance, ensuring that AI systems respect data privacy, mitigate risks, and uphold ethical standards. Organizations that adopt these standards will not only enhance their AI governance but also strengthen their global data protection and privacy posture.


 The United States and AI-Driven Data Privacy Guidance and Laws

The U.S. Federal Government has taken steps to regulate AI and data privacy through various laws, executive orders, and agency-led initiatives. Key US Federal efforts include:

  • Algorithmic Accountability Act (Proposed) – Seeks to introduce federal oversight of AI and automated decision-making.

  • Federal Trade Commission (FTC) AI Oversight – The FTC has issued enforcement actions against companies that misuse AI or fail to protect consumer data.

  • Health Insurance Portability and Accountability Act (HIPAA) – Regulates the use of AI in healthcare, ensuring patient data privacy and security.

  • NIST AI RMF – Provides voluntary guidelines for responsible AI development and deployment.

  • US Presidential Executive Order Removing Barriers to American Leadership in Artificial Intelligence (2025) – This order revokes certain existing AI policies and directives that function as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.

 

US State-Level AI Initiatives and Legislation

Several U.S. states have introduced or enacted comprehensive data privacy laws with AI-

specific implications:

  • California: The CCPA/CPRA includes provisions that allow consumers to opt out of AI-driven automated decision-making and profiling. California also introduced the Automated Decision Systems Accountability Act, which proposes additional transparency for AI applications.

  • Colorado: The Colorado Privacy Act (CPA) includes AI governance provisions, requiring data protection assessments for high-risk AI processing.

  • Connecticut: The Connecticut Data Privacy Act requires businesses to provide consumers with transparency on AI decision-making.

  • Illinois: The Biometric Information Privacy Act (BIPA) specifically regulates AI used in biometric data processing, imposing strict consent requirements.

  • New York:

    • Artificial Intelligence Bill of Rights (Bill S8209) – Establishes consumer protections against AI-related harms, mandates fairness in AI decision-making, and promotes oversight mechanisms.

    • Legislative Oversight of Automated Decision-making in Government (LOADinG) Act (2024) – Requires state agencies to review and report AI software usage, prohibits AI-only decisions in critical services, and protects state employees from AI-driven job reductions.

    • New York State Department of Financial Services (NYDFS) AI Cybersecurity Guidance – Directs financial institutions to mitigate AI risks, such as fraud, social engineering, and misuse of nonpublic data.

  • Utah: The Utah AI Policy Act makes Utah the first U.S. state to enact a major artificial intelligence (AI) statute governing private-sector AI usage.

  • Virginia: The Virginia Consumer Data Protection Act (VCDPA) grants consumers the right to opt out of AI-driven targeted advertising and profiling.

These US state-level laws reflect a growing recognition of the need for AI governance in the absence of comprehensive US Federal-level AI and data privacy laws.


NIST AI Risk Management Framework, Data Privacy, and Data Protection

As AI continues to transform industries, governments and organizations must implement

effective frameworks to ensure AI systems are trustworthy, fair, and privacy-preserving.

The National Institute of Standards and Technology (NIST) has introduced the AI Risk Management Framework (AI RMF) to provide

structured guidance for managing AI-related risks. The framework emphasizes

transparency, accountability, and reliability, aligning with broader global efforts to ensure

responsible AI deployment. A key focal point is the protection of data privacy and data

security, ensuring AI technologies comply with legal and ethical standards.

The NIST AI RMF, released in early 2023, is a voluntary framework designed to help

organizations identify, assess, and mitigate risks associated with AI systems. It is

structured around four core functions:

  • Govern – Establishing risk management policies, accountability measures, and governance structures.

  • Map – Identifying AI system capabilities, potential risks, and contexts in which AI operates.

  • Measure – Evaluating AI performance, reliability, and fairness through continuous assessment.

  • Manage – Implementing actions to mitigate risks and improve AI trustworthiness over time.
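As a rough illustration, an organization might record each AI system in a risk register organized along these four functions. The structure and values below are hypothetical, not a format prescribed by NIST:

```python
# Hypothetical risk-register entry organized by the AI RMF's four functions.
risk_register_entry = {
    "system": "resume-screening model",
    "govern": {
        "owner": "AI governance board",
        "policies": ["acceptable-use policy", "privacy policy"],
    },
    "map": {
        "context": "assists recruiters; processes personal data",
        "identified_risks": ["bias", "privacy", "explainability"],
    },
    "measure": {
        "metrics": ["selection-rate parity", "accuracy", "drift"],
        "review_cadence": "quarterly",
    },
    "manage": {
        "mitigations": ["human review of adverse outcomes",
                        "retraining on rebalanced data"],
    },
}

for function, details in risk_register_entry.items():
    print(f"{function}: {details}")
```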

By following these principles, organizations can proactively address ethical, legal, and

security concerns related to AI systems, particularly in areas where personal data is

processed.


The Nexus Between NIST AI RMF, Data Privacy, and Data Protection

AI systems rely heavily on large datasets, often containing sensitive personal information.

The NIST AI RMF incorporates data privacy and data protection considerations into its

framework, aligning with regulations such as the EU GDPR, the CCPA as amended by the

CPRA, and sectoral US Federal laws like HIPAA. The NIST AI RMF ensures that AI systems are designed with data privacy

principles in mind, reducing the risks related to data misuse, surveillance, and unauthorized

access.

Key privacy-enhancing measures within the NIST AI RMF include:

  • Data Minimization: Encouraging organizations to limit the collection and retention of personal data to what is strictly necessary (see the sketch after this list).

  • Bias and Discrimination Mitigation: Ensuring AI models are regularly assessed for biased outcomes, reducing risks of algorithmic discrimination.

  • Transparency and Explainability: Mandating clear documentation of AI decision-making processes to ensure compliance with privacy regulations and consumer rights.

  • Security and Access Controls: Implementing robust cybersecurity practices to prevent data breaches and unauthorized AI-driven surveillance.
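The data-minimization measure, for example, can be as simple as an allow-list applied before any record enters a training pipeline. A minimal sketch, with hypothetical field names:

```python
# Hypothetical allow-list: only the features the model's stated purpose
# requires survive; direct identifiers never reach the training data.
REQUIRED_FEATURES = {"age_band", "income_band", "region"}

def minimize(record: dict) -> dict:
    return {key: value for key, value in record.items()
            if key in REQUIRED_FEATURES}

raw_record = {
    "name": "A. Person",              # direct identifier: dropped
    "email": "a.person@example.com",  # direct identifier: dropped
    "age_band": "30-39",
    "income_band": "mid",
    "region": "EU",
}

print(minimize(raw_record))
# {'age_band': '30-39', 'income_band': 'mid', 'region': 'EU'}
```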


Strengthening AI Trustworthiness Through Data Privacy and Data Protection

A major concern with AI systems is the potential for privacy violations and ethical

breaches due to the opaque nature of AI decision-making. The NIST AI RMF promotes

practices such as privacy-preserving machine learning and federated learning,

which enable AI models to process data without exposing personally identifiable

information. These techniques help organizations balance AI innovation with compliance

and ethical responsibilities. Furthermore, differential privacy—a technique that adds

noise to datasets to protect individual identities—aligns with NIST’s emphasis on data

protection. By integrating these privacy-preserving mechanisms, organizations can ensure

that AI systems remain secure, fair, and aligned with human rights values.
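To make the differential-privacy idea concrete, the sketch below applies the classic Laplace mechanism to a counting query; the dataset and the epsilon value are illustrative:

```python
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) noise, generated as the difference of two
    # exponential draws (a standard sampling identity).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(flags: list, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so the noise scale is 1/epsilon.
    return sum(flags) + laplace_noise(1.0 / epsilon)

# Illustrative dataset: which individuals have some sensitive attribute.
data = [True] * 42 + [False] * 58

print("true count:", sum(data))                    # 42
print("private count:", private_count(data, 0.5))  # noisy value near 42
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; the choice of epsilon is a policy decision, not a purely technical one.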

The NIST AI RMF provides a structured approach to managing AI risks, ensuring

compliance with data privacy principles. By incorporating privacy by design, bias

mitigation, transparency, and security measures, the framework helps organizations

deploy AI technologies responsibly and ethically. As AI continues to evolve, adherence to

frameworks like the NIST AI RMF will be crucial in safeguarding individuals’ privacy while

fostering trust in AI-driven innovations.


Key Questions Businesses Should Ask Before Adopting AI Capabilities

  • How can organizations align their AI adoption and AI integration initiatives with existing global data privacy and global data protection laws and regulations?

  • How can organizations develop effective AI governance frameworks that comply with global data privacy and data protection laws and regulations?

  • How can organizations manage cross-border data transfers and implement robust security measures to comply with global data privacy and protection laws in AI applications?


Conclusion

Can AI, data privacy, and data protection co-exist in today’s world? Yes, they can. The rapid advancement of AI is transforming global data privacy and data protection

laws and regulations. Organizations, policymakers, regulators, and self-regulatory entities

must harmonize legal and regulatory frameworks, enhance enforcement mechanisms, and

ensure AI regulations emphasize accountability, ethics, explainability, responsibility, and

transparency to successfully navigate this challenging landscape.

 

AI governance, data privacy, and data protection are more than legal obligations; they

create competitive and strategic advantages for those organizations that embrace them

holistically. Deploying ethical, responsible, and trustworthy AI systems builds consumer trust, reduces legal risks, and strengthens competitive positioning. To maintain this competitive and strategic advantage, organizations must invest in robust AI governance models and align their data privacy and data protection strategies with globally accepted best practices.

 

The future of AI governance, data privacy, and data protection will rely on a proactive and

collaborative approach. Stakeholders must work together to ensure that AI continues to

drive innovation while respecting individual freedoms and rights. They must work

continuously to harmonize AI, data privacy, and data protection so that they can co-exist in

today’s world.

