
Shadow Profiles and Synthetic Identities: AI’s Role in the Rise of Invisible Digital Entities

Updated: May 28

Introduction

Imagine applying for a job, a loan, or healthcare and being denied without understanding why. Now, consider that the decision was not based on information you knowingly provided. It was based on a profile assembled without your consent. This scenario is increasingly common due to the rise of shadow profiles and synthetic identities generated by artificial intelligence (AI) systems. AI technologies have transformed how personal data is collected, inferred, and utilized. This article explores how AI-generated profiles challenge existing AI governance, data protection frameworks, and legal and regulatory compliance efforts. It also raises questions about legal personhood, consent, and accountability.


Key Concepts Simplified

A foundational understanding of shadow profiling and synthetic identity terminology is essential. Below is a glossary of key terms used throughout this article:

  • Algorithmic Opacity: Refers to the lack of transparency in how AI systems operate, often making it difficult even for their developers to explain how decisions are made (Burrell, 2016). This opacity complicates accountability and legal review.

  • Automated Data Profiling: The use of AI and algorithmic systems to analyze and infer personal characteristics, behaviors, or preferences from data. This type of profiling is typically done without direct human oversight and underpins practices like shadow profiling and the generation of synthetic identities.

  • Consent Fatigue: A phenomenon where users become desensitized to frequent data permission requests, often leading to uninformed or passive consent. This issue is especially problematic when data is inferred rather than explicitly collected (Barker, 2023; Taubman-Bassiran, 2019).

  • Data Minimization: A principle under regulations like the European Union’s General Data Protection Regulation (EU GDPR) that requires data collection to be limited to what is necessary for a specific purpose. Shadow profiles often defy this principle by compiling extensive inferred data.

  • Inference Regulation: A proposed policy framework to regulate data inferred by algorithms, such as predicted traits or behaviors. Unlike directly collected data, inferred data currently exists in a legal gray zone.

  • Inferred Data: Information generated through the analysis of patterns and behaviors not directly provided by the individual. This data is a cornerstone of shadow profiling and synthetic identity construction.

  • Profiling: Any automated data processing intended to analyze or predict aspects of an individual's behavior, performance, or characteristics. Profiling can include credit scoring, behavioral advertising, and fraud detection.

  • Shadow Profile: A digital dossier compiled using data that an individual did not knowingly provide. These profiles are often constructed through the actions and associations of other users or from public data scraped by AI systems. (A simplified code sketch of how such a profile accumulates inferred data appears after this glossary.)

  • Synthetic Identity: A false identity that combines real and fictional data elements. Often used in financial fraud, system testing, or machine learning, synthetic identities can also inadvertently impact real individuals.
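To ground these definitions, the short Python sketch below shows how a rudimentary shadow profile could accumulate inferred data. It is a simplified illustration only: every identifier, signal, and inference rule is hypothetical, not drawn from any real platform.

```python
# Hypothetical illustration of how inferred data accumulates into a
# shadow profile. Every identifier, signal, and rule here is invented.
from dataclasses import dataclass, field

@dataclass
class ShadowProfile:
    subject_id: str                                  # may denote someone who never signed up
    observed: dict = field(default_factory=dict)     # data supplied by *other* users
    inferred: dict = field(default_factory=dict)     # traits the system guessed

def infer_traits(profile: ShadowProfile) -> None:
    """Derive attributes the subject never disclosed (inferred data)."""
    contacts = profile.observed.get("uploaded_by_contacts", [])
    if len(contacts) >= 2:
        # Appearing in several users' address books implies a real person.
        profile.inferred["likely_real_person"] = True
    pages = profile.observed.get("pages_visited", [])
    if any("loan" in p or "credit" in p for p in pages):
        # Browsing behavior is turned into a financial-interest guess.
        profile.inferred["credit_seeking"] = True

profile = ShadowProfile(
    subject_id="unknown-7F2A",
    observed={
        "uploaded_by_contacts": ["user_19", "user_204"],   # shared address books
        "pages_visited": ["/compare-loan-rates", "/news"],
    },
)
infer_traits(profile)
print(profile.inferred)   # {'likely_real_person': True, 'credit_seeking': True}
```

Note that the subject supplied nothing directly: the profile is built entirely from others' disclosures and behavioral traces, which is precisely why consent and data minimization principles struggle to reach it.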

Before exploring how AI systems operate in practice, it is helpful to visualize the lifecycle of a synthetic identity. The infographic below illustrates the core stages: creation, usage, detection, and impact. It highlights how synthetic identities evolve and why they present significant challenges to individuals and institutions.

[Infographic: The synthetic identity lifecycle, from creation and usage to detection and impact]

AI’s Invisible Architecture

AI has evolved beyond mere data analysis; it predicts, infers, and constructs complex digital profiles that significantly impact individuals' lives. Often developed without explicit consent, these profiles can influence employment, finance, and healthcare decisions. For instance, financial institutions employ AI to detect fraudulent activities by analyzing vast transaction data. They assign real-time risk scores to prevent unauthorized transactions (Villano, 2025).
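As a hedged illustration of what such real-time scoring can look like, the sketch below assigns a rule-based risk score to a transaction. It is a toy model: the features, weights, and thresholds are invented, and production systems rely on trained models over far richer data.

```python
# Hypothetical rule-based risk scorer for card transactions.
# Every threshold and weight here is invented for illustration.

def risk_score(txn: dict, history: list[dict]) -> float:
    """Return a 0-1 score; higher means more suspicious."""
    score = 0.0
    amounts = [t["amount"] for t in history] or [txn["amount"]]
    avg = sum(amounts) / len(amounts)
    if txn["amount"] > 5 * avg:                      # unusually large purchase
        score += 0.4
    if txn["country"] not in {t["country"] for t in history}:
        score += 0.3                                 # never-before-seen country
    if txn["hour"] < 5:                              # small-hours activity
        score += 0.1
    if txn["merchant_category"] in {"crypto", "gift_cards"}:
        score += 0.2                                 # categories favored in fraud
    return min(score, 1.0)

history = [{"amount": 40.0, "country": "US"}, {"amount": 55.0, "country": "US"}]
txn = {"amount": 900.0, "country": "RO", "hour": 3, "merchant_category": "gift_cards"}
print(risk_score(txn, history))   # 1.0 -> hold the transaction for review
```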


However, the same technologies that offer protection can also be exploited. Malicious actors have increasingly used generative AI tools to create synthetic identities, combining real and fabricated information to commit fraud. The Federal Reserve has raised concerns about the rise of synthetic identity fraud facilitated by generative AI, emphasizing the need for enhanced security measures (Ibitola, 2023; Plaid, 2024; Walk-Morris, 2025).
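The flip side is detection. The sketch below illustrates, in simplified form, the kind of consistency checks institutions apply to flag possible synthetic identities. The field names, thresholds, and rules are hypothetical stand-ins for proprietary, model-driven checks.

```python
# Hypothetical consistency checks for spotting synthetic identities.
# Field names and rules are invented; real systems use cross-bureau
# data and trained models rather than fixed heuristics.
from datetime import date

def synthetic_identity_flags(applicant: dict) -> list[str]:
    flags = []
    # An SSN issued before the applicant was born cannot be legitimate.
    if applicant["ssn_issue_year"] < applicant["birth_date"].year:
        flags.append("ssn_predates_birth")
    # A "thin file" adult with a burst of new applications is a classic
    # pattern: the fabricated identity is being "seasoned" for larger fraud.
    age = (date.today() - applicant["birth_date"]).days // 365
    if (age > 25 and applicant["credit_history_months"] < 6
            and applicant["applications_last_90d"] >= 5):
        flags.append("thin_file_with_application_burst")
    # One phone number shared across many unrelated applicants suggests
    # fabricated contact details reused by a fraud ring.
    if applicant["phone_shared_with_n_applicants"] > 3:
        flags.append("shared_contact_details")
    return flags

applicant = {
    "birth_date": date(1990, 4, 2),
    "ssn_issue_year": 1987,                 # issued before birth -> fabricated
    "credit_history_months": 2,
    "applications_last_90d": 7,
    "phone_shared_with_n_applicants": 5,
}
print(synthetic_identity_flags(applicant))
```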


Moreover, the phenomenon of "shadow AI," where employees use unapproved AI tools without organizational oversight, poses significant risks (Graf, 2023; Gupta, 2024; Hautala, 2018; Mohapatra, 2025). Such practices can lead to data breaches and other compliance issues, as unauthorized AI applications may lack the necessary security protocols (Robinson, 2024; Slagg, 2025).
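One practical way organizations surface shadow AI is to scan egress or proxy logs for traffic to AI services that have not been sanctioned. The sketch below is a minimal, hypothetical version: the log format, domain list, and approved set are all invented for illustration.

```python
# Hypothetical shadow-AI detector: scan web proxy logs for traffic to
# AI services absent from the organization's approved list.
import csv
from collections import Counter

KNOWN_AI_DOMAINS = {"chat.example-ai.com", "api.example-llm.io", "gen.example-img.net"}
APPROVED = {"api.example-llm.io"}   # sanctioned via an enterprise contract

def find_shadow_ai(log_path: str) -> Counter:
    """Count per-user requests to unapproved AI services."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):        # expects columns: user, domain
            if row["domain"] in KNOWN_AI_DOMAINS - APPROVED:
                hits[(row["user"], row["domain"])] += 1
    return hits

# Example usage against a hypothetical export:
# for (user, domain), n in find_shadow_ai("proxy.csv").most_common():
#     print(f"{user} -> {domain}: {n} requests (unapproved)")
```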


The dual nature of AI, as both a tool for protection and a potential threat, underscores the importance of robust governance and ethical considerations in its deployment. Understanding these dynamics is crucial as we delve into the legal challenges and gaps associated with AI-generated shadow profiles and synthetic identities.


Legal Challenges and Gaps

Countries and U.S. states have begun developing legal tools to govern AI profiling and synthetic identity use, but responses vary in scope, strength, and enforceability. Below is an integrated overview of how international and U.S. state legal and regulatory strategies address, or fail to address, these emerging risks.

International Regulations:

  • Brazil: The draft AI Bill promotes human rights and fairness but lacks explicit measures addressing synthetic identity creation or profiling practices (Freitas & Giacchetta, 2024).

  • China: The Interim Measures for Managing Generative Artificial Intelligence Services require transparent labeling, quality controls, and discrimination safeguards, indirectly targeting misuse of AI-generated identities and profiling (China Law Translate, 2023).

  • European Union: The EU AI Act regulates high-risk AI uses like profiling and mandates human oversight and transparency (European Parliament, 2025), though synthetic identity use is only indirectly covered.

  • India: India’s draft AI policy and Digital India Act emphasize responsible AI but do not currently regulate profiling or synthetic data practices (Mohanty & Sahu, 2024).

  • Singapore: Through the Personal Data Protection Commission and the National AI Strategy, Singapore encourages explainability and accountability in AI use. Its guidance on synthetic data generation addresses privacy and risk implications (Baig et al., 2024).

  • United Kingdom: The UK government’s AI regulation white paper (2023) and the UK ICO’s AI and data protection guidance emphasize transparency, human oversight, and fairness. While not explicitly regulating synthetic identities, they seek to address profiling risks and AI-driven harms (Gov.UK, 2023; ICO, 2023).


Table 1 displays a comparative analysis of different global laws and regulations as they apply to shadow profiling and synthetic identities:

Table 1: Comparative Summary Table – Global

| Jurisdiction | Shadow Profiling Addressed? | Synthetic Identities Addressed? | Notable Instruments |
| --- | --- | --- | --- |
| Brazil | ❌ | ❌ | Draft AI Bill |
| China | ❌ (indirect) | ❌ (indirect) | GenAI Measures |
| European Union | ✅ | ❌ (indirect) | EU AI Act, EU GDPR |
| India | ❌ | ❌ | Draft AI Bill |
| Singapore | ❌ (indirect) | ✅ (via synthetic data guidance) | PDPC, National AI Strategy |
| United Kingdom | ✅ | ❌ (indirect) | ICO AI Guidance, AI Regulation White Paper |

Table 2 displays a comparative analysis of different U.S. state laws as they apply to shadow profiling and synthetic identities:

Table 2: Comparative Summary Table – U.S. States

| State | Shadow Profiling Addressed? | Synthetic Identities Addressed? | Notable Instruments |
| --- | --- | --- | --- |
| California | ✅ (via CPRA, profiling rights) | ❌ (not explicitly) | California Consumer Privacy Act as amended by the California Privacy Rights Act (Magramo, 2024) |
| Colorado | ✅ (via AI Act, risk disclosures) | ❌ (only indirectly) | Colorado AI Act |
| Utah | ❌ (general profiling not covered) | ✅ (via transparency mandates) | AI Policy Act |

While most frameworks promote transparency and accountability, only a few directly address synthetic identities. Shadow profiling is more broadly recognized, though protections often rely on general data rights rather than AI-specific rules. Stronger and more unified global standards remain necessary to address these expanding data protection risks.


Ethical and Societal Stakes: Navigating the Unseen Perils of AI-Driven Identities

In an era where AI increasingly mediates our interactions, decisions, and perceptions, the ethical and societal implications of AI-generated shadow profiles and synthetic identities demand urgent attention (Graf, 2023; Gupta, 2024; Hautala, 2018; Mohapatra, 2025). Often developed without individuals' knowledge or consent, these covert constructions challenge foundational principles of fairness, autonomy, and privacy. This section delves into the multifaceted ethical concerns arising from these AI practices, setting the stage for a deeper exploration of their legal and regulatory ramifications in the subsequent analysis.

  • Autonomy and Informed Consent: The pervasive deployment of AI in decision-making processes raises significant concerns about individual autonomy. Users often remain unaware of the extent to which AI systems influence their opportunities and choices, from job applications to credit approvals. This lack of transparency undermines informed consent, as individuals cannot contest or understand decisions made by opaque algorithms (Weching, 2024). The erosion of autonomy is further exacerbated when AI systems generate synthetic identities or shadow profiles, making decisions based on inferred data without explicit user input.

  • Bias and Discrimination: AI systems, trained on historical data, can inadvertently perpetuate and amplify existing societal biases (Chapman University, n.d.). In recruitment, AI-driven tools have been found to disadvantage candidates with non-native accents or speech-affecting disabilities, primarily due to biased training datasets and a lack of transparency in decision-making processes (Taylor, 2025). Similarly, in law enforcement, predictive policing algorithms have been criticized for disproportionately targeting minority communities, reflecting entrenched biases in the data they analyze (Barnhart, 2024). These instances underscore the ethical imperative to scrutinize and rectify biases embedded within AI systems.

  • Surveillance and Privacy Intrusions: Shadow profiling, wherein AI constructs detailed user profiles from disparate data sources, poses profound privacy challenges (Graf, 2023; Gupta, 2024; Hautala, 2018; Mohapatra, 2025). Often created without user awareness, these profiles enable intrusive surveillance and targeted manipulation. The unauthorized use of AI tools, or "shadow AI," within organizations can lead to data breaches and non-compliance with privacy regulations, exposing individuals to risks without their knowledge (Gupta, 2024). Such practices infringe on personal privacy and erode public trust in digital systems.


As these ethical dilemmas grow in scale and visibility, it becomes increasingly clear that principles alone are insufficient to safeguard individual rights and institutional trust. What is needed now is a robust, coordinated response that transforms ethical imperatives into actionable strategies. The following section outlines essential policy, legal, and technical interventions designed to address the expanding threats of synthetic identities and shadow profiling.


Policy Solutions: Confronting the Threat of Synthetic Identities and Shadow Profiling

The escalating prevalence of shadow profiling and synthetic identities presents profound challenges to data protection, security, and trust in the digital age. These sophisticated forms of identity manipulation compromise individual autonomy and expose organizations to significant financial and reputational risks. Addressing these issues necessitates a multifaceted policy approach integrating legal reforms, technological safeguards, and public education. This section outlines critical, actionable strategies for governments, institutions, and individuals to mitigate these threats and reinforce ethical AI governance.

  • Collaborative Efforts: Building resilient defenses against synthetic identities and shadow profiling requires coordinated partnerships across sectors, including:

    • Encourage Industry Standards and Self-Regulation: Industry groups should develop and adhere to standards that address the ethical use of AI and data management. Self-regulatory frameworks can complement legal measures, promote responsible practices, and foster public trust (FedPayments Improvement, 2021).

    • Foster Cross-Sector Collaboration: Combating synthetic identities and shadow profiling requires joint initiatives among government agencies, private companies, and international partners. Information sharing and cooperative research can improve collective resilience (FedPayments Improvement, 2021).

  • Legal and Regulatory Reforms: Establishing clear legal definitions and obligations is foundational to deterring identity misuse and AI-driven profiling, including:

    • Expand Legal Definitions to Include Synthetic and Inferred Data: Data protection laws and regulations must explicitly recognize synthetic identities and inferred data to close enforcement gaps. Including these categories in legal frameworks strengthens regulatory oversight and legal recourse (Omaar & Castro, 2024).

    • Mandate Transparency and Accountability in AI Systems: Organizations should be required to disclose AI tools used for profiling or identity inference. Mandatory audits and risk assessments ensure alignment with ethical and legal standards (Omaar & Castro, 2024).

  • Public Awareness and Education: Raising awareness about synthetic identities and shadow profiling helps empower individuals and reduce systemic vulnerabilities, including:

    • Launch Comprehensive Awareness Campaigns: Education initiatives can inform the public about risks and prevention strategies, including monitoring credit activity and protecting personal data (FedPayments Improvement, 2021).

    • Provide Resources for Victims: Offering recovery support, which includes legal assistance, identity restoration guidance, and fraud monitoring, can mitigate long-term impacts on victims of identity misuse (Custers & Vrabec, 2024; FedPayments Improvement, 2021).

  • Technological Safeguards: Integrating secure design principles and intelligent detection systems into AI architectures can mitigate the misuse of synthetic identities.

    • Implement Privacy-by-Design Principles: AI systems must prioritize user privacy from inception, including data minimization, anonymization, and access control mechanisms (Omaar & Castro, 2024). (A minimal sketch follows this list.)

    • Strengthen Identity Verification Mechanisms: Advanced verification technologies, such as biometric analysis and multi-factor authentication, can help detect and prevent synthetic identity fraud (FedPayments Improvement, 2021).
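As a minimal sketch of the privacy-by-design item above, the code below keeps only the fields needed for a stated purpose (data minimization) and replaces the identifier with a keyed hash (pseudonymization). The field names and purpose map are hypothetical.

```python
# Hypothetical privacy-by-design intake: retain only the fields needed
# for the stated purpose and pseudonymize the identifier before storage.
import hashlib, hmac, os

PURPOSE_FIELDS = {"fraud_check": {"account_id", "amount", "country"}}  # minimization policy

SECRET = os.urandom(32)   # in practice: a managed, rotated, access-controlled key

def pseudonymize(value: str) -> str:
    # Keyed hash: stable within the system, not reversible without the key.
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(record: dict, purpose: str) -> dict:
    allowed = PURPOSE_FIELDS[purpose]
    slim = {k: v for k, v in record.items() if k in allowed}
    slim["account_id"] = pseudonymize(slim["account_id"])
    return slim

raw = {"account_id": "AC-998", "amount": 120.0, "country": "US",
       "name": "Jane Doe", "browsing_history": ["..."]}   # extraneous fields
print(minimize(raw, "fraud_check"))   # name and browsing history are never stored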


By implementing these structured, multi-level solutions, stakeholders can establish a robust defense against the expanding threats posed by shadow profiling and synthetic identity fraud (Lindsay, 2025; Texas Capital Bank, 2024). The final section reflects on these findings and highlights pathways for proactive, principled adaptation in a rapidly evolving digital ecosystem.


Call to Action: Confronting the Rise of Synthetic Identities and Shadow Profiling

Individuals and organizations must act decisively to combat the evolving threats of synthetic identities and shadow profiling. Below are practical, high-impact steps to help reduce exposure, build resilience, and promote ethical digital practices:

  • Understanding the Threat: Recent incidents underscore the severity of these risks. They include:

    • Deepfake Scams in Professional Settings: Professionals like Nicole Yelland have fallen victim to elaborate job scams involving deepfake technology, leading to increased paranoia and rigorous verification processes in professional communications (Goode, 2025).

    • Surge in SIM-Swap Fraud: The UK experienced a dramatic rise in SIM-swap fraud cases, increasing from 289 in 2023 to nearly 3,000 in 2024. Fraudsters exploit mobile network vulnerabilities to hijack identities and access sensitive information (Ellery & Sellman, 2025).

    • Synthetic Identity Fraud Escalation: Synthetic identity fraud has become the fastest-growing type of fraud in 2024. Criminals combine real and fake information to create new identities, leading to increased fraud against government agencies and financial institutions (FinTech Global, 2025; Williams, 2024).

  • Taking Action: To combat these evolving threats, consider the following steps:

    • Audit Your Digital Footprint: Regularly review and manage the personal information you share online. Utilize privacy settings on social media platforms and be cautious about the data you disclose.

    • Demand Ethical AI Practices: Encourage organizations to prioritize ethical considerations in AI development, ensuring that technologies are designed with privacy-by-design principles and are subject to regular audits for compliance and fairness.

    • Enhance Verification Protocols: Implement multi-factor authentication and other robust verification methods to protect against unauthorized access and identity fraud. (A minimal TOTP verification sketch follows this list.)

    • Stay Informed and Vigilant: Keep abreast of the latest developments in cybersecurity threats and best practices. Participate in awareness programs and educate others about the importance of digital security.

    • Support Robust Digital Policies: Advocate for and support policies that enhance digital rights, promote transparency in AI systems, and enforce strict laws and regulations against unauthorized data collection and usage.
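For the multi-factor authentication step above, the sketch below verifies a time-based one-time password (TOTP, RFC 6238) using only Python's standard library. It is one factor of a larger verification stack; real deployments add rate limiting, secret management, and replay protection, and the example secret is invented.

```python
# Minimal TOTP (RFC 6238) verification using only the standard library.
import base64, hashlib, hmac, struct, time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def verify_totp(secret_b32: str, submitted: str, step: int = 30, window: int = 1) -> bool:
    """Accept codes from the current step +/- `window` to absorb clock drift."""
    now = int(time.time()) // step
    return any(hmac.compare_digest(hotp(secret_b32, now + off), submitted)
               for off in range(-window, window + 1))

SECRET = "JBSWY3DPEHPK3PXP"   # example Base32 secret (hypothetical)
print(verify_totp(SECRET, hotp(SECRET, int(time.time()) // 30)))   # True
```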

By taking these proactive measures, individuals and organizations can significantly reduce the risks associated with synthetic identities and shadow profiling. Collective vigilance and commitment to ethical digital practices are essential in safeguarding our digital future.


Conclusion: Your Digital Twin Has Power

In the rapidly evolving digital landscape, the emergence of shadow profiling and synthetic identities presents profound challenges to personal autonomy, data protection, and societal trust. These digital constructs are meticulously assembled from fragmented data points. They can influence decisions ranging from credit approvals to employment opportunities, often without individuals' knowledge or consent (Gupta, 2024).


The concept of a "digital twin" extends beyond industrial applications, encapsulating detailed virtual representations of individuals. While these models offer benefits in personalized healthcare and predictive analytics, they also raise significant ethical concerns (Methuku & Myakala, 2025). Unauthorized creation and manipulation of digital twins can lead to identity fragmentation, psychological distress, and exploitation (Zemskov, 2024).

Synthetic identities further complicate the picture. Constructed from a blend of real and fabricated data, these identities are used in sophisticated fraud schemes that undermine public trust and institutional security. Their growing use in impersonation scams, financial deception, and systemic exploitation illustrates how AI can be weaponized to erode the very fabric of digital trust. These identities evade conventional detection systems and are difficult to trace back to a human source, making regulatory responses more urgent (Moodys, 2025).


Shadow AI, the unsanctioned use of AI tools within organizations, adds another layer of risk (Zielinski, 2025). Often seeking efficiency, employees may inadvertently expose sensitive data to unvetted AI systems. These unauthorized actions can contribute to potential compliance violations, data breaches, and other severe security incidents (Zielinski, 2025). This clandestine adoption of AI technologies underscores the urgent need for robust governance frameworks and ethical oversight.

The convergence of these technologies necessitates a reevaluation of existing ethical, legal, and regulatory frameworks. Current laws and regulations often lag behind technological advancements, creating gaps that can be exploited. Developing comprehensive policies that address data ownership, consent, and accountability is imperative to safeguard individual rights and societal values. These policies must aggressively counter the misuse of fabricated identities in the digital realm (Ghage, 2024; Mohapatra, 2025).


As we navigate this complex digital frontier, the power of our digital twins and the growing threat of shadow profiles and synthetic identities demand our attention. By fostering transparency, enforcing ethical standards, and advocating robust regulatory measures, we can ensure that technological progress aligns with the fundamental principles of human dignity and autonomy.


Essential Questions for a Responsible Digital Future

The conversation does not end here. The questions we ask today will shape the future of our digital society. Below is a table of essential questions that individuals, organizations, and policymakers must consider when developing ethical, legal, and practical responses to synthetic identities and shadow profiling.


Table 3: Essential Questions We Should Consider Answering

| Stakeholder | Essential Questions |
| --- | --- |
| Individuals | What data about me exists beyond my control, and who has access to it? |
| | Can I be judged or denied opportunities based on information I never shared? |
| | If my digital twin is wrong, how do I correct it, and who is responsible? |
| | Am I consenting to more than I understand when I accept terms online? |
| | What steps can I take today to reclaim some control over my digital identity? |
| Organizations | Are we collecting or inferring data about people who never gave us permission? |
| | Can we justify the creation of profiles that individuals cannot see or challenge? |
| | What would transparency look like if we explained our data use to those being profiled? |
| | Are we prepared for the reputational fallout of a breach involving synthetic identities? |
| | How are we monitoring our employees' use of shadow AI tools, and what risks do they pose? |
| Policymakers | Do current laws adequately cover inferred, synthesized, or fabricated data? |
| | Should synthetic identities and shadow profiles have the same regulatory protections as traditional personally identifiable information (PII)? |
| | What safeguards are in place to ensure that profiling does not lead to algorithmic discrimination? |
| | How can legislation balance innovation in AI with the need to protect personal dignity and agency? |
| | What oversight mechanisms are needed to audit and hold accountable systems that operate invisibly? |

References

  1. Baig, A., Gardezi, S.E., & Khan, S. (2024, October 26). An overview of Singapore’s proposed guide on synthetic data generation. Securiti.ai. https://securiti.ai/singapore-proposed-guide-on-synthetic-data-generation/

  2. Barker, S. (2023, November 16). What is consent fatigue? Didomi Blog. https://www.didomi.io/blog/what-is-consent-fatigue

  3. Barnhart, J. (2024, October 30). Ethical AI in law enforcement: Navigating the balance between innovation and responsibility. Police1. https://www.police1.com/investigations/ethical-ai-in-law-enforcement-navigating-the-balance-between-innovation-and-responsibility

  4. Bilton, N. (2024, May 8). Identity theft is a “Kafkaesque” nightmare. AI makes it worse. Vanity Fair. https://www.vanityfair.com/news/story/identity-theft-ai-deepfakes

  5. Burrell, J. (2016, January 6). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512

  6. Butler, S. (2021, December 10). What are Facebook shadow profiles, and should you be worried? How-To-Geek. https://www.howtogeek.com/768652/what-are-facebook-shadow-profiles-and-should-you-be-worried/

  7. Chapman University. (n.d.). Bias in AI. https://www.chapman.edu/ai/bias-in-ai.aspx

  8. China Law Translate. (2023, July 13). Interim measures for the management of generative artificial intelligence services. https://www.chinalawtranslate.com/en/generative-ai-interim/

  9. Custers, B., & Vrabec, H. (2024, April). Tell me something new: Data subject rights applied to inferred data and profiles. Computer Law & Security Review. https://doi.org/10.1016/j.clsr.2024.105956

  10. Ellery, B., & Sellman, M. (2025, May 11). The rising menace of mobile phone fraud – how hackers took control of M&S. The Times. https://www.thetimes.com/uk/crime/article/cases-of-sim-swap-fraud-the-method-used-to-hack-m-and-s-surge-3bhs5csff

  11. European Parliament. (2025). EU AI Act: First regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

  12. FedPayments Improvement. (2021). Synthetic identity fraud defined. The Federal Reserve. https://fedpaymentsimprovement.org/strategic-initiatives/payments-security/synthetic-identity-payments-fraud/synthetic-identity-fraud-defined/

  13. FinTech Global. (2025, February 24). Navigating the threat of synthetic identity fraud in today’s digital world. https://fintech.global/2025/02/24/navigating-the-threat-of-synthetic-identity-fraud-in-todays-digital-world/

  14. Freitas, C.T., & Giacchetta, A.Z. (2024, May 13). AI watch: Global regulatory tracker – Brazil. White & Case. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-brazil

  15. Ghage, N. (2024, July). Digital identity in the age of cybersecurity. Global Journal of Computer Science & Technology. http://dx.doi.org/10.34257/LJRCSTVOL24IS1PG1

  16. Giuffre, M., & Shung, D.L. (2023, October 9). Harnessing the power of synthetic data in healthcare. KPMG. https://kpmg.com/us/en/articles/2022/synthetic-identity-fraud.html

  17. Goode, L. (2025, May 12). Deepfakes, scams, and the age of paranoia. Wired. https://www.wired.com/story/paranoia-social-engineering-real-fake/

  18. Gov.UK. (2023, August 3). A pro-innovation approach to AI regulation. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

  19. Graf, J. (2023, September 22). Investigating shadow profiles: The data of others. Tech Xplore. https://techxplore.com/news/2023-09-shadow-profiles.html

  20. Gupta, A. (2024, February 6). The dangers of uncontrolled AI: Shadow AI and ethical risks. Securiti. https://securiti.ai/blog/dangers-of-uncontrolled-ai/

  21. Hautala, L. (2018, April 11). Shadow profiles. Facebook has information you didn’t hand over. CNET. https://www.cnet.com/news/privacy/shadow-profiles-facebook-has-information-you-didnt-hand-over/

  22. Ibitola, J. (2023, September 16). Risk profiling amidst deceptive identities. Flagright. https://www.flagright.com/post/risk-profiling-amidst-deceptive-identities

  23. Jones, M.A., & Canter, J. (2022, May 26). California AG interprets “inferences” under CCPA. Crowell. https://www.crowell.com/en/insights/client-alerts/california-ag-interprets-inferences-under-ccpa

  24. Lindsay, J. (2025, March 31). Synthetic identity fraud: How AI is changing the game. Federal Reserve Bank of Boston. https://www.bostonfed.org/publications/six-hundred-atlantic/interviews/synthetic-identity-fraud-how-ai-is-changing-the-game.aspx

  25. Lu, S. (2020). Algorithmic opacity, private accountability, and corporate social responsibility. Vanderbilt Journal of Entertainment and Technology Law. https://scholarship.law.vanderbilt.edu/jetlaw/vol23/iss1/3/

  26. Magramo, K. (2024, May 17). British engineering giant Arup revealed as $25 million deepfake scam victim. CNN. https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk

  27. Methuku, V., & Myakala, P.K. (2025, February 28). Digital doppelgangers: Ethical and societal implications of pre-mortem AI clones. arXiv. https://doi.org/10.48550/arXiv.2502.21248

  28. Mohanty, A., & Sahu, S. (2024, November 21). India’s advance on AI regulation. Carnegie India. https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?center=india&lang=en

  29. Mohapatra, D. (2025, March 2). Shadow AI. The hidden risks & how to identify AI compliance violations. Cognitive View. https://blog.cognitiveview.com/shadow-ai-the-hidden-risks-how-to-identify-ai-compliance-violations/

  30. Moodys. (2025, February 18). Synthetic identities and why they are important in today’s digital landscape. https://www.moodys.com/web/en/us/kyc/resources/insights/synthetic-identities-and-why-they-are-important-in-todays-digital-landscape.html

  31. Omaar, H., & Castro, D. (2024, May 20). Picking the right policy solutions for AI concerns. Center for Data Innovation. https://www2.datainnovation.org/2024-ai-policy-solutions.pdf

  32. Plaid. (2024, May 1). Synthetic identity fraud: How to detect and prevent it. https://plaid.com/resources/fraud/synthetic-identity-fraud/

  33. Robinson, B. (2024, August 2). ‘Shadow AI’: The controversial trend that could create disaster, experts say. Forbes. https://www.forbes.com/sites/bryanrobinson/2024/08/02/shadow-ai-the-controversial-2024-trend-that-could-create-disaster-experts-say/

  34. Saunders, T., & Prescott, K. (2024, July 10). Deepfake fraudsters impersonate FTSE chief executives. The Times. https://www.thetimes.com/business-money/technology/article/deepfake-fraudsters-impersonate-ftse-chief-executives-z9vvnz93l

  35. Slagg, A. (2025, January 10). Shadow AI: Shining light on a growing security threat. Fed Tech. https://fedtechmagazine.com/article/2025/01/shadow-ai-a-growing-security-threat-perfcon

  36. Taubman-Bassiran, T. (2019, January 29). How to avoid consent fatigue. IAPP. https://iapp.org/news/a/how-to-avoid-consent-fatigue/

  37. Taylor, J. (2025, May 13). People interviewed by AI for jobs face discrimination risks, Australian study warns. The Guardian. https://www.theguardian.com/australia-news/2025/may/14/people-interviewed-by-ai-for-jobs-face-discrimination-risks-australian-study-warns

  38. Texas Capital Bank. (2024, October). The nightmare of fake identities: Understanding synthetic identity fraud. https://www.texascapitalbank.com/insights/october-2024-nightmare-fake-identities

  39. UK Information Commissioner’s Office. (2023). Guidance on AI and data protection. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

  40. Villano, M. (2025, May 12). At Mastercard, AI is helping to power fraud-detection systems. Business Insider.

  41. Walk-Morris, T. (2025, April 3). Fed raises alarm on synthetic identity fraud. Payments Dive. https://www.paymentsdive.com/news/federal-reserve-alarm-synthetic-identity-fraud-scams/744337/

  42. Weching, L. (2024, October 3). Inevitable challenges of autonomy: Ethical concerns in personalized algorithmic decision-making. Humanities & Social Sciences Communications. https://www.nature.com/articles/s41599-024-03864-y

  43. Williams, J. (Host). (2022–present). Synthetic identity fraud rising since 2024 [Audio podcast]. StateScoop. https://statescoop.com/radio/synthetic-identity-fraud-public-sector-2024/

  44. Zemskov, A.D. (2024, December 18). Security and privacy of digital twins for advanced manufacturing: A study. arXiv. https://doi.org/10.48550/arXiv.2412.13939

  45. Zielinski, D. (2025, May 9). Shadow AI is on the rise: Why it matters to HR. SHRM. https://www.shrm.org/topics-tools/flagships/ai-hi/shadow-ai-on-the-rise
