U.S. AI Legislation in Pieces: How U.S. States Are Reshaping U.S. Artificial Intelligence Governance and Enforcement Efforts
- christopherstevens3
- Jun 6
- 18 min read

Introduction
The accelerated deployment of Artificial Intelligence (AI) across industries has raised questions about how personal and sensitive data is collected, processed, and protected. AI systems used in law enforcement, healthcare, employment, or marketing routinely rely on vast datasets containing personal and sensitive information. Yet this rapid expansion has far outpaced the development of cohesive U.S. federal and state laws and regulations, resulting in significant gaps in AI governance and data privacy oversight.
In response, several U.S. states have enacted their own AI-specific laws aimed at addressing issues such as algorithmic discrimination, transparency, and individual rights. While these initiatives offer vital protections, they have also created a fragmented and inconsistent legal and regulatory landscape: individuals face varying levels of data privacy protection depending on their state of residence, and organizations must grapple with compliance across diverse legal frameworks.
This patchwork has prompted growing calls from lawmakers, cybersecurity experts, and industry leaders for a closer partnership between the U.S. federal and state governments. Such coordination is essential to harmonize AI governance, reduce legal uncertainty, and ensure that data privacy protections are both robust and uniformly applied across the nation (Rasmussen, 2025).
This article examines how U.S. state-level AI legislation is reshaping AI governance and data privacy in the United States. It then highlights the challenges of legal and regulatory fragmentation and, finally, considers how a collaborative approach between the U.S. federal government and the states can promote stronger oversight and public trust in AI. Note: Please review Appendix I for a glossary of key terms used throughout the article.
The Emergence of U.S. State-Level AI Laws
As the U.S. federal government struggles to enact comprehensive AI governance and data privacy legislation, U.S. states are rapidly filling the regulatory vacuum. As of June 2025, dozens of states have proposed or enacted AI-specific laws. These legislative efforts reflect growing concern about the ethical, economic, and privacy implications of AI. However, they also reveal the complexities of governing a rapidly evolving technology within a fragmented legal and regulatory landscape.
As U.S. states move to regulate AI independently, their approaches vary significantly in scope and complexity. Some states, like Colorado and Connecticut, have adopted comprehensive laws that introduce enforceable rules for high-risk systems, algorithmic accountability, and public transparency. Others, such as Arkansas and Utah, have focused on narrower, use-case-specific measures. The following figure categorizes U.S. states by the scope of their AI laws to highlight the emerging patchwork of AI governance and regulation across the country:
[Figure: U.S. states categorized by the scope of their AI laws]
Some U.S. state laws aim to regulate specific AI applications, such as deepfakes, profiling, or mental health chatbots. Others propose broader frameworks to manage the risks of "high-risk" or consequential algorithmic decision-making. Collectively, these U.S. state-level initiatives are shaping a patchwork of AI governance across the United States, influencing national debates and challenging companies to comply with a growing array of regulations.
Case Studies: State-Level AI Legislation
The following table summarizes the most notable AI-related legislative developments at the U.S. state level as of June 2025, highlighting the diversity and complexity of emerging AI governance approaches.
Table 1: Summary of U.S. State-Level AI Laws (as of June 2025)
| State | U.S. State-Level AI Law(s) Overview |
| --- | --- |
| Arkansas | HB 1958 mandates public AI use policies; HB 1876 grants ownership rights to individuals contributing input to generative AI tools. |
| California | AB 1008 integrates AI governance into CCPA enforcement; AB 2602 and AB 1836 address unauthorized use of digital replicas; SB 243 protects minors online. |
| Colorado | SB 24-205 targets high-risk AI systems in sectors such as housing and healthcare, mandating risk disclosures and risk management protocols; the Attorney General enforces the law. |
| Connecticut | SB 2 requires fairness, transparency, and labeling of synthetic media, applying to developers and deployers of AI systems. |
| Illinois | HB 5399 directs the Board of Higher Education to study AI’s impact on education and research statewide. |
| Kentucky | SB 4 authorizes the Commonwealth Office of Technology to develop AI policy standards for public use and procurement. |
| Maryland | HB 956 creates a working group to study AI’s impact on consumer protection, private industry, and government operations. |
| Montana | SB 212 protects computational resources from government interference and requires AI risk management plans for critical infrastructure. |
| New Jersey | Deepfake legislation criminalizes the malicious use of synthetic media, with penalties including up to five years’ imprisonment and fines of up to $30,000. |
| Utah | SB 226 mandates chatbot disclosure; SB 332 restricts political deepfakes; HB 452 bans AI impersonation in mental health interactions. |
| West Virginia | HB 3187 establishes a task force to study the economic, legal, and ethical impacts of AI across state sectors. |
These state-led efforts reveal both innovation and fragmentation in U.S. AI policy. While they address urgent privacy and fairness issues, the lack of national alignment underscores the need for a coordinated federal strategy. In the meantime, U.S. states continue to shape the AI landscape, each adding its voice to a complex and evolving regulatory chorus.
U.S. State-Level AI Enforcement Actions
As the U.S. federal government continues to debate the contours of comprehensive AI regulation, several U.S. states have already begun enforcing existing consumer protection, civil rights, and data privacy laws against AI-driven harms. Between 2023 and 2025, state attorneys general launched investigations, issued legal advisories, and settled cases related to deceptive AI marketing, algorithmic discrimination, and the improper use of generative technologies.
The following table provides a summary of notable U.S. state-level enforcement actions explicitly taken under, or aligned with, AI-related legal authorities. These actions underscore the evolving role of U.S. states in establishing de facto standards for the responsible development and deployment of AI systems.
Table 2: Summary of U.S. State-Level AI Enforcement Actions (as of June 2025)
| State | AI Enforcement Action(s) |
| --- | --- |
| California | Attorney General Rob Bonta issued legal advisories affirming that existing state consumer protection, privacy, and civil rights laws apply to AI systems (Bonta, 2025). The California Privacy Protection Agency (CPPA) fined a company $345,178 under the California Consumer Privacy Act for data handling practices potentially relevant to AI-adjacent profiling and automation (Stauss, 2025). |
| Massachusetts | In April 2024, Attorney General Andrea Campbell issued an advisory stating that AI is covered under the state's existing consumer protection, anti-discrimination, and data privacy laws. The advisory warns businesses against misrepresenting AI capabilities and deploying biased systems (Zafar, 2024). |
| New Jersey | In January 2025, Attorney General Matthew Platkin launched the Civil Rights & Technology Initiative to address AI-driven discrimination in hiring, housing, and credit decisions (Parker & Manley, 2025). In April 2025, New Jersey enacted a deepfake law that criminalizes malicious synthetic media, with penalties of up to five years’ imprisonment (Nieto-Munoz, 2025). |
| Oregon | In December 2024, Attorney General Ellen Rosenblum issued guidance clarifying that AI systems must comply with the state's Unlawful Trade Practices Act and anti-discrimination laws. The guidance emphasizes transparency and fairness in algorithmic design (Rosenblum, 2024). |
| Texas | Texas led multiple enforcement actions: (1) settled with Pieces Technologies over deceptive claims about its healthcare generative AI (Paxton, 2024b); (2) launched privacy investigations into Character.AI and others under the SCOPE Act and the Texas Data Privacy and Security Act (Paxton, 2024a); (3) opened an inquiry into DeepSeek in February 2025 over potential foreign influence and privacy violations (Smyser et al., 2025). |
U.S. Federal Government AI Legislation: Current Landscape and Implications
While U.S. state governments have taken the lead in regulating AI and data privacy, their efforts have highlighted the urgent need for coherent federal action. As states adopt divergent standards, the U.S. federal government faces growing pressure to deliver a unified national response that balances innovation with individual rights.
AI is reshaping nearly every sector, from national defense to healthcare to consumer analytics. Unfortunately, the U.S. still lacks a comprehensive national framework to regulate the development, use, and privacy implications of the technology. The U.S. federal government has taken incremental steps through executive orders, agency guidance, and targeted legislation, but the absence of both a unified national AI law and a comprehensive federal data privacy law leaves significant legal and regulatory gaps and perpetuates uncertainty for consumers and businesses alike.
The development of U.S. AI policy has accelerated unevenly across the U.S. federal landscape, with executive actions, agency responses, and Congressional legislation occurring sporadically. The timeline below highlights key milestones for the U.S. federal and state governments between 2023 and 2025. It offers context for the evolving AI governance discourse and the resulting data privacy, legal, and regulatory tensions:
[Figure: Timeline of key U.S. federal and state AI policy milestones, 2023–2025]
U.S. Presidential Executive Orders and U.S. AI Policy Shifts
In January 2025, President Donald Trump issued Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (The White House, 2025b). This order overturned several components of the prior administration's AI governance agenda, such as the “Blueprint for an AI Bill of Rights” (The White House, 2022). Instead, it emphasized deregulation to encourage domestic AI innovation and competitiveness (The White House, 2025b).
While the executive order positions the U.S. to compete for global leadership in AI, it has drawn criticism from civil liberties groups for omitting substantive safeguards on algorithmic accountability, privacy rights, and bias prevention. Critics argue that it prioritizes short-term economic gain over long-term civil protections and data privacy, and that it accomplishes little given the absence of binding U.S. AI governance and federal data privacy laws to counterbalance the deregulatory push.
AI-Focused Legislation in the U.S. Congress
In parallel, the U.S. Congress has introduced piecemeal legislation that addresses specific aspects of AI governance. The TAKE IT DOWN Act (The White House, 2025a), enacted in 2025, criminalizes the distribution of non-consensual AI-generated explicit images and mandates their removal within 48 hours of a valid request. It empowers the Federal Trade Commission (FTC) to enforce platform compliance (Killion, 2025).
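To make the 48-hour mandate concrete, here is a minimal Python sketch of how a platform's compliance tooling might track removal deadlines. It is illustrative only: the record fields and function names are assumptions for the sake of the example, not part of the statute or of any real moderation system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: tracking the TAKE IT DOWN Act's 48-hour removal
# window for a valid takedown request. All names are illustrative
# assumptions, not a real moderation API.

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    content_id: str
    received_at: datetime               # when the valid request was received
    removed_at: datetime | None = None  # when the content was taken down

    def deadline(self) -> datetime:
        """Statutory removal deadline: 48 hours after a valid request."""
        return self.received_at + REMOVAL_WINDOW

    def is_compliant(self, now: datetime | None = None) -> bool:
        """True if removal happened (or can still happen) inside the window."""
        now = now or datetime.now(timezone.utc)
        if self.removed_at is not None:
            return self.removed_at <= self.deadline()
        return now <= self.deadline()

# Example: a request received at noon UTC must be actioned by noon two days later.
req = TakedownRequest("img-123", datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc))
print(req.deadline())  # 2025-06-03 12:00:00+00:00
print(req.is_compliant(datetime(2025, 6, 2, 12, 0, tzinfo=timezone.utc)))  # True
```

Even this toy version shows why the statute is operationally demanding: compliance hinges on reliable intake timestamps and an auditable removal log, not just a content filter.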
Another initiative, the CREATE AI Act (Heinrich, 2023), aims to democratize access to AI research resources by establishing the National AI Research Resource. The bill includes provisions for privacy-enhancing technologies and guidelines for the ethical development of AI (Heinrich, 2023).
While these bills are important, they do not amount to a cohesive U.S. national AI governance strategy or a robust federal data privacy law. Their issue-specific scope means they fall short of addressing systemic challenges such as algorithmic bias, data misuse, and the lack of transparency in AI decision-making systems.
Fragmented Agency Roles: Currently, no single U.S. federal agency has overarching jurisdiction over AI. Regulatory responsibilities are instead split among multiple entities:
The Department of Defense (DoD) governs AI development in military applications through its “Responsible AI” principles, which prioritize traceability, reliability, and oversight (U.S. Department of Defense, 2020).
The Federal Trade Commission (FTC) uses existing consumer protection statutes to pursue deceptive AI marketing practices and undisclosed profiling, but it lacks explicit AI regulatory authority (Federal Trade Commission, 2025).
The National Institute of Standards and Technology (NIST) has published its AI Risk Management Framework (NIST, 2023), a voluntary framework that guides U.S. federal agencies and the private sector alike but binds neither.
This fragmented structure underscores the pressing need for a centralized, enforceable U.S. federal framework that unifies data privacy, safety, and ethical governance under a single legislative authority.
Global Engagement and Strategic Partnerships
The U.S. federal AI strategy has also extended into international arenas. During his 2025 diplomatic visit to the United Arab Emirates (UAE), President Trump announced a $200 billion U.S.–UAE partnership that includes the creation of the UAE–U.S. AI Campus in Abu Dhabi. The facility, led by Emirati tech firm G42, will span ten square miles and scale up to five gigawatts of capacity, positioning it as a global AI compute and data infrastructure hub (US Mission UAE, 2025).
The agreement promises to expand U.S. cloud service reach and AI capabilities, but it also raises significant AI governance and data privacy concerns, primarily because vast amounts of user data may now be processed outside the United States. The lack of a U.S. federal data privacy law governing cross-border AI data flows exacerbates these concerns, leaving international data governance and alignment with U.S. standards an unresolved challenge.
Implications for Data Privacy
The most profound implication of the current U.S. federal posture is its failure to provide a comprehensive national legal framework governing AI and data privacy. Existing federal laws are narrowly tailored to specific sectors and are insufficient for the general-purpose nature of AI. This void has pushed states like California, Colorado, and Connecticut to enact their own AI and privacy laws, resulting in a fragmented legal landscape. Proposals such as the “10-year moratorium on state AI regulation,” embedded in the controversial "One Big Beautiful Bill," have further escalated tensions (Bunn et al., 2025). Notably, even prominent industry figures, such as Anthropic CEO Dario Amodei, have condemned the moratorium, instead calling for strong national standards on transparency, accountability, and safe deployment (Bajwa, 2025).
Without national AI governance and data privacy laws, the U.S. remains vulnerable to uneven enforcement, regulatory arbitrage, and diminished public trust in technology. The country stands at a crossroads: recent executive and legislative efforts reflect growing federal engagement with AI, but the United States cannot consistently regulate AI systems or effectively protect individuals’ data rights without federal-level AI governance and data privacy legislation. Addressing these challenges will require a coordinated, bipartisan effort to enact comprehensive national AI governance laws and a comprehensive federal data privacy framework.
Compliance Challenges in a Fragmented AI Regulatory Landscape
The proliferation of AI laws across U.S. states has created a disjointed regulatory environment, presenting compliance obstacles for both organizations and individuals. These challenges span legal, operational, and ethical dimensions, underscoring the urgency of unified governance. The fractured nature of AI oversight in the United States stems in part from overlapping responsibilities at the federal and state levels: U.S. federal agencies, such as the DoD and the FTC, oversee sector-specific applications or voluntary frameworks.
Conversely, U.S. states have implemented a range of laws targeting use cases like deepfakes, generative content, and algorithmic fairness. This overlapping jurisdiction complicates compliance for organizations and creates inconsistent protection for individuals. The following diagram illustrates areas of exclusive and shared legal and regulatory responsibility:
[Figure: Areas of exclusive and shared federal and state regulatory responsibility for AI]
Below is a breakdown of key issues emerging from this fragmented landscape; a brief code sketch after the list illustrates how the patchwork translates into day-to-day compliance tracking:
Adaptation Delays and Innovation Risks: Organizations must frequently retool internal systems, product features, and data practices to align with varying state-level AI requirements. This constant adjustment delays deployment, slows innovation, and diverts resources from product improvement to compliance adaptation, a burden that falls especially hard on startups and small businesses entering the industry (Stout, 2025).
Compliance Costs: The financial burden of maintaining compliance across divergent legal frameworks is substantial. Multi-jurisdictional organizations often require dedicated compliance teams, frequent legal consultations, and customized workflows to satisfy disparate AI mandates, an increasingly expensive endeavor as more states adopt legislation (MacGregor & Ehrlich, 2025).
Legal Uncertainty: The lack of consistency in definitions (e.g., what constitutes “high-risk AI”) and obligations (e.g., transparency, algorithmic fairness) leads to legal ambiguity. This makes long-term planning challenging and increases the risk of unintentional noncompliance, particularly when laws conflict or impose contradictory standards (Rasmussen, 2025).
Public Awareness and Redress Limitations: Many individuals remain unaware of how AI impacts their rights or what legal avenues exist for recourse. Fragmented rules often mean that consumers in one state may enjoy protections not available in others. Additionally, redress mechanisms like access, correction, or opt-out rights are inconsistently enforced (Castro, 2025).
Vendor and Third-Party Risk Exposure: Businesses that rely on external vendors for AI solutions face compliance challenges when those vendors operate in multiple states. Discrepancies in legal and regulatory obligations can lead to liabilities if third parties fail to meet specific state-based legal requirements (MacGregor & Ehrlich, 2025).
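To illustrate the kind of bookkeeping this patchwork forces on multi-state deployments, the short Python sketch below encodes a handful of simplified state obligations in a registry and flags the gaps for a given product. The obligation labels are loose paraphrases of the laws summarized in Table 1, and every identifier is hypothetical; real compliance mappings are far more nuanced than a lookup table.

```python
# Hypothetical sketch: a simplified per-state obligations registry that a
# multi-state compliance team might maintain. Obligations are loose
# paraphrases of the laws in Table 1; all names are illustrative only.

STATE_OBLIGATIONS: dict[str, set[str]] = {
    "CO": {"risk_management_program", "high_risk_disclosure"},
    "CT": {"synthetic_media_labeling", "fairness_review"},
    "UT": {"chatbot_disclosure", "political_deepfake_restrictions"},
    "AR": {"public_ai_use_policy"},
}

def compliance_gaps(deployed_states: list[str],
                    implemented: set[str]) -> dict[str, set[str]]:
    """Return, per state, the obligations not yet covered by current controls."""
    gaps = {}
    for state in deployed_states:
        missing = STATE_OBLIGATIONS.get(state, set()) - implemented
        if missing:
            gaps[state] = missing
    return gaps

# Example: a product with only chatbot disclosure implemented, deployed in
# four states, still has uncovered obligations in every one of them.
print(compliance_gaps(["CO", "CT", "UT", "AR"], {"chatbot_disclosure"}))
```

The sketch makes the fragmentation problem visible: each new state law adds rows and labels to the registry, and because states define terms like "high-risk" differently, even the labels themselves cannot be safely reused across jurisdictions.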
The alphabet soup of U.S. state AI laws has made compliance a multidimensional challenge. Businesses must juggle not only costs and delays but also legal ambiguity, vendor liability, and consumer trust. For individuals, the inconsistency in rights and remedies heightens inequity. While proposed U.S. federal preemption offers one path to simplification, a more strategic and coordinated framework is needed, one that balances national consistency with local adaptability and may better support the ethical and effective governance of AI at both the federal and state levels.
Conclusion: Navigating the Fragmented AI Regulatory Landscape
As AI technologies rapidly evolve, the United States faces a complex and fragmented legal and regulatory landscape. In the absence of comprehensive federal legislation, individual states have enacted a patchwork of AI-related laws, contributing to legal and regulatory inconsistencies that challenge both innovation and consumer protection.
This decentralized approach has prompted significant debate. Proponents argue that U.S. state-level initiatives enable tailored responses to emerging AI risks, thereby fostering innovation and addressing specific community concerns. For instance, California's recent AI bills aim to regulate chatbots and establish a framework for AI systems, reflecting the state's proactive stance on technology governance (Lopez et al., 2025).
Conversely, critics highlight the challenges posed by this legal and regulatory mosaic. Businesses operating across multiple states must navigate varying compliance requirements, leading to increased costs and operational complexities. Moreover, consumers may experience unequal protections depending on their state of residence, raising concerns about fairness and equity.
In response to these challenges, the U.S. House of Representatives passed a provision within a broader budget reconciliation bill proposing a 10-year moratorium on state AI regulations (Lee et al., 2025). Supporters argue that this pause would prevent a fragmented regulatory environment and allow time to develop comprehensive federal legislation. However, the proposal has faced bipartisan opposition from state lawmakers who argue that it would hinder their ability to protect constituents from AI-related harm (Fox-Sowell, 2025).
The debate highlights the need for cohesive federal regulation that strikes a balance between innovation and ethical considerations. A unified national framework could provide clear guidelines for AI development and deployment, ensuring consistent protection for consumers while fostering technological advancement. Such legislation should incorporate input from diverse stakeholders, including state governments, industry experts, and civil society, to address the multifaceted challenges posed by AI (Anderson et al., 2025).
In conclusion, while U.S. state-level initiatives have played a crucial role in addressing immediate AI concerns, the growing complexity of the AI legal and regulatory landscape necessitates a coordinated federal response in the United States. Establishing comprehensive national AI governance and data privacy legislation will be pivotal in safeguarding consumer rights, promoting innovation, and maintaining the United States' leadership in the global AI arena.
Strategic Questions for Key Stakeholders in AI Governance
For Civil Society and Consumer Advocates
How do we raise public awareness about individuals' rights and risks in an AI-driven society?
Which communities are most vulnerable to algorithmic harm, and how can policy interventions be designed with those groups in mind?
What mechanisms can ensure that public input is meaningfully incorporated into AI governance at both the state and federal levels?
How do we ensure equitable access to AI benefits without deepening existing social, racial, or economic divides?
For Industry and Technology Developers
What internal mechanisms (e.g., AI ethics boards, red-teaming, algorithmic audits) should we implement to anticipate and mitigate regulatory risks?
How can we achieve transparency and explainability without compromising proprietary technologies?
What responsibilities do we have to ensure that our AI systems are not only compliant but also equitable and just?
How can industry collaborate better with regulators and civil society to co-develop responsive, practical AI standards?
Are our vendors and partners prepared to comply with state-specific AI laws, and what contractual safeguards are necessary?
How can supply chain due diligence extend to AI model development and data sourcing?
For Legal and Compliance Professionals
How do we tailor traditional compliance frameworks to address the dynamic, probabilistic nature of AI systems?
What risk-based governance models are most effective for navigating overlapping or conflicting state and federal AI laws?
Should we advocate sector-specific AI rules or pursue a cross-sectoral framework aligned with global standards?
What legal liabilities emerge from AI-generated content, decisions, or interactions, and how can they be preemptively managed?
For Policymakers and Regulators
How can we design federal AI legislation that balances national consistency with flexibility to address localized needs and risks?
What minimum rights and protections should be guaranteed to all individuals, regardless of their state of residence, in the age of AI?
How do we ensure meaningful oversight and accountability in public sector AI use, particularly in areas like law enforcement, healthcare, and education?
What lessons can we learn from state-level experimentation that could inform scalable national policy?
How should AI regulation address cross-border data flows and transnational partnerships like the U.S.-UAE AI Campus initiative?
How can regulatory sandboxing support innovation without compromising public safety?
What role should state attorneys general play in enforcing AI regulations?
Appendix I: Glossary of Key Terms in AI Governance and Data Privacy
The evolving field of AI governance encompasses a broad range of legal, technical, and ethical considerations. The following glossary defines key terms commonly used in discussions surrounding regulatory, compliance, and policy aspects of AI systems and data privacy.
| Term | Definition |
| --- | --- |
| Accountability | Legal, ethical, and operational responsibility for the impacts, risks, and outcomes of AI systems. |
| Algorithmic Bias | Systematic and repeatable errors in AI systems that produce unfair or discriminatory outcomes. |
| Artificial Intelligence (AI) | Systems designed to perform tasks that typically require human intelligence, such as learning and reasoning. |
| AI Literacy | The knowledge and skills necessary to understand, critically evaluate, and interact with AI systems. AI literacy empowers individuals to recognize how AI affects their rights, assess the transparency and fairness of algorithms, and make informed choices about AI use in personal, professional, or civic contexts. It is increasingly seen as a foundational digital competency in democratic and data-driven societies. |
| Automated Decision-Making (ADM) | AI systems making decisions with little to no human input, often in contexts such as hiring, lending, or policing. |
| Consent | Informed, voluntary agreement by individuals to the collection or use of their personal data. |
| Consumer Profiling | The analysis and categorization of individuals based on behavior or traits for marketing or risk assessment. |
| Data Minimization | The principle of collecting only the data necessary for a specific purpose. |
| Data Portability | The right to transfer personal data between services in a structured, machine-readable format. |
| Data Protection Impact Assessment (DPIA) | A tool to identify and mitigate potential harms of AI to privacy and rights before deployment. |
| Deepfakes | AI-generated media, such as images, videos, or voices, used to impersonate real individuals, often deceptively. |
| Discriminatory Impact | Outcomes from AI systems that disproportionately harm protected groups based on race, gender, or other traits. |
| Disclosure Requirement | The obligation to inform users when AI is used in interactions or decision-making. |
| Explainability | The ability of an AI system to provide understandable reasoning behind its decisions or predictions. |
| Fairness | The principle that AI should treat individuals and groups equitably, without systemic bias. |
| Generative AI | AI capable of producing new content, such as text, audio, or images, using training data and prompts. |
| High-Risk AI System | AI applications that significantly affect rights, access to services, or personal safety. |
| Model Ownership | Legal or ethical claims over the rights to an AI model or the data used to train it. |
| Opt-Out Rights | User rights to refuse automated processing, profiling, or inclusion in certain AI-related activities. |
| Preemption | A legal doctrine under which federal laws override or supersede conflicting state legislation. |
| Profiling | The inference of personal traits or behaviors by AI from collected data, often without user awareness. |
| Redress Mechanism | Processes that allow individuals to challenge or correct harmful AI decisions. |
| Risk-Based Approach | A regulatory method that adjusts compliance obligations based on an AI system’s risk level. |
| Sensitive Data | Data requiring heightened protection, such as biometric, health, religious, or political information. |
| Synthetic Media | Media created or modified by AI, including avatars, voice clones, or altered videos and news. |
| Transparency | The principle that users should be informed about the use of AI, system functionality, and data processing. |
| Vendor Risk | Liability exposure stemming from third-party AI providers that fail to meet legal or ethical standards. |
References
1. Anderson, H., Comstock, E., & Hanson, E. (2025, March 31). AI watch: Global regulatory tracker – United States. White & Case. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states
2. Bajwa, A. (2025, June 5). Anthropic CEO says proposed 10-year ban on state AI regulation “too blunt” in NYT op-ed. Reuters. https://www.reuters.com/business/retail-consumer/anthropic-ceo-says-proposed-10-year-ban-state-ai-regulation-too-blunt-nyt-op-ed-2025-06-05/
3. Bonta, R. (2025, January 13). Attorney General Bonta issues legal advisories on the application of California law to AI. State of California Department of Justice. https://oag.ca.gov/news/press-releases/attorney-general-bonta-issues-legal-advisories-application-california-law-ai
4. Bunn, D., Muresianu, A., & McBride, W. (2025, May 23). The good, bad, and the ugly in the One, Big, Beautiful Bill. Tax Foundation. https://taxfoundation.org/blog/one-big-beautiful-bill-pros-cons/
5. Castro, D. (2025, May 30). Fragmented AI laws will slow federal IT modernization in the US. Information Technology & Innovation Foundation. https://itif.org/publications/2025/05/30/fragmented-ai-laws-will-slow-federal-it-modernization-in-the-us/
6. Federal Trade Commission. (2025, January 3). AI and the risk of consumer harm. https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2025/01/ai-risk-consumer-harm
7. Fox-Sowell, S. (2025, June 3). State lawmakers push back on federal proposal to limit AI regulation. StateScoop. https://statescoop.com/state-lawmakers-push-back-federal-proposal-limit-ai-regulation/
8. Heinrich, M. (2023, July 27). S.2714 – CREATE AI Act of 2023. Congress.gov. https://www.congress.gov/bill/118th-congress/senate-bill/2714
9. Killion, V. L. (2025, May 20). The TAKE IT DOWN Act: A federal law prohibiting the nonconsensual publication of intimate images. Congress.gov. https://www.congress.gov/crs-product/LSB11314
10. Lee, A. R., Loring, J. M., Ryan, G. H., & Walker, J. (2025, May 28). US House of Representatives advance unprecedented 10-year moratorium on state AI laws. The National Law Review. https://natlawreview.com/article/us-house-representatives-advance-unprecedented-10-year-moratorium-state-ai-laws
11. Lopez, N., Keatts, A., & Curi, M. (2025, June 3). California AI bills advance as Congress considers state-level regulation ban. Axios San Francisco. https://www.axios.com/local/san-francisco/2025/06/03/california-ai-regulation-senate-chatbots-rights
12. MacGregor, M., & Ehrlich, K. (2025, March 13). Request for information on the development of an artificial intelligence (AI) action plan. SIFMA Asset Management Group. https://www.sifma.org/wp-content/uploads/2025/03/SIFMA-AI-Response-National-Science-Foundation-March-13-2025-final.pdf
13. National Institute of Standards and Technology. (2023, January 26). AI risk management framework. https://www.nist.gov/itl/ai-risk-management-framework
14. Nieto-Munoz, S. (2025, April 2). Governor Murphy signs bill criminalizing deepfakes. New Jersey Monitor. https://newjerseymonitor.com/briefs/governor-murphy-signs-bill-criminalizing-deepfakes/
15. Parker, K. D., & Manley, C. J. (2025, May 1). New Jersey’s attorney general and division on civil rights starts 2025 with guidance on AI use in hiring. K&L Gates. https://www.klgates.com/New-Jerseys-Attorney-General-and-Division-on-Civil-Rights-Starts-2025-With-Guidance-on-AI-Use-in-Hiring-5-1-2025
16. Paxton, K. (2024a, December 12). Attorney General Ken Paxton launches investigations into Character.AI, Reddit, Instagram, Discord, and other companies over children’s privacy and safety practices as Texas leads the nation in data privacy enforcement. Texas Office of the Attorney General. https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-launches-investigations-characterai-reddit-instagram-discord-and-other
17. Paxton, K. (2024b, September 18). Attorney General Ken Paxton reaches settlement in first-of-its-kind healthcare generative AI investigation. Texas Office of the Attorney General. https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-reaches-settlement-first-its-kind-healthcare-generative-ai-investigation
18. Rasmussen, S. (2025, May 29). The rise of AI regulation across the United States: A complex patchwork of compliance challenges. GRC Report. https://www.grcreport.com/post/the-rise-of-ai-regulation-across-the-united-states-a-complex-patchwork-of-compliance-challenges
19. Rosenblum, E. F. (2024, December 24). What you should know about how Oregon’s laws may affect your company’s use of artificial intelligence. Oregon Department of Justice. https://www.doj.state.or.us/wp-content/uploads/2024/12/AI-Guidance-12-24-24.pdf
20. Smyser, C., Hudson, E., Stewart, J., & Chavez, H. (2025, May 22). Artificial intelligence 2025. Chambers and Partners. https://practiceguides.chambers.com/practice-guides/artificial-intelligence-2025/usa-texas/trends-and-developments
21. Stauss, D. (2025, May 6). CPPA announces new CCPA enforcement action. Husch Blackwell. https://www.bytebacklaw.com/2025/05/cppa-announces-new-ccpa-enforcement-action/
22. Stout, K. (2025, May 30). Federal preemption and AI regulation: A law and economics case for strategic forbearance. Washington Legal Foundation. https://www.wlf.org/2025/05/30/wlf-legal-pulse/federal-preemption-and-ai-regulation-a-law-and-economics-case-for-strategic-forbearance/
23. The White House. (2022). Blueprint for an AI Bill of Rights. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/
24. The White House. (2025a, May 19). ICYMI: President Trump signs TAKE IT DOWN act into law. https://www.whitehouse.gov/articles/2025/05/icymi-president-trump-signs-take-it-down-act-into-law/
25. The White House. (2025b, January 23). Removing barriers to American leadership in artificial intelligence. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
26. U.S. Department of Defense. (2020). Responsible AI. Chief Digital and Artificial Intelligence Office. https://www.ai.mil/Initiatives/Responsible-AI/
27. US Mission UAE. (2025, May 17). UAE and US presidents attend the unveiling of phase 1 of new 5GW AI campus in Abu Dhabi. U.S. Embassy & Consulate in the United Arab Emirates. https://ae.usembassy.gov/uae-and-us-presidents-attend-the-unveiling-of-phase-1-of-new-5gw-ai-campus-in-abu-dhabi/
28. Zafar, S. (2024, April 16). AG Campbell issues advisory providing guidance on how state consumer protection and other laws apply to artificial intelligence. Mass.gov. https://www.mass.gov/news/ag-campbell-issues-advisory-providing-guidance-on-how-state-consumer-protection-and-other-laws-apply-to-artificial-intelligence