
Silent Shapers: The Role of United States Local and State Governments in Artificial Intelligence Governance


📘 Executive Summary

AI is already reshaping how public services are delivered and how government decisions are made in communities across the U.S. As adoption accelerates, the absence of a national governance framework has left critical oversight gaps. In this context, U.S. state and local governments are taking the lead, establishing legal and regulatory standards, procurement requirements, and safeguards for the use of AI in the public sector.


U.S. states such as California, Colorado, Texas, Utah, and Montana have enacted legislation addressing algorithmic transparency, high-risk systems, and public accountability. These include the California Artificial Intelligence Transparency Act (Ropple et al., 2024), the Colorado Artificial Intelligence Act (Rice et al., 2024), the Texas Responsible Artificial Intelligence Governance Act (Nahra et al., 2025), and the Utah Artificial Intelligence Policy Act (State of Utah Legislature, 2024). Cities are also acting. New York City’s Local Law 144 (NYC Consumer and Worker Protection, 2021) requires bias audits for automated hiring tools, while Grove City, Ohio, has adopted an AI local governance integration policy (Edinger, 2024). These measures aim to ensure that AI deployed in benefits administration, education, employment, and policing serves the public interest.


However, the quality and consistency of implementation vary widely. Some jurisdictions have adopted enforceable rules with precise oversight mechanisms. Others remain exposed to untested systems and vendor-driven deployments, lacking the institutional capacity to govern complex technologies. The result is a fragmented policy landscape, where access to meaningful protections depends more on geography than principle.

This article examines how local and state governments are responding to the growing need for AI oversight. It highlights emerging legislation, identifies persistent risks, and proposes strategies to support effective, coordinated, and accountable governance. Governing the use of AI is no longer a matter of optional policy. It is a foundational challenge for democratic integrity and public trust in the digital age.


💡 Key Insights

AI is being adopted faster than it is being governed. This imbalance is particularly concerning in the public sector, where systems deliver services and make decisions that affect rights and outcomes. The following insights explain what is happening and why it matters, and they underscore the need for local and state leaders to act with clarity, purpose, and urgency.

  1. AI Systems Increasingly Shape Public Services with Little Transparency: These tools make or influence decisions in policing, education, and public benefits, yet oversight mechanisms are often nonexistent.

  2. Federal Absence Has Forced Subnational Governments into the Lead: Without national coordination, U.S. states and cities are developing their own rules, often enacted without shared guidance, funding, or policy support.

  3. Gaps in Capacity and Legal Clarity are Limiting Local Governance: Many jurisdictions lack dedicated staff, technical expertise, or legal infrastructure to manage AI responsibly.

  4. Rules Vary Dramatically Across the Country: U.S. state and local jurisdictional legislation is uneven, and many localities lack enforceable standards, leaving residents vulnerable to inconsistent protections.

  5. Vendor-Driven Deployments are Outpacing Public Oversight: Companies often provide technology, frameworks, and data without implementing accountability safeguards, leaving the public interest at risk.


These insights highlight a central challenge: AI systems are increasingly influencing government functions at all levels, yet public institutions are still adapting. The following section defines core terms to help policymakers and readers alike build a shared understanding of this evolving landscape.


📥 Introduction

AI is no longer an experimental technology in government. It is rapidly becoming an integral part of the operational infrastructure of cities, counties, and states across the U.S. Public agencies now utilize automated systems to allocate police patrols, process benefits, analyze student performance, and manage traffic and emergency responses (OECD, 2025). These tools are reshaping how services are delivered and influencing government decision-making.


Despite this growth, governance remains uneven and underdeveloped. Many state and local agencies deploy AI systems without meaningful oversight, transparency requirements, or accountability mechanisms (Dwyer & Anex-Ries, 2025). Many local governments rely on standard procurement contracts to acquire AI systems, often adopting vendor-supplied terms with little modification. This shifts governance from public law to private agreements, limiting transparency and flexibility. Recent research demonstrates that this contract-based approach influences how cities define fairness, risk, and accountability in AI deployment, often without formal legislation or community input (Johnson et al., 2025).


Local governments are no longer passive adopters of technology. They are regulators, institutional buyers, and stewards of public data. Their decisions shape how AI operates in education, healthcare, public safety, and social welfare. However, many municipalities still lack the necessary staff, legal frameworks, and technical capacity to effectively evaluate or monitor these systems (IAPP, 2025). This lack of institutional maturity increases the likelihood of bias, discrimination, and the misuse of automated systems.


At the U.S. federal level, multiple executive orders and bills were introduced in 2025, including U.S. Presidential Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence”; the proposed “Artificial Intelligence and Critical Technology Workforce Framework Act”; and “Winning the Race: America’s AI Action Plan” (Congress.gov, 2025; Executive Office of the President of the United States, 2025; Federal Register, 2025). These initiatives signal recognition of the challenge, but no comprehensive national framework has emerged in the U.S. In this absence, U.S. state and local governments are taking the lead, establishing their own governance mechanisms, ethics guidelines, and procurement requirements. States including California, Colorado, Texas, and Utah have enacted laws to regulate high-risk AI systems. Cities such as New York and Boston have adopted policies requiring algorithmic transparency and bias audits (IAPP, 2025).


This article examines how subnational governments are emerging as central actors in the governance of AI. It reviews their successes, highlights their constraints, and proposes strategies to enhance oversight and coordination. To build a shared understanding of this emerging governance landscape, the following section defines key terms that underpin responsible and accountable artificial intelligence policy.


📚 Key Terms

Understanding AI governance begins with shared language. Policymakers, researchers, and public administrators require consistent terminology to evaluate AI risks, draft legislation, engage with vendors, and inform the public. The following definitions provide essential context for interpreting the governance challenges discussed in this article.

  1. Algorithmic Impact Assessment: A structured evaluation of how an AI system may affect individuals or groups, especially in terms of fairness, accuracy, bias, and legal compliance. Often used to assess public sector deployments before or after launch (see the sketch following this list).

  2. Artificial Intelligence Governance: The combination of laws, institutional practices, and oversight mechanisms designed to ensure that AI systems are used responsibly, transparently, and in alignment with public values.

  3. Data Controller: An entity that determines how and why data is collected and used. In the public sector, this is often a city agency, school district, or state department deploying AI tools.

  4. Data Processor: An organization or company that handles data on behalf of a controller. Many AI vendors fall into this category, raising questions about contractual terms and liability.

  5. Local Government: A city, county, or regional administrative body responsible for services such as policing, education, public health, and housing. Many services now rely on or interact with AI tools.
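
To ground these definitions, the minimal Python sketch below shows one way an agency might track the status of an algorithmic impact assessment before deployment. It is purely illustrative: no jurisdiction mandates this structure, and the dimension names simply mirror the definition above.

```python
# Hypothetical pre-deployment check mirroring the dimensions named in
# the Algorithmic Impact Assessment definition above. Purely illustrative.

ASSESSMENT_DIMENSIONS = ("fairness", "accuracy", "bias", "legal_compliance")

def open_items(findings: dict) -> list:
    """Return the assessment dimensions not yet evaluated and cleared.

    `findings` maps each dimension to True once the deploying agency
    (the data controller) has reviewed it, typically using documentation
    supplied by the vendor (the data processor).
    """
    return [d for d in ASSESSMENT_DIMENSIONS if not findings.get(d, False)]

# Example: bias review still outstanding, legal review not yet started.
findings = {"fairness": True, "accuracy": True, "bias": False}
blocking = open_items(findings)
print("Cleared for deployment" if not blocking
      else f"Open items before launch: {', '.join(blocking)}")
```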


These terms are more than definitions. They shape how decisions are made, how systems are evaluated, and how accountability is distributed across actors in the AI ecosystem. With this foundation in place, we turn next to the structural forces shaping AI policy at the local level.


⚖️ Legal Applicability Versus Practical Enforcement

Although U.S. federal AI legislation remains incomplete, many state and local governments have taken the lead in crafting their own governance measures. These efforts vary significantly in scope, enforcement, and focus. Table 1 provides a sample of recently enacted laws and policies across several jurisdictions, highlighting the diversity of approaches and the growing momentum of subnational AI governance in the U.S.


Table 1: Selected U.S. State and Local AI Governance Measures

| Jurisdiction | Law or Policy Name | Year Enacted | Focus Areas |
| --- | --- | --- | --- |
| California | Artificial Intelligence Transparency Act | 2024 | AI transparency, content disclosures, consumer protection |
| Colorado | Artificial Intelligence Act | 2024 | High-risk AI regulation, deployer and developer obligations |
| Grove City, Ohio | Local AI Governance Integration Policy | 2024 | Ethical use guidance, operational guardrails for city AI |
| Montana | Algorithmic Accountability Framework | 2025 | Risk assessments, procurement standards, and transparency |
| New York City | Local Law 144 on Automated Employment Decision Tools (AEDTs) | 2021 | Bias audits, transparency in algorithmic hiring tools |
| Santa Cruz County, CA | Artificial Intelligence Appropriate Use Policy | 2023 | Internal AI use guidance, employee responsibilities |
| Texas | Responsible Artificial Intelligence Governance Act | 2025 | AI procurement, agency use, oversight coordination |
| Utah | Artificial Intelligence Policy Act | 2024 | Public sector AI, deployment requirements, and enforcement authority |

Sources: IAPP, 2025; WilmerHale, 2025; Jones Day, 2024; GovTech, 2023; County of Santa Cruz, 2023; NYC DCWP, 2021.


Earlier sections have detailed recent federal efforts, including U.S. presidential executive orders, draft legislation, and agency guidance. Yet the United States still lacks a binding national framework for governing AI. These initiatives signal the federal government’s awareness of the challenge, but they have not yet produced enforceable statutory standards that apply across jurisdictions or sectors (Congress.gov, 2025; Federal Register, 2025). Agency-specific actions from the Office of Management and Budget and other executive branch bodies have provided technical guidance; however, they lack the force of law, uniformity, and permanence (White House, 2025). In practice, this leaves U.S. state and local governments to interpret, implement, and often invent their own AI governance structures.


Many municipalities lack the technical staff, procurement experience, or legal capacity to govern AI systems robustly. Research indicates that procurement norms frequently leave cities reliant on vendor contract templates, limiting local flexibility and oversight (Johnson et al., 2025). Meanwhile, public sector information technology surveys indicate that many respondents cite AI skill shortages as a top barrier to deployment (Salesforce, 2024). Together, these constraints can lead to decisions driven by vendor convenience rather than public accountability or legal clarity.


Moreover, AI systems are being deployed in high-impact areas such as policing, benefits eligibility, and housing without essential algorithmic impact assessments, transparency audits, or community oversight (OECD, 2022). U.S. federal enforcement agencies have not yet developed consistent capacity to monitor these deployments at the subnational level. This governance and enforcement gap has produced a patchwork of protection and oversight. Federal harmonization has stalled in part due to political polarization, the pace of AI innovation, and ongoing debates over regulatory scope and institutional authority. Without it, state-by-state regulation will continue to prevail, reinforcing a dynamic in which cities and counties act as de facto regulators.


The uneven distribution of legal authority, resources, and expertise across jurisdictions results not only in fragmented governance but also in practical constraints that limit effective oversight and control. While these local responses reflect early leadership, they also expose the structural limits of decentralized governance. The following section examines how the autonomy built into the U.S. federal system contributes to fragmented oversight, inconsistent protections, and gaps in regulatory enforcement across jurisdictions.


🗺️ Fragmentation Across U.S. State and Local Jurisdictions

AI regulation across the United States remains deeply uneven. Because U.S. states and municipalities possess significant legal autonomy, governance has evolved in a decentralized and often inconsistent manner (Dwyer & Anex-Ries, 2025). Some states, such as California and Colorado, have enacted comprehensive frameworks to regulate high-risk AI systems and promote algorithmic transparency (IAPP, 2025). In contrast, many others have taken limited or no meaningful legislative action (Multistate.Ai, 2025).

At the municipal level, cities have diverged in their approaches to governing AI. San Francisco, for example, adopted an ordinance in 2019 banning the use of facial recognition technologies by city agencies, an early attempt to set enforceable boundaries for public sector AI use (San Francisco Police Department, 2019). The ban reflects a governance decision rooted in concerns about transparency, bias, and surveillance. Other cities, by contrast, have embraced similar technologies without meaningful oversight, highlighting the disparity in local approaches to risk, accountability, and civil protections.


Meanwhile, New York City implemented Local Law 144, which mandates independent bias audits and transparency requirements for automated employment decision tools (Nixon Peabody, 2023). However, in neighboring jurisdictions, including both states and cities, there may be no comparable policy in place. This patchwork of regulations has left residents subject to dramatically different levels of protection, depending on their geography. It also complicates compliance for vendors and weakens consistency in how fairness, accountability, and public interest are defined and enforced (OECD, 2025).
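
To make the audit requirement concrete, the sketch below computes the selection-rate and impact-ratio statistics at the heart of Local Law 144 bias audits, in which each category’s selection rate is compared against that of the most-selected category. The data, category labels, and function name are hypothetical, and a real audit follows the detailed methodology in the DCWP rules rather than this simplified illustration.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Selection rate and impact ratio per demographic category.

    `outcomes` is a list of (category, selected) pairs, where `selected`
    is True if the automated tool advanced the candidate. The impact
    ratio divides each category's selection rate by the highest
    category's selection rate, so the most-selected category scores 1.0.
    """
    totals, chosen = Counter(), Counter()
    for category, selected in outcomes:
        totals[category] += 1
        chosen[category] += int(selected)
    rates = {c: chosen[c] / totals[c] for c in totals}
    top = max(rates.values()) or 1.0  # avoid dividing by zero if nobody advanced
    return {c: (rate, rate / top) for c, rate in rates.items()}

# Hypothetical screening outcomes: (category, advanced by the tool?)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
for category, (rate, ratio) in impact_ratios(data).items():
    print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```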


🚧 Key Challenges

State and local governments face numerous challenges in effectively governing artificial intelligence. Some of these challenges are well-documented. Others are emerging and require further empirical study. These barriers hinder the ability of public institutions to anticipate risks, enforce accountability, and ensure the responsible deployment of AI. Without addressing them, even well-intentioned oversight efforts may fall short of protecting the public interest. Figure 1 illustrates some of the most common governance gaps faced by local governments in the U.S., including staffing limitations, the absence of formal policies, and weak audit infrastructure.

Figure 1: Gaps in AI Governance Capacity


Source: Adapted from Carnegie Mellon University (2025); Dwyer & Anex-Ries (2025); Johnson et al. (2025); Stone (2025).

The following capacity gaps underscore the need for targeted policy, institutional support, and investment in local AI governance infrastructure.

  1. Legal and Regulatory Ambiguity: In the absence of unified U.S. federal legislation or consistent state policy, local agencies often operate in a gray area. Some municipalities must interpret overlapping state and federal laws or face unclear liability when AI systems generate harmful outcomes. While no centralized dataset quantifies this ambiguity, qualitative studies and practice guides have emphasized the challenge of navigating legal gaps in AI oversight (Dwyer & Anex-Ries, 2025).

  2. Resource and Capacity Constraints: Municipalities often lack the funding or staff necessary to perform algorithmic risk assessments, audits, or public engagement. In interviews and case studies, local officials describe difficulty scaling oversight using existing governance structures (Carnegie Mellon University, 2025).

  3. Skills and Institutional Readiness Gaps: Effective governance of AI requires knowledge of data science, statistics, systems design, and algorithmic accountability. Most local governments lack staff trained in these areas, and few offer structured education or policy toolkits to bridge the gap (Carnegie Mellon University, 2025).

  4. Vendor Dependence and Procurement Limitations: Because most public agencies acquire AI systems through traditional procurement channels, they often rely on vendor-drafted contracts that limit transparency or oversight. A recent empirical study, based on interviews with U.S. local government officials, demonstrates how contract-based governance influences AI accountability and restricts flexibility (Johnson et al., 2025).


These governance challenges do more than strain capacity. They create conditions where real-world harms can emerge. The following section examines specific examples of how these risks have materialized in practice, particularly in high-impact sectors where AI is shaping decisions about public services and community outcomes.


⚠️ Overlooked Areas of Risk

As U.S. state and local governments expand their use of AI, these systems are increasingly influencing decisions in public services, including policing, transportation, education, and citizen engagement. While some jurisdictions have adopted policies for AI use, transparency guidelines, or ethical frameworks, many are still navigating these issues without consistent institutional structures. A 2025 report from the Center for Democracy and Technology finds that while some cities are developing AI governance models, others still lack formal processes for risk management, transparency, and accountability (Dwyer & Anex-Ries, 2025).


Procurement remains a critical inflection point in this governance landscape. Many cities acquire AI systems through legacy procurement processes that do not adequately address algorithmic risk, fairness requirements, or accountability obligations. As Johnson et al. (2025) demonstrate, local procurement officers frequently rely on preexisting contract templates and often lack the negotiation authority or institutional support needed to ensure effective oversight of public sector AI. This contract-based dynamic can shift decision-making power away from public institutions and toward vendors, resulting in inconsistent governance outcomes.


U.S. states and local governments are deploying AI technologies such as chatbots and generative AI to enhance service delivery and resident engagement. For example, Phoenix’s myPHX311 chatbot enables residents to report issues, pay bills, and access city services. The city developed the program in partnership with Arizona State University and Amazon Web Services (Peters, 2023). News reporting also confirms that Phoenix has approved generative AI use cases in public engagement, process automation, and infrastructure support (Koch, 2025).


Meanwhile, companies like App Maisters have marketed similar AI solutions to local governments, including multilingual bots and infrastructure analytics tools (App Maisters, 2025). While these applications can increase efficiency, they also raise important questions about oversight and accountability. Without policies to evaluate fairness, data handling, and transparency, municipal deployments may outpace the governance frameworks needed to ensure accountability.


In public safety, predictive analytics tools have been deployed to allocate police resources and assess the risk of criminal activity. Civil rights groups have criticized these tools for relying on biased historical data and reinforcing systemic inequities (Yale Law School, 2022; NAACP, 2022). Transparency into how such systems are audited, governed, or corrected when they produce unfair outcomes is often lacking at the municipal level. Even routine administrative tasks are affected: counties and state offices are piloting machine learning for intelligent document processing to extract structured data from medical examiner reports (Stone, 2025). These systems promise to reduce errors and speed up manual work, but many implementations lack public documentation of audit protocols or standards for transparency.
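
As a simplified illustration of what “extracting structured data” means here, the toy sketch below uses regular expressions as a stand-in for the trained models such systems actually employ. The report text and field names are hypothetical; the point is that the output is a structured record an auditor or reviewer could inspect.

```python
import re

# Toy stand-in for intelligent document processing: real deployments use
# trained ML models, but the goal is the same -- turn a free-text report
# into structured, auditable fields. Report text and fields are hypothetical.

REPORT = """Case No: 2025-0417
Date of Examination: 2025-03-02
Cause of Death: Pending toxicology"""

FIELDS = {
    "case_number": r"Case No:\s*(.+)",
    "exam_date": r"Date of Examination:\s*(.+)",
    "cause_of_death": r"Cause of Death:\s*(.+)",
}

record = {
    name: (match.group(1).strip() if (match := re.search(pattern, REPORT)) else None)
    for name, pattern in FIELDS.items()
}
print(record)  # a structured record an audit log or reviewer can inspect
```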


Some jurisdictions are formalizing their approach. Santa Cruz County, California, for example, adopted an Artificial Intelligence Appropriate Use Policy in 2023, establishing guidelines for how county employees may use AI tools in operations (County of Santa Cruz, 2023; York, 2023). Still, while some cities and counties are experimenting with AI governance frameworks, many do not yet have comprehensive oversight mechanisms in place and rely instead on general internal policies or publicly posted guidelines (Dwyer & Anex-Ries, 2025). These nascent examples illustrate a larger tension: without more robust and consistent governance, AI deployments, no matter how well-intended, may replicate bias, degrade transparency, or erode public trust.


Because local governments often make the earliest decisions about AI in public life, their role in risk identification and mitigation is increasingly critical. Addressing these vulnerabilities requires more than awareness. It demands coordinated, well-resourced governance strategies tailored to the local and state levels.


🔧 Toward Harmonized Oversight

As AI systems become more embedded in U.S. public governance, state and local leaders must strengthen oversight to ensure these technologies serve the public interest. While some jurisdictions have passed forward-looking legislation, most still lack consistent standards, operational guidance, or institutional capacity. Strengthening oversight and coordination requires understanding how responsibilities are allocated across levels of government and public institutions. Figure 2 illustrates the roles played by federal agencies, state governments, local jurisdictions, vendors, and civil society in the governance of AI. This multilevel landscape is complex, but recognizing each actor’s potential contribution is essential for building coherent and effective oversight.


Figure 2: Multilevel AI Governance Roles in the U.S. Context

Source Note: Adapted from Dwyer & Anex-Ries (2025); IAPP (2025); GovTech (2023); Stone (2025)

To support more effective, democratic governance of public sector AI, this article recommends five priority actions tailored to the state and local context:

  1. Create publicly accessible AI registries at the state or city level. These registries would list active AI systems in government use, their purposes, responsible agencies, and contact points for accountability and redress (a minimal registry-entry sketch follows this list).

  2. Develop state-supported governance toolkits for municipalities. These toolkits should include procurement guidelines, model ordinances, auditing protocols, and templates for engaging with vendors and communities.

  3. Establish intergovernmental coordination mechanisms, such as state-local AI oversight working groups. They can help promote knowledge sharing, policy alignment, and regional accountability practices.

  4. Incorporate public engagement and transparency into all stages of AI implementation. Communities should be consulted not only after deployment, but during policy design, vendor selection, and performance evaluation.

  5. Require algorithmic impact assessments for all high-impact systems used in public services. These assessments should evaluate risks related to bias, accuracy, civil rights, and transparency before procurement or deployment.
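
Recommendation 1 is the most concrete of these, so here is a minimal sketch of what a single registry entry might contain. Every field name, the example system, and the contact address are hypothetical; an actual registry’s contents would be defined by statute or ordinance.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIRegistryEntry:
    """One record in a hypothetical public AI registry (recommendation 1)."""
    system_name: str
    purpose: str
    responsible_agency: str       # the data controller, in Key Terms language
    vendor: Optional[str]         # the data processor, if externally supplied
    risk_level: str               # e.g., "high-impact" would trigger an impact assessment
    accountability_contact: str   # public point of contact for questions or redress

entry = AIRegistryEntry(
    system_name="Benefits Eligibility Screener",
    purpose="Pre-screen applications before caseworker review",
    responsible_agency="Department of Social Services",
    vendor="ExampleVendor Inc.",
    risk_level="high-impact",
    accountability_contact="ai-oversight@example.gov",
)
print(json.dumps(asdict(entry), indent=2))  # publishable as open data
```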


These recommendations acknowledge the complexity of AI governance while also emphasizing its urgency. Without coordinated action, fragmentation will deepen, and public trust will erode. By investing in capacity, legal clarity, and participatory governance, state and local governments can lead with integrity and transparency. They can also help shape the future of ethical AI in the public sphere. The following takeaways consolidate the article’s findings, reinforcing why state and local governments must remain central actors in shaping public sector AI governance.


📌 Key Takeaways

U.S. state and local governments are now at the center of how AI is governed. As AI systems become embedded in public services, the following takeaways summarize the most urgent findings from this analysis. These points underscore why subnational governance matters and what is at stake if it is absent.

  1. Accountability must be a Priority: Without clear oversight, AI systems can entrench bias, automate discrimination, or undermine public confidence in government decisions.

  2. Coordination is Essential to Reduce Fragmentation: U.S. states, counties, and municipalities must align policies where possible to avoid a patchwork of conflicting or incomplete protections.

  3. Governance Leadership is Already Local: In the absence of U.S. federal laws and regulations focused on AI, U.S. cities and states are defining what responsible AI use looks like in real-world public services.

  4. Public Trust Depends on Inclusion and Transparency: Residents must understand how AI is used in their communities and have access to mechanisms for questioning or appealing its decisions.

  5. Urgent Action is Needed: AI is no longer a future concern. Oversight frameworks must evolve now to meet the demands of rapid deployment and public accountability.


These lessons point to a broader imperative: redefining the role of subnational institutions not only in policy adoption, but also in setting national expectations for the ethical use of AI.


🔚 Conclusion

AI is no longer reshaping government in theory. It is quietly, rapidly, and often invisibly reshaping government in practice. In city halls, state agencies, and public institutions across the United States, AI systems are making decisions that affect freedom, opportunity, and trust. Nevertheless, the rules that govern those systems remain inconsistent, underdeveloped, and in many cases, absent.


U.S. state and local governments have not waited. In the absence of federal direction, they are building the first generation of public sector AI governance, one procurement policy, one bias audit, one ordinance at a time. These efforts matter. They are defining the boundaries of what is acceptable, accountable, and equitable in the use of AI for public purposes.


However, these efforts cannot remain isolated. Without stronger coordination, clearer standards, and sustained investment, the future of AI governance will remain as fragmented as its present state. U.S. cities and states must be supported not only in what they are doing, but in what they can build together.


AI is transforming the relationship between the government at various levels and the governed in the U.S. Whether it does so in ways that protect dignity, preserve rights, and reinforce democracy will depend on how the U.S. chooses to govern now.


❓ Key Questions for Stakeholders

As AI becomes a core feature of public decision-making, critical questions arise for those who shape, fund, implement, and evaluate its use. The following questions reflect the diverse responsibilities of AI governance actors and are designed to encourage reflection, coordination, and accountability.

Civil Society, Advocates, and Researchers

  1. How can independent organizations ensure that AI deployment in public agencies aligns with community values and democratic goals?

  2. In what ways can journalism, academic research, and legal advocacy uncover risks and inform stronger policy at the local and state level?

  3. What forms of public oversight (e.g., advisory committees, participatory audits, civic engagement tools, etc.) should be built into the AI governance process?

Local Government Leaders

  1. How can procurement, contracting, and implementation processes be improved to prioritize ethical AI practices?

  2. In what ways can local officials incorporate public input and community oversight into AI-related decisions?

  3. What AI systems are currently used in your jurisdiction, and how are their risks, impacts, and legal compliance evaluated?

State Policymakers

  1. How can states help build technical capacity and legal clarity without undermining local innovation or discretion?

  2. Should states establish oversight entities or frameworks to monitor AI use in public agencies and resolve disputes?

  3. What policy tools can states provide to support consistent, enforceable AI governance across municipalities?

Technology Vendors and Developers

  1. How are AI products tested for fairness, accuracy, and transparency before being deployed in public systems?

  2. How can companies collaborate with governments to build trust, support public goals, and protect civil rights?

  3. What responsibilities do vendors have to ensure their technologies comply with local and state laws and values?

U.S. Federal Agencies and The U.S. Congress

  1. How should federal funding be structured to support subnational AI governance without overstepping local authority?

  2. What national standards are necessary to ensure that AI use in government is fair, transparent, and accountable across all jurisdictions?

  3. What role should federal agencies play in building technical capacity, issuing guidance, or overseeing vendor practices?


📎 References

1.    App Maisters. (2025, July 3). Local government’s use of AI and its role. https://gov.appmaisters.com/local-government-use-ai-and-its-role/

2.    Carnegie Mellon University. (2025, April). Procuring public sector AI: Guidance for local governments. University of Pittsburgh Institute for Cyber Law, Policy, and Security. https://www.cyber.pitt.edu/sites/default/files/AI/Procuring%20Public-Sector%20AI.pdf

3.    Congress.gov. (2025). S.1290: Artificial Intelligence and Critical Technology Workforce Framework Act of 2025. https://www.congress.gov/bill/119th-congress/senate-bill/1290

4.    County of Santa Cruz. (2023, September 19). Board of supervisors votes to adopt AI policy. https://www.santacruzcountyca.gov/portals/0/county/CAO/press%20releases/2023/AIPolicy.09192023.pdf

5.    Dwyer, M., & Anex-Ries, Q. (2025, April 15). AI in local government: How counties & cities are advancing AI governance. Center for Democracy and Technology. https://cdt.org/insights/ai-in-local-government-how-counties-cities-are-advancing-ai-governance

6.    Edinger, J. (2024, July 23). Small city, big potential: How one Ohio city is tackling AI. Government Technology. https://www.govtech.com/artificial-intelligence/small-city-big-potential-how-one-ohio-city-is-tackling-ai

7.    Executive Office of the President of the United States. (2025, July). Winning the race: America’s AI Action Plan. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

8.    Federal Register. (2025, January 23). Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence. https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence

9.    International Association of Privacy Professionals (IAPP). (2025, October 8). U.S. State Artificial Intelligence Governance Legislation Tracker. https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker

10. Johnson, N., Silva, E., Leon, H., Eslami, M., Schwanke, B., Dotan, R., & Heidari, H. (2025, May 13). Legacy procurement practices shape how U.S. cities govern AI: Understanding government employees’ practices, challenges, and needs. arXiv. https://doi.org/10.48550/arXiv.2411.04994

11. Koch, M. (2025, May 21). How AI already impacts daily lives of Arizonans. AZFamily. https://www.azfamily.com/2025/05/21/how-ai-already-impacts-daily-lives-arizonans/

12. Nahra, K.J., Evers, A., Jessani, A.A., & Ojuola, T. (2025, July 21). Texas enacts new AI law. WilmerHale. https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250721-texas-enacts-new-ai-law

13. NAACP. (2022). Artificial intelligence in predictive policing issue brief. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief

14. NYC Consumer and Worker Protection. (2021). Automated employment decision tools (AEDT). https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page

15. Organisation for Economic Co-operation and Development (OECD). (2025, September 18). Governing with artificial intelligence: The state of play and way forward in core government functions. https://www.oecd.org/en/publications/governing-with-artificial-intelligence_795de142-en.html

16. Organisation for Economic Co-operation and Development (OECD). (2022). The OECD programme on smart cities and inclusive growth. https://www.oecd.org/cfe/cities/smart-cities.htm

17. Peters, G. (2023, June 26). Easily scalable chatbots facilitate fast citizen service. StateTech. https://statetechmagazine.com/article/2023/06/easily-scalable-chatbots-facilitate-fast-citizen-service

18. Rice, T., Lamont, K., & Francis, J. (2024, July). The Colorado Artificial Intelligence Act. Future of Privacy Forum Legislation Working Group. https://leg.colorado.gov/sites/default/files/images/fpf_legislation_policy_brief_the_colorado_ai_act_final.pdf

19. Ropple, L.M., Kukkonen III, C.A., Meyers, M.A., Paez, M.F., & Tait, E.J. (2024, October). California enacts AI transparency law requiring disclosures for AI content. Jones Day. https://www.jonesday.com/en/insights/2024/10/california-enacts-ai-transparency-law-requiring-disclosures-for-ai-content

20. San Francisco Police Department. (2019). 19B surveillance technology policies. https://www.sanfranciscopolice.org/your-sfpd/policies/19b-surveillance-technology-policies

21. State of Utah Legislature. (2024, May 1). Artificial Intelligence Policy Act. https://le.utah.gov/xcode/Title13/Chapter72/C13-72_2024050120240501.pdf

22. Stevens, C., & Holmes, J. (2023, November 13). Complying with New York City’s bias audit law. Nixon Peabody. https://www.nixonpeabody.com/insights/alerts/2023/11/13/complying-with-new-york-city-bias-audit-law

23. Stone, A. (2025, April 21). State and local agencies deploy artificial intelligence for document processing. StateTech. https://statetechmagazine.com/article/2025/04/state-and-local-agencies-deploy-artificial-intelligence-document-processing

24. Yale Law School. (2022). Algorithmic accountability: The need for a new approach to transparency and accountability when government functions are performed by algorithms. Media Freedom & Information Access Clinic. https://law.yale.edu/sites/default/files/area/center/mfia/document/algorithmic_accountability_report.pdf

25. York, J.A. (2023, September 20). Santa Cruz County, California, formalizes AI policy. Government Technology. https://www.govtech.com/policy/santa-cruz-county-calif-formalizes-ai-policy

 
 
 
