Local Governments Consider Policies to Guide AI in the County/Municipal Workplace

Local governments — as employers and policymakers — are considering the role of artificial intelligence and how it might shape the delivery of public services and the work of county/municipal staff.

Artificial intelligence (“AI”) is rapidly expanding into virtually all aspects of 21st-century living, including goods production, transportation, education, and even constituent and public services. Counties and municipalities around the country are formulating AI policies to help deliver public services, respond to resident requests, and streamline work processes for local government staff.

The National Association of Counties (NACo) has even formed an Artificial Intelligence Exploratory Committee to examine emerging policies, practices, and potential applications and consequences of AI. “We are at a unique moment in terms of artificial intelligence,” NACo Associate Legislative Director Seamus Dowdall said. With all the attention AI is getting, the committee plans “to explore the emerging policies, practices, potential applications, rules and consequences of artificial intelligence through [the] lens of county governments.”

As counties, “we are community conveners in some ways [and] data aggregators in other ways—there’s a lot of different ways we’re looking at [AI],” Dowdall said. “Using AI as a tool is one component of this conversation, but there’s really a much broader approach that we want to take to think holistically about how AI is progressing.”

With AI’s opportunities, however, also come risks. Route Fifty recently described the AI dilemma many local officials are facing:

Artificial intelligence is a valuable tool for agencies automating mundane tasks and conducting data analytics, but recent calls for its regulation emphasize the technology’s potential privacy and security risks when not used responsibly.

Local governments create policies to regulate AI in the workplace

While NACo and the federal government consider the future of AI, several local governments have already adopted policies to guide its usage in local government services and workplaces. Some examples include:

  • King County, Washington
    • King County, Washington, is considering using AI to respond to constituent requests and relieve some of the burden on county staff: “The technology could potentially be used to better field residents’ online queries, provide more personalized experiences, and reduce administrative burdens on staff.” County CIO Megan Clarke offered an example: “Somebody could start typing, ‘I am building a garage, and I am not sure what I am supposed to do.’ Generative AI can give you a more detailed response of, ‘Oh, you need a permit, and you need to go here to get it, and here’s how much it will cost.’” (A minimal sketch of how such a query service might be wired up appears after this list.)
  • Allegheny County, Pennsylvania
    • The Allegheny County Department of Human Services (DHS) has used the Allegheny Family Screening Tool (AFST) to enhance the “child welfare call screening decision-making process with the singular goal of improving child safety.”
    • The AFST is a predictive risk modeling tool that rapidly integrates and analyzes hundreds of data elements, housed in the DHS Data Warehouse, for each person involved in an allegation of child maltreatment, and creates a synthesized visualization of the information.
    • The result is a “Family Screening Score” that predicts the long-term likelihood of future involvement in child welfare. Combining the score with other traditionally gathered information allows a better prediction of the long-term likelihood that the child will need to be removed from the home. (A simplified sketch of this style of risk-scoring model also follows the list below.)
  • Seattle 
    • The Seattle IT Department recently issued an interim policy for city staff who wish to use generative AI like ChatGPT to streamline workflows or improve service delivery.
    • Under the policy, the city’s IT Department must approve staff members’ access to or acquisition of new generative AI products.
    • Employees must also validate the information generated by AI systems, which may produce false or misleading results. Officials said that city staff should review AI outputs for accuracy, proper attribution, and biased or offensive material.
    • Seattle city employees are also prohibited from feeding generative AI systems “sensitive or confidential data, including personally identifiable data about members of the public.” (A toy example of how such prompts might be screened automatically appears after this list as well.)
    • Seattle’s interim policy lasts through October, after which it must be extended or replaced.
  • Boston
    • Boston is actively encouraging its staff to test out generative AI tools while taking precautions, releasing interim guidelines in May.
    • Under the guidelines, city staff are to fact-check and review all content generated by AI, especially if it will be used in public communication or decision-making;
    • disclose that AI was used to generate the content (the guidance reads, “You should also include the version and type of model you used (e.g., Open AI’s GPT 3.5 vs Google’s Bard). You should include a reference as a footer to the fact that you used generative AI.”); and
    • refrain from sharing sensitive or private information in prompts.
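
To make the King County example above concrete, here is a minimal sketch of how a resident’s free-text question might be routed to a generative model. This is an illustration built on assumptions, not King County’s actual system: the OpenAI client library, the model name, and the system prompt are all placeholder choices.

```python
# Illustrative sketch only: routing a resident's question to a generative
# model, in the spirit of the King County example. Model name and system
# prompt are hypothetical, not details of any county's deployment.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a county services assistant. Answer residents' questions about "
    "permits, fees, and office locations. If you are unsure, direct the "
    "resident to county staff instead of guessing."
)

def answer_resident_query(question: str) -> str:
    """Return a draft answer to a resident's free-text question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_resident_query(
    "I am building a garage, and I am not sure what I am supposed to do."
))
```

Even in a sketch like this, the caveats the article raises still apply: a draft answer would need staff review for accuracy before reaching residents, as Seattle’s and Boston’s guidelines require.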
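
The Allegheny Family Screening Tool itself is proprietary, but the general technique it uses, predictive risk modeling over integrated administrative records, can be sketched briefly. Everything below (the features, the synthetic training data, and the mapping of probabilities onto a 1-to-20 score) is hypothetical and purely illustrative; it is not the county’s model.

```python
# Illustrative sketch only: a predictive risk model in the spirit of the
# Allegheny Family Screening Tool. Features, data, and the score scale
# are hypothetical, not the county's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for integrated "data warehouse" records: each row is a referral,
# each column a feature (e.g., prior referrals, prior placements, ...).
rng = np.random.default_rng(0)
X_train = rng.random((500, 4))                                # synthetic history
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)   # synthetic outcomes

model = LogisticRegression().fit(X_train, y_train)

def screening_score(features: np.ndarray) -> int:
    """Map the model's predicted probability onto a 1-to-20 screening score."""
    prob = model.predict_proba(features.reshape(1, -1))[0, 1]
    return int(np.ceil(prob * 20)) or 1  # floor the score at 1

new_referral = np.array([0.8, 0.6, 0.2, 0.5])
print(f"Family Screening Score: {screening_score(new_referral)} / 20")
```

As the county’s own description notes, a score like this supplements rather than replaces traditionally gathered information and call-screener judgment.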
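
Seattle’s and Boston’s prohibitions on putting sensitive data into prompts raise the question of enforcement. One simple, purely illustrative mechanism an IT department might layer on top of training and review is an automated pre-screen; the patterns below are hypothetical and far from exhaustive.

```python
# Illustrative sketch only: a pre-screen that blocks prompts containing
# obvious personally identifiable information before they reach a
# generative AI service. The patterns are examples, not a complete filter.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of PII patterns found in a prompt (empty if clean)."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Resident John Doe (SSN 123-45-6789) asked about his permit."
violations = screen_prompt(prompt)
if violations:
    print(f"Blocked: prompt appears to contain {', '.join(violations)}")
else:
    print("Prompt passed the automated screen.")
```

A pattern-based screen catches only the most obvious cases; policies like those above still depend on employee judgment for anything sensitive.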

Maryland counties make decisions about AI

In Maryland, local governments have considered several AI policies in recent years. For example, just this March, Charles County Public Schools adopted AI technology to detect firearms in public school facilities. In 2021, the Baltimore City Council passed legislation banning the use of facial recognition technology by public and private users, with an exception for the Baltimore City police, technically a state agency not under City control.

Additionally, the University of Maryland recently received significant funding from the National Science Foundation (NSF) to lead a multi-institutional effort to develop new AI technologies designed to promote trust and mitigate risks, while also empowering and educating a public increasingly fascinated by the recent rise of eerily human-seeming applications like ChatGPT.