2024 Issue Preview: Artificial Intelligence

With the 2024 Legislative Session rapidly approaching, MACo is profiling some significant issues that stand to gather attention in the General Assembly. Artificial Intelligence (AI) is poised to transform the world and how we live, work, and communicate.

The National Artificial Intelligence Initiative Act of 2020 defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

For state and local governments, AI can mean streamlined processes, enhanced service delivery, improved public safety and security, fraud detection and prevention, data analytics, and regulatory compliance. Perhaps most importantly, this rapidly evolving technology can revolutionize resident engagement and input.

However, implementing and regulating AI comes with challenges, including concerns about privacy, security, oversight, equity, accessibility, and bias — and risks are especially acute in sectors like human resources, healthcare, criminal justice, and financial services.

Proponents of regulatory frameworks warn that rapidly developing, largely untested AI tools are being deployed quickly and ubiquitously without an understanding of their potential negative or dangerous impacts. Opponents of regulation argue that governance should come from within the private sector, as governmental oversight can suppress innovation and development.

While views vary on how much regulation is necessary — or whether any is warranted — most AI proponents agree on the need for some balance between the innovations of AI and fundamental human values. So, as policymakers work to balance privacy concerns with enabling innovation and enhancing services with AI, many questions remain.

The Department of Legislative Services profiles this issue in its annual compilation of Issue Papers:

The continued development of this and other generative AI software systems is drawing the attention of policymakers to better understand the technology, regulate it to protect individuals from potential risks, and to promote the development of safe applications of the technology.


Major Risks

Data Privacy

Although data privacy has been a matter of concern since the advent of the Internet, the complexity of the algorithms that power AI has prompted interest in government regulation of the technology to prevent the improper or unethical use of personal data. However, regulating this aspect of AI is challenging, in part because of intellectual property claims and resistance from the private owners of these technologies to allowing scrutiny of the internal workings of their systems.

Bias and Discrimination

Because AI algorithms and neural networks are trained by humans, existing societal discrimination can be embedded in the data sets that AI systems use, building bias into the way an AI model functions. One set of AI functions identified as potentially biased is facial recognition software used in security or policing contexts. In use by various law enforcement agencies throughout the nation, this software has been shown to be prone to error and unable to accurately recognize minorities, women, and young people. Similarly, some AI software designed to screen resumes for employment consideration has been found to be biased against minorities, women, and older individuals.

Government Initiatives

Federal Action

The National Artificial Intelligence Initiative Act of 2020 became law on January 1, 2021. The Act aims to promote US leadership in AI research and development to advance the nation’s economic prosperity and national security by developing and using trustworthy AI in the public and private sectors and preparing the workforce for the inevitable integration of AI systems.

In consultation with the National Institute of Standards and Technology, this multi-agency initiative has included work by the US Department of Energy to develop the AI Risk Management Playbook as a reference guide to support responsible and trustworthy AI use and development. Though not a binding document, the playbook addresses common AI risks and steps that AI leaders, practitioners, and procurement teams can take to manage data privacy and bias risks.

As previously reported on Conduit Street, in October 2023, President Biden issued an Executive Order to ensure America leads the way in seizing the promise and managing the risks of AI. The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership worldwide, and more.

State Actions

In the 2022 legislative session, artificial intelligence bills and resolutions were introduced in at least 17 states and enacted in Colorado, Florida, Idaho, Maine, Maryland, Vermont, and Washington.

Maryland established the Industry 4.0 Technology Grant Program in the Department of Commerce to provide grants to certain small and medium-sized manufacturing enterprises to assist those manufacturers with implementing new Industry 4.0 technology or related infrastructure. The definition of Industry 4.0 includes AI.

The 2023 legislative session saw an uptick in state legislative action, with at least 25 states, Puerto Rico, and the District of Columbia introducing artificial intelligence bills and 14 states and Puerto Rico adopting resolutions or enacting legislation.

International Efforts

While the US federal and state governments debate and develop proposals for the oversight of AI, the European Union (EU) has led the charge in proposing a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence. In May 2023, the EU adopted a draft of the first rules implementing the Artificial Intelligence Act to ensure a human-centric and ethical development of AI by classifying AI risks into three categories: unacceptable risk, high risk, and unregulated.

What’s Next for Maryland?

At the 2023 MACo Winter Conference, Maryland’s Senior Advisor for Responsible Artificial Intelligence, Nishant Shah, detailed Maryland’s approach to ensuring AI is responsible and productive. Shah also discussed options for governance, mitigating potential risks with proper guardrails, “sandbox” approaches, and continued collaboration and resources for local governments.

At the 2023 MACo Summer Conference, the Senate Chair of the Joint Committee on Cybersecurity, Information Technology, and Biotechnology, Senator Katie Fry Hester, discussed opportunities and challenges for AI applications across several policy sectors, including health, education, environment/climate, public safety, employment, and housing. Senator Hester also discussed several areas the General Assembly must consider with AI, including taking an inventory of existing AI applications across the state, government procurement, innovation, efficiency, and transparency.

Stay tuned to Conduit Street for more information.

Read more about artificial intelligence and other issues facing Maryland in the DLS Issue Papers.