President Biden Issues Executive Order to Establish AI Safeguards

This week, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI).

AI is poised to transform the world and how we live, work, and communicate. AI has the potential to create significant opportunities for government transformation, enabling greater efficiency while improving the accuracy and effectiveness of decision-making and service delivery.

For county governments, AI can mean streamlined processes, enhanced services, improved public safety and security, fraud detection and prevention, data analytics, and regulatory compliance. Perhaps most importantly, this rapidly evolving technology can revolutionize resident engagement and input.

However, implementing and regulating AI poses challenges, including concerns about privacy, security, oversight, equity, accessibility, and bias. So, as local governments explore AI's transformative power, many questions remain.

According to the Biden Administration, the Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership worldwide, and more.

According to the White House: "The actions that President Biden directed today are vital steps forward in the US's approach to safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation."

The White House breaks the critical components of the executive order into eight parts:

  • Creating new safety and security standards for AI, including requiring some AI companies to share safety test results with the federal government, directing the Commerce Department to develop guidance for AI watermarking, and creating a cybersecurity program to build AI tools that help identify flaws in critical software.
  • Protecting consumer privacy, including by creating guidelines that agencies can use to evaluate privacy techniques used in AI.
  • Advancing equity and civil rights by providing guidance to landlords and federal contractors on keeping AI algorithms from furthering discrimination, and by creating best practices on the appropriate role of AI in the justice system, including its use in sentencing, risk assessments, and crime forecasting.
  • Protecting consumers overall by directing the Department of Health and Human Services to create a program to evaluate potentially harmful AI-related healthcare practices, and by developing resources on how educators can responsibly use AI tools.
  • Supporting workers by producing a report on the potential labor market implications of AI and studying the ways the federal government could help workers affected by a disruption to the labor market.
  • Promoting innovation and competition by expanding grants for AI research in areas such as climate change and by modernizing visa criteria so that highly skilled immigrant workers with crucial expertise can stay in the US.
  • Working with international partners to implement AI standards around the world.
  • Developing guidance for federal agencies’ use and procurement of AI and speeding up the government’s hiring of workers skilled in the field.

Read the Executive Order fact sheet for more information.

Interested in learning more about the responsible deployment of AI? Don’t miss the MACo Winter Conference session, “Let’s Talk AI – The Good, the Bad, the Ugly,” on December 7, 2023.

Learn more about MACo's Winter Conference.