
As the European Union unites its 27 nations under the Artificial Intelligence Act, regulation on the other side of the Atlantic is following a different path: the United States is pursuing a patchwork of federal, state and industry frameworks. Compliance and oversight have become more complicated than ever.
Unlike the EU, the United States does not have a regulatory body specific to artificial intelligence. In lieu of an overarching authority, entities throughout government and industry have responded with their own guidance, seemingly at the speed of ChatGPT itself.
In October 2022, the White House released its Blueprint for an AI Bill of Rights through the Office of Science and Technology Policy, and it issued an Executive Order on AI one year later. In February 2024, Securities and Exchange Commission Chair Gary Gensler called for guardrails. And these are just a few of the many laws, policies and frameworks now emerging.
Here, we provide an overview of AI regulations in the U.S., diving deeper into the following topics:
To move these principles into practice in an organization’s own operations, it’s helpful to understand what is, and isn’t, considered AI in the United States.
Fortunately, the White House Executive Order provides guidance here. It defines AI as:
Organizations will need to prepare to comply with U.S.-based AI regulations if the organization’s operations include:
Moreover, it's prudent for all organizations, regardless of geography or sector, to prepare for some form of regulatory requirements around the use of AI. As AI technology advances and becomes embedded in more areas of business operations, regulatory bodies worldwide are increasingly focused on ensuring these technologies are used ethically and responsibly.
As a result of this decentralized governance approach, organizations now have AI guidance, both mandated and voluntary, from a variety of sources, which bring a rich array of perspectives to apply to their own unique situations and technology applications. "It's about understanding the use cases in your organization and how you are going to have that oversight," said Nonie Dalton, Vice President of Product Management at Diligent, in a recent blog.
But there's a downside as well. Organizations must also contend with a host of disparate rules and regulations, each with its own limitations and all bearing the potential for overlap and conflict when considered en masse.
How can boards and executives navigate this growing labyrinth, so new regulations, requirements and risks don’t catch them by surprise? Here are a few foundational frameworks, as well as policies in development, to keep in mind.
Even as AI is a technological innovation running on models, algorithms and analytics, the U.S. approach to AI governance puts humans — and human decision-making — at the center of it all. “In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built,” the White House’s Executive Order declares.
That Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence lists eight key principles for the responsible development and deployment of AI:
AI has swiftly become a regular consideration across federal policy, especially when this policy involves national security, the supply chain and innovation. The CHIPS and Science Act of 2022, which allocates billions to the semiconductor industry, lists AI among its key technology areas, with a charge to develop AI that is “safe, secure, fair, transparent and accountable, while ensuring privacy, civil rights and civil liberties.”
In 2024, bipartisan efforts to shape and regulate AI across government and daily life have proliferated in Congress. Examples include a bipartisan bill to strengthen the nation's AI workforce pipeline, hearings on AI in national security and on AI-related privacy concerns, and a subcommittee examining standards and policies for AI and intellectual property.
As these laws take shape, federal agencies are crafting their own AI governance, driven by an official mandate to put the eight key principles of the White House Executive Order into action. At every U.S. agency, working groups are evaluating the development and use of AI, drafting regulations and identifying opportunities for engagement with the private sector, with timelines and objectives set out in the Executive Order itself.
Compliance is challenging when dealing with a work in progress — but it’s a challenge organizations dealing with American-made AI systems or using AI in their U.S. operations must accept.
What should you keep in mind as AI regulations in the U.S. evolve?
As AI regulations in the U.S. continue to take shape, it’s also important to get your compliance program in place. This includes:
But federal policies are just one part of the picture for AI regulations in the U.S.
In addition to the many activities by the executive and legislative branches, AI regulations in the U.S. also involve state and local rules. Just a few examples include:
Directors and executives play a critical role in shaping the strategic direction and ethical foundation of their organizations. As AI continues to transform industries worldwide, the regulatory landscape surrounding AI is evolving rapidly, and the U.S. landscape is particularly difficult to keep up with. It's essential for leaders to stay informed about these changes to ensure compliance, mitigate risks and leverage AI opportunities responsibly.
To help leaders face this challenge, the Diligent Institute created its AI Ethics & Board Oversight Certification course. The certification equips leaders with the tools and knowledge to stay on top of AI regulations in the U.S. and make ethical, informed decisions, ensuring their organizations navigate AI complexities with integrity and compliance.