Nearly all the big AI news this year was about how fast the technology is progressing, the harms it's causing, and speculation about how soon it will grow past the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here is a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and global levels.
State
U.S. state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, like creating study committees, while others would have imposed serious civil liability on AI developers in the event their creations cause catastrophic harm to society. The vast majority of the bills failed to pass, but several states enacted meaningful legislation that could serve as models for other states or Congress (assuming Congress ever starts functioning again).
As AI slop flooded social media ahead of the election, politicians in both parties got behind anti-deepfake laws. More than 20 states now have prohibitions against deceptive AI-generated political advertisements in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.
Unsurprisingly, given that it's the backyard of the tech industry, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to implement safety precautions and held companies liable for catastrophic damages caused by their systems. That bill passed both houses of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.
Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibits the distribution of a dead person's AI-generated likeness without prior consent and mandates that agreements for living people's AI-generated likenesses must clearly specify how the content will be used.
Colorado passed the first law of its kind in the U.S., requiring companies that develop and use AI systems to take reasonable steps to ensure the tools aren't discriminatory. Consumer advocates called the legislation an important baseline. Similar bills are likely to be hotly debated in other states in 2025.
And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law that prohibits any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.
Federal
Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers did very little.
Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden's 2023 executive order on AI. And several regulators, particularly the Federal Trade Commission and Department of Justice, cracked down on misleading and harmful AI systems.
The work agencies did to comply with the AI executive order wasn't particularly sexy or headline grabbing, but it laid important foundations for the governance of public and private AI systems in the future. For example, federal agencies embarked on an AI-talent hiring spree and created standards for responsible model development and harm mitigation.
And, in a big step toward increasing the public's understanding of how the government uses AI, the Office of Management and Budget wrangled (most of) its fellow agencies into disclosing critical information about the AI systems they use that may impact people's rights and safety.
On the enforcement side, the FTC's Operation AI Comply targeted companies using AI in deceptive ways, such as to write fake reviews or provide legal advice, and it sanctioned AI-gun detection company Evolv for making misleading claims about what its product could do. The agency also settled an investigation with facial recognition company IntelliVision, which it accused of falsely saying its technology was free of racial and gender bias, and banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company was using the tools to discriminate against shoppers.
The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing the real estate software company RealPage of a massive algorithmic price-fixing scheme that raised rents across the nation. It also won several antitrust lawsuits against Google, including one involving the company's monopoly over internet searches that could significantly shift the balance of power in the burgeoning AI search industry.
Global
In August, the European Union's AI Act went into effect. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards around training data quality and human oversight. It also bans the use of other AI systems, such as algorithms that could be used to assign a country's residents social scores that are then used to deny rights and privileges.
In September, China issued a major AI safety governance framework. Like similar frameworks published by the U.S. National Institute of Standards and Technology, it's non-binding but creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.
One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country's legislature passed a comprehensive AI safety bill. It faces a challenging road ahead, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or negotiate compensation agreements that would be based, in part, on the size of the AI developer and how the material would be used.
Like the EU's AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.