California is known for taking on regulatory issues like data privacy and social media content moderation, and its latest target is AI. The state's legislature recently passed SB 1047, one of the US's first and most significant frameworks for governing artificial intelligence systems. The bill contains sweeping AI safety requirements aimed at the potentially existential risks of "foundation" AI models trained on vast swaths of human-made and synthetic data.
SB 1047 has proven controversial, drawing criticism from the likes of Mozilla (which expressed concern it would harm the open-source community); OpenAI (which warned it could hamper the AI industry's growth); and Rep. Nancy Pelosi (D-CA), who called it "well-intentioned but ill informed." But particularly after an amendment that softened some provisions, it garnered support from other parties. Anthropic concluded that the bill's "benefits likely outweigh its costs," while former Google AI lead Geoffrey Hinton called it "a sensible approach" for balancing the risks and advancement of the technology.
Governor Gavin Newsom hasn't indicated whether he will sign SB 1047, so the bill's future is hazy. But the biggest foundation model companies are based in California, and its passage would affect them all.
SB 1047 has passed the California Senate.
The Senate was widely expected to pass the bill, which has now officially cleared every hurdle but a final signature from Governor Gavin Newsom. Newsom has until the end of September to make his call.
California legislature passes sweeping AI safety bill
The California State Assembly and Senate have passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first significant regulations of artificial intelligence in the US.
The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against "unsafe post-training modifications," and maintaining a testing procedure to evaluate whether a model or its derivatives is especially at risk of "causing or enabling a critical harm."
California legislator files bill prohibiting agencies from working with unethical AI companies
A second California state legislator has introduced bills meant to regulate AI systems, particularly those used by government agencies.
Senator Steve Padilla, a Democrat, introduced Senate Bills 892 and 893, establishing a public AI resource and creating a "safe and ethical framework" around AI for the state. Senate Bill 892 would require California's Department of Technology to develop safety, privacy, and non-discrimination standards around services utilizing AI. It also prohibits the state of California from contracting any AI services "unless the provider of the services meets the established standards."
California lawmaker proposes regulation of AI models
A California lawmaker will file a bill seeking to make generative AI models more transparent and start a conversation in the state on how to regulate the technology.
Time reports that California Senator Scott Wiener (D) has drafted a bill requiring "frontier" model systems, usually classified as large language models, to meet transparency standards once they exceed a certain amount of computing power. Wiener's bill would also propose security measures so AI systems don't "fall into the hands of foreign states" and attempt to establish a state research center on AI outside of Big Tech.