Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith is calling for urgent action from policymakers to protect elections, guard seniors from fraud, and shield children from abuse.
“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
Microsoft wants a “deepfake fraud statute” that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”
The Senate recently passed a bill cracking down on sexually explicit deepfakes, allowing victims of nonconsensual sexually explicit AI deepfakes to sue their creators for damages. The bill passed months after middle and high school students were found to be fabricating explicit images of female classmates, and after trolls flooded X with graphic AI-generated fakes of Taylor Swift.
Microsoft has had to implement more safety controls for its own AI products, after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.
While the FCC has already banned robocalls that use AI-generated voices, generative AI makes it easy to create fake audio, images, and video, something we’re already seeing in the run-up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week, in a post that appears to violate X’s own policies against synthetic and manipulated media.
Microsoft wants posts like Musk’s to be clearly labeled as deepfakes. “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content,” says Smith. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”