The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws


At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step toward opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to take part in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.

“The average person using one of these models doesn’t really have the ability to determine whether or not the model is fit for purpose,” says Theo Skeadas, CEO of the AI governance and online safety group Tech Policy Consulting, which works with Humane Intelligence. “So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs.”

The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use NIST's AI risk management framework, known as AI 600-1, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems' expected behavior.

“NIST's ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security AI Safety and Security Board. “The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI.”

Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.

“The community should be broader than programmers,” Skeadas says. “Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that less represented groups like individuals who speak minority languages or are from nonmajority cultures and perspectives are able to participate in this process.”
