Anthropic Wants Its AI Agent to Control Your Computer


Demos of AI agents can look stunning, but getting the technology to perform reliably and without annoying, or costly, errors in real life can be a challenge. Current models can answer questions and converse with almost human-like skill, and are the backbone of chatbots such as OpenAI’s ChatGPT and Google’s Gemini. They can also perform tasks on computers when given a simple command by accessing the computer screen as well as input devices like a keyboard and trackpad, or through low-level software interfaces.
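The screen-and-input loop described above can be sketched in a few lines: the agent repeatedly shows the model the current screen state and executes whatever action the model returns. The sketch below is a simplified simulation for illustration only; `fake_model`, the action names, and the string-based "screenshots" are all invented stand-ins, not Anthropic's actual API.

```python
# Toy sketch of a computer-use agent loop (hypothetical; a real system
# would send actual screenshots to a model and drive a real mouse/keyboard).

def fake_model(screenshot, goal):
    """Stand-in for a vision-language model: maps screen state to an action."""
    if "login form" in screenshot:
        return {"action": "type", "text": goal}
    return {"action": "click", "x": 100, "y": 200}

def run_agent(goal, screens):
    """Show each screen state to the model and record the chosen actions."""
    actions = []
    for screenshot in screens:
        actions.append(fake_model(screenshot, goal))
    return actions

log = run_agent("alice@example.com", ["desktop", "login form visible"])
```

Real agent loops add the pieces this sketch omits: capturing screenshots, executing the clicks and keystrokes, and deciding when the task is done or has failed.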

Anthropic says that Claude outperforms other AI agents on several key benchmarks, including SWE-bench, which measures an agent's software development skills, and OSWorld, which gauges an agent's ability to use a computer operating system. The claims have yet to be independently verified. Anthropic says Claude performs tasks in OSWorld correctly 14.9 percent of the time. This is well below humans, who generally score around 75 percent, but considerably higher than the current best agents, including OpenAI’s GPT-4, which succeed around 7.7 percent of the time.

Anthropic claims that several companies are already testing the agentic version of Claude. They include Canva, which is using it to automate design and editing tasks, and Replit, which uses the model for coding chores. Other early users include The Browser Company, Asana, and Notion.

Ofir Press, a postdoctoral researcher at Princeton University who helped develop SWE-bench, says that agentic AI tends to lack the ability to plan far ahead and often struggles to recover from errors. “In order to show them to be useful we must get strong performance on tough and realistic benchmarks,” he says, like reliably planning a wide range of trips for a user and booking all the necessary tickets.

Kaplan notes that Claude can already troubleshoot some errors surprisingly well. When faced with a terminal error while trying to start a web server, for instance, the model knew how to revise its command to fix it. It also worked out that it had to enable popups when it ran into a dead end browsing the web.

Many tech companies are now racing to develop AI agents as they chase market share and prominence. In fact, it may not be long before many users have agents at their fingertips. Microsoft, which has poured upwards of $13 billion into OpenAI, says it is testing agents that can use Windows computers. Amazon, which has invested heavily in Anthropic, is exploring how agents could recommend and eventually buy goods for its customers.

Sonya Huang, a partner at the venture firm Sequoia who focuses on AI companies, says that for all the excitement around AI agents, most companies are really just rebranding AI-powered tools. Speaking to WIRED ahead of the Anthropic news, she says that the technology currently works best when applied in narrow domains such as coding-related work. “You need to choose problem spaces where if the model fails, that's okay,” she says. “Those are the problem spaces where truly agent-native companies will arise.”

A key challenge with agentic AI is that errors can be far more problematic than a garbled chatbot reply. Anthropic has imposed certain constraints on what Claude can do, for example limiting its ability to use a person’s credit card to buy stuff.

If errors can be avoided well enough, says Press of Princeton University, users may learn to see AI, and computers, in a wholly new way. “I'm super excited about this new era,” he says.
