A Lawsuit Against Perplexity Calls Out Fake News Hallucinations


Perplexity did not respond to requests for comment.

In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

OpenAI is facing its own accusations of trademark dilution, though. In The New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed that the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

"Copying quality articles to run substitutive, commercialized generative AI products is unlawful, arsenic we made wide successful our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT manager of outer communications Charlie Stadtlander. “We applaud this suit from Dow Jones and the New York Post, which is an important measurement toward ensuring that steadfast contented is protected from this benignant of misappropriation."

If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

“It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate, by predicting words that sound right in response to prompts, is always a kind of hallucination; sometimes it's just more plausible-sounding than others.

“We only call it a hallucination if it doesn't match up with our reality, but the process is exactly the same whether we like the output or not.”
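To make Sag's point concrete, here is a minimal sketch of the next-token loop he describes, assuming the Hugging Face transformers library and the small open GPT-2 model (neither appears in the article, and the prompt is an invented example). The model simply ranks candidate continuations and keeps the most plausible one; no step checks the result against reality.

```python
# A minimal sketch of greedy next-token generation, assuming the Hugging Face
# `transformers` library and the open "gpt2" model; the prompt is an invented
# example, not taken from the lawsuit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "According to the newspaper, red wine in moderation is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(12):
        logits = model(ids).logits[0, -1]   # scores for every candidate next token
        next_id = torch.argmax(logits)      # keep whichever token sounds most plausible
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

# Nothing in the loop consults a source of truth: a "hallucination" and a
# correct answer come out of exactly the same procedure.
print(tokenizer.decode(ids[0]))
```

Whether the printed sentence happens to be true or false, the computation that produced it is identical, which is why Sag argues a guarantee against hallucination is impossible.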
