OpenAI claims The New York Times “hacked” ChatGPT to create misleading evidence
OpenAI has asked a federal judge to dismiss parts of The New York Times' copyright lawsuit against it, arguing that the newspaper "hacked" its chatbot ChatGPT and other artificial intelligence systems to generate misleading evidence for the case.
OpenAI said in a filing in Manhattan federal court on Monday that the Times used the technology "to reproduce its content through misleading prompts that clearly violate OpenAI's terms of use."
"The allegations made in the Times' complaint do not meet its famously rigorous journalistic standards," OpenAI said. "The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI's products."
Representatives for The New York Times and OpenAI did not immediately respond to requests for comment on the filing.
The Times sued OpenAI and its biggest financial backer, Microsoft, in December, accusing them of using millions of its articles without permission to train chatbots that provide information to users.
The Times is one of several copyright owners, including groups of authors, visual artists and music publishers, that have sued tech companies over the alleged misuse of their work in AI training.
Tech companies have said their AI systems make fair use of copyrighted material and that the lawsuits threaten the growth of a potentially multitrillion-dollar industry.
The complaint accused OpenAI and Microsoft of “taking advantage of the Times’s heavy investment in journalism” and trying to create an alternative to the newspaper. It cited several examples in which OpenAI and Microsoft chatbots gave users almost verbatim excerpts from its articles when prompted.
OpenAI said in its filing that it took the Times "thousands of attempts to produce a highly unusual result."
OpenAI added that, in normal use, no one can use ChatGPT to serve up Times articles at will.