- OpenAI alleges that the New York Times "hacked" ChatGPT and other AI systems to manufacture misleading evidence
- The New York Times has accused OpenAI of copyright infringement in the training of its chatbots
Technology companies and copyright holders, including musicians and publishers, have been at odds over whether AI training violates copyright law. The New York Times sued OpenAI and Microsoft, accusing them of using millions of its articles without permission to train chatbots. OpenAI countered that the New York Times "hacked" ChatGPT and other AI systems to generate misleading evidence for the lawsuit.
A Debate
Claiming that the New York Times flagrantly violated OpenAI's terms of use through "deceptive prompts" that caused the technology to reproduce its material, OpenAI asked a federal judge to dismiss parts of The Times' lawsuit. OpenAI contends that the New York Times' allegations are untrue and that the newspaper instead hired someone to manipulate OpenAI's products.
Who is right?
The New York Times said in a statement that, contrary to OpenAI's claims, it was simply gathering evidence that its copyrighted works had been stolen and copied. The New York Times sued OpenAI and Microsoft in December of last year, accusing them of using millions of its articles without permission to train chatbots that serve information to users. Many other copyright owners have raised similar claims, but tech companies maintain that training chatbots does not constitute infringement.
The court has yet to make a definitive ruling.