‘Forgetting’ techniques in AI impact model performance

  • New research has found that popular “forgetting” techniques, designed to make AI models unlearn specific unwanted data, can also significantly degrade model performance, sometimes rendering the models unusable.
  • The researchers note that there is currently no effective “forgetting” method that allows models to forget specific data without significant loss of model utility.

OUR TAKE
“Forgetting” techniques are designed to make AI models unlearn specific, unwanted information picked up from training data, such as sensitive private data or copyrighted material. The study finds that current techniques can impair a model’s ability to answer even basic questions, degrading its overall performance. The researchers conclude that no existing “forgetting” method allows a model to forget specific data without a significant loss of utility.

-Rae Li, BTW reporter

What happened

“Forgetting” techniques allow an AI model to forget specific, unwanted information learned from training data, such as sensitive private data or copyrighted material. Using a benchmark called MUSE, the researchers evaluate the “forgetting” effect of different algorithms and find that while these techniques do make models forget specific information, they also reduce the models’ overall usefulness, especially their ability to answer basic questions.

The study, conducted by researchers at the University of Washington, Princeton University, the University of Chicago, the University of Southern California, and Google, tests eight publicly available algorithms and finds that while these “forgetting” techniques make models forget specific data, such as the Harry Potter books, they also damage the models’ related knowledge. Designing effective forgetting methods is a challenge, the researchers note, because knowledge is intertwined inside a model. For example, a model may have learned both the Harry Potter books and the freely available content on the Harry Potter wiki, so attempting to remove the books also affects the model’s knowledge of the wiki. The researchers believe a solution to this problem has not yet been found and that further research is needed.
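This entanglement effect can be sketched with a toy example (not the study’s method, and far simpler than a language model): a small linear classifier is trained on two “sources” that share the same underlying knowledge, then made to “forget” one source via gradient ascent on its loss, a common baseline unlearning idea. All names, sizes, and learning rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10

# Two "sources" that encode the same underlying rule
# (analogy: the Harry Potter books and the Harry Potter wiki).
w_true = rng.normal(size=d)
X_books = rng.normal(size=(n, d))                 # forget set
X_wiki = rng.normal(size=(n, d))                  # retained set
y_books = (X_books @ w_true > 0).astype(float)
y_wiki = (X_wiki @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

# Train one logistic-regression model on both sources together.
X = np.vstack([X_books, X_wiki])
y = np.concatenate([y_books, y_wiki])
w = np.zeros(d)
for _ in range(300):
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / len(y)

# "Unlearn" the books by gradient *ascent* on the forget set only.
w_unlearned = w.copy()
for _ in range(300):
    w_unlearned += 0.1 * X_books.T @ (sigmoid(X_books @ w_unlearned) - y_books) / n

forget_before = accuracy(w, X_books, y_books)
forget_after = accuracy(w_unlearned, X_books, y_books)
retain_before = accuracy(w, X_wiki, y_wiki)
retain_after = accuracy(w_unlearned, X_wiki, y_wiki)
```

Because the two sources encode the same rule, erasing the “books” also wrecks accuracy on the retained “wiki” data, which is the trade-off the study observes at much larger scale.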

Also read: What are server rooms used for and their main suppliers?

Also read: HP’s breakthrough in accessible AI solutions

Why it’s important 

With the widespread adoption of AI technology, handling and managing sensitive information in training data has become critical. “Forgetting” techniques are designed to let AI models remove or forget specific information from their training data, which matters for protecting privacy, complying with copyright law, and responding to data deletion requests.

However, the study suggests that while existing “forgetting” techniques can make AI models discard unwanted information, they may also impair model performance, particularly the ability to answer basic questions. In other words, “forgetting” techniques are theoretically appealing but can have unintended consequences for the reliability and usefulness of AI models. The finding therefore highlights the need for deeper research and innovation in the design and deployment of “forgetting” techniques, so that AI systems remain efficient and accurate.

Rae Li

Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
