How criminals used AI face apps to swindle users: A China case study exposes the risks

  • Four individuals in Guangzhou were convicted of illegally handling citizens’ personal information, using AI to turn photos into fake facial recognition videos; they received prison sentences and fines.
  • Regarding Miaoya’s success, industry insiders speculated that the app might have earned over ¥100,000 in a single day.
  • Zhang Tianyi, senior product manager at RuiLai Intelligent, a startup incubated by Tsinghua University’s AI Research Institute, emphasizes the importance of securing data in the AIGC era, as large models see widespread application.

China’s first civil public interest lawsuit involving face recognition

Recently, the Guangzhou Internet Court released details of a case in which AI was used to turn photos into videos in order to get around face-scanning security software. Through two so-called businesses, “Check Head” and “Pass Face”, users uploaded photos of a target, which a man identified only as Zheng turned into moving videos of that person’s face, used to access bank accounts and payment services, among other things.

Given the prevalence of payment platforms like WeChat and Alipay in China, which use facial recognition as a security measure, this would allow unscrupulous people to upload photos of their friends, colleagues or even strangers, to access banking accounts and payment services.

According to Zheng’s confession, he bought personal photos matched to specific ID card numbers from unspecified sources on social platforms, paying around 15 to 20 yuan per photo. Three other culprits, named as Ren, Dai, and Chen, then purchased citizens’ personal information from Zheng’s group at between 50 and 100 yuan per photo. They used artificial intelligence software to create fake dynamic facial recognition videos capable of nodding, blinking, and other movements, which they used to unblock accounts and pass real-name verification on certain apps, earning illegal profits.

The “Pass Face” service takes a person’s facial information and uses synthesis software to create simulated dynamic videos of them. The actions required by current facial verification processes, such as looking left or right, opening the mouth, or tilting the head, can all be generated synthetically. At the facial verification stage of an app or account check, if the video’s facial clarity meets the required standard, the system deems it a genuine human operation; the verification is bypassed and the account can be hijacked.
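The weakness described above, a verifier that checks only clarity and the requested actions rather than true liveness, can be sketched as follows. This is a hypothetical illustration: every name, class, and threshold here is invented, not a real verification API.

```python
import random
from dataclasses import dataclass

# Illustrative sketch of a naive challenge-response face check of the kind
# the article describes. All names and thresholds are invented.

ACTIONS = ["look_left", "look_right", "open_mouth", "tilt_head"]

@dataclass
class VideoFrame:
    action: str       # the action this frame appears to show (as classified)
    sharpness: float  # clarity score in [0.0, 1.0]

def issue_challenge(n: int = 2) -> list[str]:
    """Server picks a random sequence of actions the user must perform."""
    return random.sample(ACTIONS, n)

def naive_verify(challenge: list[str], frames: list[VideoFrame],
                 min_sharpness: float = 0.8) -> bool:
    """Passes if each requested action appears in some sufficiently sharp
    frame. Crucially, nothing checks that the video is live."""
    for action in challenge:
        if not any(f.action == action and f.sharpness >= min_sharpness
                   for f in frames):
            return False
    return True

# An attacker holding pre-synthesized clips of every action always passes:
synthesized = [VideoFrame(a, 0.95) for a in ACTIONS]
print(naive_verify(issue_challenge(), synthesized))  # True
```

Because the attacker's synthesized library covers all possible actions at high clarity, the random challenge offers no protection; defeating this requires genuine liveness signals rather than clarity thresholds.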

According to the suspects’ admissions, after bypassing facial recognition, criminals could access other people’s accounts on applications like WeChat, obtaining private information such as chat records, payment records, and movement histories. After judicial review, the four individuals were found guilty of illegally handling over 2,000 pieces of personal information, with illegal gains exceeding $15,000. They were sentenced to prison terms ranging from one year to one year and two months, and each was fined.

Also read: General AI apocalypse? Relax, it’s more hype than reality

Unabated popularity of AI face swapping applications

The case has sparked intense debate in China about the exposure and potential misuse of personal information in facial recognition apps, which are numerous and hugely popular.

Artificial intelligence face-swapping software primarily relies on deepfake techniques. Commercial face-swapping applications such as Reface, FaceShow, and DeepFaceLive are widely available. In China, AI face-swapping technologies have faced criticism in the past over privacy concerns.

Last August, the “Miaoya Camera,” an AI portrait app, suddenly gained popularity, flooding social media with users sharing AI-generated portraits.

(Image: the user’s uploaded photo, lower right, converted into an AI portrait)

The initial fee of $1.39 for creating a digital avatar, compared to the high costs of professional photography studios, seemed negligible. As more people joined, the app’s processing speed slowed down, with reports of over 2,000 people waiting in line for portrait generation on the second night of its launch.

Regarding Miaoya’s success, industry insiders speculated that the app might have earned over ¥100,000 a day. According to the Miaoya team, the AI model behind the app is named “Tiziano”, after the portrait master Tiziano Vecellio. Although the team has not disclosed technical details, Miaoya likely builds on open-source large models such as Stable Diffusion (SD), fine-tuning them for each user.

To use Miaoya, users must upload a clear frontal photo and at least 20 additional photos with varied lighting, backgrounds, angles, and expressions. After the digital avatar is created, users can choose from over 30 templates for portraits, including vintage, forest-themed, business, and oil painting styles.
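The upload requirements described above amount to a simple validation rule. The sketch below is hypothetical: the `Photo` fields, the variety threshold, and the function are all invented for illustration, not Miaoya's real interface.

```python
from dataclasses import dataclass

# Hypothetical sketch of the article's upload rules: one clear frontal photo
# plus at least 20 additional photos with varied conditions. All fields and
# thresholds are invented.

@dataclass
class Photo:
    frontal: bool
    clear: bool
    lighting: str    # e.g. "indoor", "outdoor", "backlit"
    background: str
    angle: str

def validate_upload(photos: list[Photo]) -> tuple[bool, str]:
    if not any(p.frontal and p.clear for p in photos):
        return False, "need at least one clear frontal photo"
    if len(photos) < 21:  # 1 frontal + 20 additional
        return False, "need at least 20 additional photos"
    varied = {(p.lighting, p.background, p.angle) for p in photos}
    if len(varied) < 5:   # arbitrary variety threshold for this sketch
        return False, "photos too similar: vary lighting, background, angle"
    return True, "ok"

photos = [Photo(True, True, "indoor", "wall", "front")] + [
    Photo(False, True, f"light{i % 5}", f"bg{i % 4}", f"angle{i % 3}")
    for i in range(20)
]
print(validate_upload(photos))  # (True, 'ok')
```

The variety check matters because per-user fine-tuning of a diffusion model generalizes poorly from near-identical inputs.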

Initially, Miaoya’s terms allowed unrestricted use of AI-generated content for various purposes, and the company gave no clear answer on how users’ information would be used, sparking public outcry. The company later issued an apology and revised the terms, promising that photos are used only for digital avatar creation and are automatically deleted afterwards. The revised terms explicitly prohibit the illegal retention of identifiable information, user profiling based on input information, and the provision of user input to third parties.

“It’s too early to say that AI portraits are going to replace offline photo booths.”

Wang Peng, associate researcher at the Beijing Academy of Social Sciences

Wang Peng, associate researcher at the Beijing Academy of Social Sciences, believes that while AI portraits have gained popularity in the AIGC era, building a genuine business model still requires extensive effort: the demand for computing power in inference exceeds that in training, and computing cost remains a bottleneck for AIGC applications. “It’s too early to say that AI portraits are going to replace offline photo booths,” he said.

Compliance requirements for AI face swapping service providers

Zhang Tianyi, senior product manager at RuiLai Intelligent, a startup incubated by Tsinghua University’s AI Research Institute, emphasizes the importance of securing data in the era of AIGC with the widespread application of large models. “Misuse of AIGC models can lead to content compliance issues, including deceptive content generated by deepfakes and diffusion models, which can misguide users and have adverse social effects.”

Recent regulatory documents on AIGC governance have been rapidly issued from central to local authorities in China:

  • AI algorithm registration

As mentioned earlier, AI face-swapping software relies on deepfake technology, which falls under the category of AIGC services with public opinion or social mobilization capabilities in China. Providers must complete the algorithm registration procedures outlined in the “Regulations on the Management of Internet Information Service Algorithm Recommendation”.


  • Personal information protection impact assessment (PIA)

AIGC service providers handling sensitive personal information must conduct a personal information protection impact assessment (PIA) covering the legal and compliance aspects of their data processing. The PIA should evaluate the legality, necessity, and appropriateness of the processing, the potential impact on individual rights, security risks, and the effectiveness of protection measures. PIA reports and records of actions taken must be kept for at least three years.
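The three-year retention duty can be made concrete with a minimal record check. This is a sketch under my own assumptions: the function and the day-count are illustrative, not a legal interpretation of the PIPL.

```python
from datetime import date, timedelta

# Illustrative sketch of the "keep PIA reports at least three years" duty.
# The exact day-count is an assumption for illustration only.
RETENTION_DAYS = 3 * 365

def may_discard(report_date: date, today: date) -> bool:
    """A PIA report may only be discarded once the retention period passed."""
    return today - report_date >= timedelta(days=RETENTION_DAYS)

print(may_discard(date(2021, 1, 1), date(2023, 1, 1)))  # False: under 3 years
print(may_discard(date(2020, 1, 1), date(2024, 1, 1)))  # True: over 3 years
```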

PIA is a legal obligation under the Personal Information Protection Law. AI face-swapping service providers failing to fulfill this obligation may face corrective orders, warnings, confiscation of illegal gains, fines, and other penalties from relevant authorities.

  • Identification of AI-generated content

Providers of AI face-swapping services, classified as providers of AIGC services, must identify generated content in compliance with the “Internet Information Service Deep Synthesis Management Regulations.” The National Information Security Standardization Technical Committee released guidelines in August on the identification of AIGC service content, requiring the inclusion of at least the service provider’s name in implicit watermarks on AI-generated images, audio, and video content.
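The implicit-watermark requirement can be illustrated with a toy scheme that hides the provider's name in the least-significant bits of pixel values. Real AIGC watermarking schemes are far more robust against compression and editing; this sketch, with an invented provider name, only shows the idea that the mark is invisible to viewers but machine-recoverable.

```python
# Toy implicit watermark: embed a provider name in pixel LSBs.
# "BTW-media" is an example name, not a real watermarking standard.

def embed(pixels: list[int], name: str) -> list[int]:
    # Flatten the name into bits, most significant bit of each byte first.
    bits = [(b >> i) & 1 for b in name.encode() for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for watermark"
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: list[int], name_len: int) -> str:
    bits = [p & 1 for p in pixels[: name_len * 8]]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode()

marked = embed(list(range(200)), "BTW-media")
print(extract(marked, len("BTW-media")))  # BTW-media
```

Each pixel changes by at most one intensity level, so the watermark is imperceptible, yet any party who knows the scheme can recover the provider's name.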

Conclusion

AI face-swapping products, as a significant application of artificial intelligence, have brought a novel experience to a large number of users. However, their compliant development faces significant challenges. On one hand, AI face-swapping products can become breeding grounds for malicious activities, leading to concerns such as the recent “one-click undressing” incidents, facial information leaks causing infringement of portrait rights, copyright violations, and fraud risks. On the other hand, in addition to compliance obligations mentioned earlier, AI face-swapping products dealing with facial information struggle to obtain authorization for copyrights, portrait rights, and other rights of all entities involved.

For providers of AI face-swapping services, ensuring proper handling of personal information within the current legal framework, completing necessary regulatory requirements such as personal information safety impact assessments and registrations, and creating applications that align with mainstream values are formidable tasks. The road ahead is challenging, but the responsibility is substantial.

Coco Yao

Coco Yao was an intern reporter at BTW media covering artificial intelligence and media. She is studying broadcasting and hosting at the Communication University of Zhejiang.
