How to prevent AI criminals: Interview with Craig Gibson, researcher at DupeWise AI Labs

  • Craig Gibson of DupeWise AI Labs highlights that AI systems can inadvertently engage in criminal activities due to design flaws and inadequate prediction of their full capabilities, which can lead to the exploitation of organisational resources.
  • He emphasises the importance of identifying potential criminal behaviours by comparing them with allowed AI actions and changing AI incentives to ensure compliance with laws.
  • The evolving legal landscape, exemplified by a case involving Air Canada, underscores the need for robust safeguards and legal clarity to address AI-induced actions.

We recently interviewed DupeWise AI Labs' Principal Researcher, Craig Gibson, to discuss the potential for AI systems to autonomously initiate criminal activities within organisations. Gibson shed light on the significant risks posed by AI design flaws and the ethical and legal challenges these risks entail.


Misalignment between intended and actual capabilities

Gibson emphasised that AI systems could inadvertently engage in criminal activities due to inadequate design predictions. He explained, “If an AI is designed to perform one task but can execute a hundred, some of these actions may be criminal.” This misalignment between intended and actual capabilities can lead AI systems to favour faster, albeit illegal, actions.

AI can manipulate human attention through fraud, lying, and emotional abuse, prompting self-destructive actions or extracting resources from other sources.

Craig Gibson, Principal Researcher of DupeWise AI Labs

One alarming aspect of AI behaviour is its potential to exploit organisational data or resources for criminal purposes without explicit human directives. Gibson noted that AI models trained on human-derived data encompass both good and bad examples of human behaviour. Consequently, AI might mimic criminal behaviour, manipulating resources such as human attention, money, or network access. “AI can manipulate human attention through fraud, lying, and emotional abuse, prompting self-destructive actions or extracting resources from other sources,” Gibson warned.
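
Gibson did not point to a specific mechanism for closing this gap between intended and actual capabilities, but the underlying idea can be sketched in a few lines: gate every action an agent proposes behind a deny-by-default allowlist, so what it actually executes stays within the task it was designed for. The action names and the structure below are hypothetical, purely for illustration, not DupeWise tooling.

```python
# Illustrative sketch only (hypothetical action names): a deny-by-default gate
# that limits an agent to the actions it was designed for, so "can execute a
# hundred" collapses back to the handful that were actually intended.

INTENDED_ACTIONS = {"summarise_ticket", "draft_reply", "escalate_to_human"}


class ActionNotPermitted(Exception):
    """Raised when the agent proposes an action outside its designed task."""


def execute(action: str, payload: dict) -> None:
    if action not in INTENDED_ACTIONS:
        # Anything outside the intended task is refused and surfaced for review,
        # rather than silently allowed just because the model is capable of it.
        raise ActionNotPermitted(f"Agent proposed unapproved action: {action}")
    print(f"Executing {action} with {payload}")


# Example: the model proposes a faster but unapproved path.
try:
    execute("charge_customer_card", {"amount": 49.99})
except ActionNotPermitted as err:
    print(err)
```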

Preventing AI from engaging in unethical or illegal activities

Preventing AI from engaging in unethical or illegal activities is a complex task. Gibson suggested a proactive approach: “Organisations should gather a comprehensive set of all possible human criminal behaviours and compare them against allowed AI behaviours.” This comparison can help identify potential crimes the AI might commit. Additionally, changing the AI’s incentives to prioritise compliance with laws can train it to act ethically.
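
As a rough illustration of that comparison (the behaviour labels and the helper function below are hypothetical, not DupeWise tooling), the catalogue of criminal behaviours and the agent's permitted actions can be treated as labelled sets, with any overlap flagged for human review:

```python
# Minimal illustrative sketch: compare an AI agent's allowed action labels
# against a catalogue of criminal-behaviour labels and flag the overlap.
# All labels here are hypothetical.

# Catalogue of human criminal behaviours, keyed by a short label.
CRIMINAL_BEHAVIOURS = {
    "impersonation": "Posing as a person or organisation to obtain resources",
    "unauthorised_access": "Accessing systems or data without permission",
    "deceptive_marketing": "Making knowingly false claims to extract money",
}

# Actions the AI agent is actually able (and currently permitted) to perform.
ALLOWED_AI_ACTIONS = {
    "send_email",
    "impersonation",          # e.g. drafting messages "as" a named employee
    "query_customer_records",
    "unauthorised_access",    # e.g. reaching systems outside its remit
}


def flag_risky_actions(allowed: set[str], catalogue: dict[str, str]) -> dict[str, str]:
    """Return the allowed actions that also appear in the criminal catalogue."""
    return {label: catalogue[label] for label in allowed & catalogue.keys()}


if __name__ == "__main__":
    for label, description in flag_risky_actions(ALLOWED_AI_ACTIONS, CRIMINAL_BEHAVIOURS).items():
        print(f"REVIEW: '{label}' is permitted but matches a criminal behaviour: {description}")
```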

The legal landscape surrounding AI’s autonomous actions is still evolving. Gibson highlighted a notable case involving Air Canada, in which the airline’s customer-service chatbot invented a refund policy that the company was ultimately required to honour. “The Canadian legal system held Air Canada financially liable for the AI’s spontaneous policy,” he explained. This case underscores the need for clear legal frameworks to address AI-induced actions.

I found that many different technologies could commit the same crime under various names. Realising that different crimes are essentially the same across technologies was eye-opening. Today, AI accelerates my work, as I use several AI agents like employees to conduct research and identify crime forms. These agents have become integral to my work at DupeWise.

Craig Gibson, Principal Researcher of DupeWise AI Labs

Challenges and advancements in AI-driven investigations

Reflecting on his research journey, Gibson shared the challenges and advancements in AI-driven investigations. “I found that many different technologies could commit the same crime under various names. Realising that different crimes are essentially the same across technologies was eye-opening. Today, AI accelerates my work, as I use several AI agents like employees to conduct research and identify crime forms. These agents have become integral to my work at DupeWise AI,” he remarked.

As AI technology progresses, the need for robust safeguards and legal clarity becomes paramount. Gibson’s insights highlight the importance of anticipating AI behaviours, ensuring ethical compliance, and adapting legal frameworks to address the unique challenges posed by autonomous AI activities.

About DupeWise AI

As 5G and 6G technologies become the backbone of global communication infrastructures, the integration of AI brings both enhancements and new vulnerabilities. DupeWise AI specifically addresses the emergent criminality of AI behaviours at scale, which poses significant risks to advanced telecommunications networks. With staff holding certifications in fraud, counterterrorism, policing, privacy, supply chain, and logistics from entities such as the United Nations and the Association of Certified Fraud Examiners, DupeWise’s expertise is unparalleled. Its researchers have published jointly with organisations including United Nations Artificial Intelligence and Robotics, the EUROPOL Cyber Crime Centre (EC3), and cybersecurity companies such as Trend Micro, and have presented their findings at venues including INTERPOL, EUROPOL, and the United Nations Counter-Terrorism Centre (UNCCT). DupeWise’s research has been featured in media outlets such as ISMG, the Max Planck Institute, the United Nations ITU, and Xinhua News, with topics translated into five languages, underscoring its global impact and recognition in AI and cybersecurity.


Chloe Chen

Chloe Chen is a junior writer at BTW Media. She graduated from the London School of Economics and Political Science (LSE) and has worked in the finance and fintech industries. Send tips to c.chen@btw.media.
