• As AI systems develop and increase in complexity, their risks and their interconnectivity with other smart devices and systems will also increase, necessitating the creation of specific governance mechanisms.
  • There is an ongoing debate about whether AI should be regulated by governments: some argue regulation is necessary because of the risks AI poses, while others believe regulation itself carries risks.
  • AI governance will need to be adaptive and collaborative, lest it become unable to keep up with AI’s latest developments.

The way that social interactions and transactions are organised in today’s society is being drastically altered by artificial intelligence (AI). AI systems and the algorithms that underpin them are becoming increasingly significant in making morally complex decisions for society. Examples include clinical decision support systems that diagnose patients, policing systems that forecast the likelihood of criminal activity, and filtering algorithms that classify and offer users personalised content. AI differs from other technologies in that it can mimic or surpass human intelligence in complex problem-solving: many cognitive tasks typically performed by humans can now be carried out, and outperformed, by machines.

Why is AI governance important?

AI governance and regulation are important for understanding and controlling the level of risk presented by AI development and adoption. Eventually, it will also help to develop a consensus on the level of acceptable risk for the use of machine learning technologies in society and the enterprise.

However, governing the development of AI is very difficult because not only is there no centralised regulation or risk management framework for developers or adopters to refer to, but it is also challenging to assess risk when this changes depending on the context the system is used within.

Why does AI need to be regulated?

Ensuring the ethical use of AI

Regulation is needed to apply and adapt existing digital laws to AI technology. This means protecting the privacy of users (and their data). AI companies should invest in strong cybersecurity capabilities when dealing with data-heavy algorithms… and forgo some revenue, as user data should not be sold to third parties. This is a concept American companies seem to inherently and wilfully misunderstand in the absence of regulation.

More philosophically, regulation can help foster trust, transparency, and accountability among users, developers, and stakeholders of generative AI. By having all actors disclose the source, purpose, and limitations of AI-generated outputs, we will be able to make better choices… and trust the choices of others. The fabric of society needs this.

Safeguarding human rights and safety

Beyond the “basics”, regulation needs to protect populations at large from AI-related safety risks, of which there are many.

Most will be human-related risks. Malicious actors can use generative AI to spread misinformation or create deepfakes. This is very easy to do, and companies seem unable to put a stop to it themselves, mostly because they are unwilling (not unable) to tag AI-generated content. Our next elections may depend on regulations being put in place… while our teenage daughters may ask why we didn’t act sooner.

All this is without even going into the topic of AI-driven warfare and autonomous weapons, the creation of which must be avoided at all costs. This scenario is, however, so catastrophic that we often use it to hide the many other problems with AI. Without strong AI regulations tackling the above, society may die a death of a thousand cuts rather than one singular weaponised blow. This is why we must ensure that companies agree to create systems that align with human values and morals.


Mitigating social and economic impact

Rules are needed to fairly compensate people whose data is used to train algorithms that will bring so much wealth to so few. Without this, we are only repeating the mistakes of the past and making a deep economic chasm deeper. This will be difficult: there are few legal precedents to inform what is happening in the space today.

It will be important to continuously safeguard the world’s economies against AI-driven monopolies. Network effects mean that catching up to an internet giant is almost impossible today, for lack of data or compute. Antitrust laws have been left largely untouched for decades, and that neglect can no longer go on. Regulations will not make us less competitive in this case.

Why should AI not be regulated?

Stifling innovation and progress

The case could be made that regulations will slow down AI advancements and breakthroughs, and that not allowing companies to test and learn will make them less competitive internationally. However, we have yet to see definitive proof that this is true. Even if it were, the question would remain: is unbridled innovation right for society as a whole? Profits are not everything. Maybe the EU will fall behind China and the US when it comes to creating new unicorns and billionaires. Is that so bad, as long as we still have social safety nets, free healthcare, parental leave and six weeks of holidays a year? If having all this, thanks to regulations, means a multi-millionaire cannot become a billionaire, so be it.


Complex and challenging implementation

Regulations relating to world-changing technologies are often too vague or broad to be applicable, which can make them difficult to implement and enforce across different jurisdictions. This is particularly true given the lack of clear standards in the field.

This makes the need to balance international standards and national sovereignty a particularly touchy subject. AI operates across borders, and its regulation requires international cooperation and coordination. This can be complex, given varying legal frameworks and cultural differences.

Potential for overregulation and unintended consequences

Furthermore, we know that regulation often fails to adapt to the fast-paced nature of technology. AI is a rapidly evolving field, with new techniques and applications emerging regularly. New challenges, risks and opportunities continuously arise, and regulators need to remain agile and flexible enough to deal with them. Keeping up with the advancements and regulating cutting-edge technologies can be challenging for governing bodies… but that has never stopped anyone, and the world still stands.

Moving forward, governments will need to collaborate to establish broad frameworks while promoting knowledge sharing and interdisciplinary collaboration. These frameworks will have to be adaptive and flexible in order to stay up to date with the most recent advancements in AI.