Trends
Discussion on whether AI should be regulated by governments
Artificial intelligence (AI) is drastically altering the way social interactions and transactions are organised in today's society, and AI systems are playing an increasingly significant role in making morally complex decisions on society's behalf.

Headline
AI systems are increasingly making morally complex decisions for society, prompting debate over whether governments should regulate them.
Context
Artificial intelligence (AI) is drastically altering the way social interactions and transactions are organised in today's society. AI systems, and the algorithms that underpin them, play an increasingly significant role in making morally complex decisions on society's behalf. Examples of these systems include clinical decision support systems that diagnose patients, policing systems that forecast the likelihood of criminal activity, and filtering algorithms that classify content and serve users personalised recommendations. AI differs from other technologies in that it can mimic or surpass human intelligence in complex problem-solving: many cognitive tasks that humans typically perform can now be replaced, and outperformed, by machines. AI governance and regulation are therefore important for understanding and controlling the risk presented by AI development and adoption, and will eventually help build a consensus on the acceptable level of risk for the use of machine learning technologies in society and the enterprise.
Evidence
Pending intelligence enrichment.
Analysis
However, governing the development of AI is very difficult: there is no centralised regulation or risk management framework for developers or adopters to refer to, and risk is hard to assess because it changes with the context in which a system is used. Regulation is needed to apply and adapt existing digital laws to AI technology. This means protecting the privacy of users and their data: AI companies should invest in strong cyber-security capabilities when handling data-heavy algorithms, and forgo some revenue, since user data should not be sold to third parties, a principle American companies seem to wilfully misunderstand in the absence of regulation. More philosophically, regulation can help foster trust, transparency, and accountability among the users, developers, and stakeholders of generative AI. If all actors disclose the source, purpose, and limitations of AI outputs, people can make better choices and trust the choices of others; the fabric of society depends on this. Beyond these basics, regulation must protect populations at large from AI-related safety risks, of which there are many.
Key Points
- As AI systems develop and grow in complexity, their risks and their interconnectivity with other smart devices and systems will also increase, necessitating the creation of specific governance mechanisms.
- There is an ongoing debate about whether governments should regulate AI: some argue regulation is necessary because of AI's risks, while others warn that regulation itself carries risks.
- AI governance will need to be adaptive and collaborative, lest it fail to keep up with AI's latest developments.
Actions
Pending intelligence enrichment.
