A layered model for AI governance

  • Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
  • AI governance is the legal framework for ensuring AI and machine learning technologies are researched and developed to help humanity adopt and use these systems in ethical and responsible ways. 
  • The size, diversity, intricacy, and level of technological independence of AI systems necessitate reevaluating laws, regulations, and policies. We employ an analytical model consisting of three layers to represent the complexity of AI governance.

Artificial intelligence, or AI, is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. AI, from a technical perspective, is not a single technology, but rather a set of techniques and sub-disciplines ranging from areas such as speech recognition and computer vision to attention and memory, to name just a few.

From a phenomenological perspective, however, the term AI is often used as an umbrella term for the degree of autonomy that advanced health diagnostic systems, next-generation digital tutors, self-driving cars, and other AI-based applications share. Such applications, in turn, often impact human behaviour and evolve dynamically in ways that are at times unforeseen by the systems’ designers.

What is AI governance?

Artificial intelligence (AI) governance refers to the guardrails that ensure AI tools and systems are and remain safe and ethical. It establishes the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.

AI governance encompasses oversight mechanisms that address risks like bias, privacy infringement and misuse while fostering innovation and trust. An ethics-centred approach to AI governance requires the involvement of a wide range of stakeholders, including AI developers, users, policymakers and ethicists, ensuring that AI-related systems are developed and used in ways that align with society’s values.

Governance aims to establish the necessary oversight to align AI behaviours with ethical standards and societal expectations and to safeguard against potential adverse impacts.

AI governance is essential for reaching a state of compliance, trust and efficiency in developing and applying AI technologies. With AI’s increasing integration into organisational and governmental operations, its potential for negative impact has become more visible. High-profile missteps like the Tay chatbot incident, where a Microsoft AI chatbot learned toxic behaviour from public interactions on social media, and the COMPAS software’s biased risk assessments used in criminal sentencing have highlighted the need for sound governance to prevent harm and maintain public trust.

The layered model

One of the key tools for managing complex systems is modularity. By distinguishing between tasks that require extensive interdependency and those that do not, modularity seeks to minimise the number of interdependencies that need to be analysed. A specific type of modularity known as layering is characterised by the arrangement of various system components into parallel hierarchies.

A four-layer model has been used to illustrate the nature of cyberspace: first, the people who participate in the cyber-experience; second, the information that is transmitted, stored, and transformed in cyberspace; third, the services, consisting of logical building blocks; and fourth, the physical foundations that uphold the logical elements.

We attempt to capture the complex nature of AI governance by using an analytical model with three layers.


1. The technical layer

The algorithms and data that form the basis of the AI governance ecosystem are found in the technical layer. Whether they are software-based (like criminal justice or medical diagnostic systems, or intelligent personal assistants) or physical (like commercial robots and self-driving cars), AI and autonomous systems depend on data and algorithms. A set of guidelines for responsible algorithms was created as part of a Dagstuhl Seminar on “Data, Responsibly,” along with a suggested social impact statement. The suggested guiding principles for socially responsible algorithms are accountability, explainability, accuracy, suitability, and fairness. Data governance, or the process of collecting, using, and managing data by AI algorithms, should adhere to principles that uphold equity and prevent discrimination based on race, colour, national origin, religion, sex, gender, sexual orientation, or disability.
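To make the fairness principle concrete at the technical layer, the sketch below shows one common way a disparity check can be run over an algorithm’s decisions: comparing per-group selection rates and applying the “four-fifths rule” of thumb. The data, function names, and threshold are illustrative assumptions, not part of any framework cited above.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group favourable-decision rates.

    decisions: iterable of (group, outcome) pairs, where outcome is
    1 (favourable) or 0 (unfavourable).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    The 'four-fifths rule' of thumb flags ratios below 0.8 as
    warranting further review for potential discrimination.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (group label, approved?)
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)   # per-group approval rates
print(ratio)   # flag for human review if below 0.8
```

A check like this is only a first signal; a low ratio does not by itself prove discrimination, and a passing ratio does not rule it out, which is why the accountability and explainability principles above sit alongside fairness.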


2. The ethical layer

Above the technical level, we could discuss broad ethical issues that concern all kinds of AI systems and applications. Human rights principles are a significant source for the development of such ethical principles. The IEEE general principles for AI and autonomous systems are another illustration of how AI ethics norms are starting to take shape. Algorithm-driven actions can be evaluated using moral standards and precepts. The ethical principle of equal or fair treatment would be broken, for example, if an AI application studied the data of an insurance company and charged a particular group of people higher premiums because of factors like gender or age.
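The insurance example above can be sketched as a simple audit: compare the average premium a pricing model quotes to two groups defined by a protected attribute. The premium figures, group labels, and function name here are hypothetical, chosen only to illustrate how such a gap could be surfaced for ethical review.

```python
from statistics import mean

def premium_gap(premiums_by_group):
    """Absolute difference in mean quoted premium between two groups.

    premiums_by_group: dict mapping exactly two group labels to lists
    of quoted premiums.
    """
    (g1, p1), (g2, p2) = premiums_by_group.items()
    return abs(mean(p1) - mean(p2))

# Hypothetical annual premiums quoted by a pricing model,
# split by an age-based protected attribute.
premiums = {
    "under_40": [520, 540, 510, 530],
    "over_40":  [680, 700, 690, 670],
}
gap = premium_gap(premiums)
print(gap)  # a large gap prompts review of which factors drove it
```

Whether such a gap violates the equal-treatment principle depends on whether it is driven by the protected attribute itself or by legitimate risk factors, which is exactly the kind of judgement the ethical layer is meant to govern.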

3. The social and legal layer

The process of establishing institutions and assigning roles for regulating AI and autonomous systems could be covered by the social and legal layer. For example, a policymaking body could have the authority to define AI, make exceptions that would let researchers do AI research in specific settings without being strictly liable for their actions, and set up a certification procedure for AI. The principles and standards that come from the technical and ethical layers, as well as more general national and international legal frameworks, such as those pertaining to human rights, can serve as a foundation for particular norms intended to regulate AI. To define appropriate behaviour for AI and autonomous systems, the layered model offers a framework for thinking about AI governance.

AI and algorithmic decision-making systems can have their governance structures implemented using a combination of multi-layered approaches. Here, we outline a few of these layers, keeping in mind that some would only be taken into account if the risk associated with specific AI applications was significant and verifiable. Governance procedures can be used on a national or international level and can range from government-based structures to market-oriented solutions.


Fiona Huang

Fiona Huang is an intern reporter at BTW Media dedicated to covering fintech. She graduated from the University of Southampton. Send tips to f.huang@btw.media.
