- The legal and regulatory framework for AI governance is central to ensuring that AI technologies are developed and applied in ways that are ethical, lawful and socially beneficial.
- The ethical and social implications of AI governance must be addressed so that the development and application of AI technologies respect human rights and social values.
- The international community is working to establish a common AI governance framework and standardisation system to promote the responsible and sustainable development of AI technologies.
AI governance is the management and regulation of the development, application and impact of artificial intelligence (AI) systems. It aims to address the ethical, legal, social and policy challenges raised by AI, ensuring that the technology meets ethical, legal and societal requirements, develops soundly and delivers the greatest possible benefit.
Also read: 3 key tech governance organisations, and what they do
Legal and regulatory framework
The legal and regulatory frameworks for AI governance cover several aspects, including data privacy protection, accountability and transparency, fairness and non-discrimination, and regulatory and review mechanisms.
The establishment and implementation of these frameworks are critical to the healthy development and adoption of AI technologies: they protect the rights of individuals, safeguard the public interest, and regulate how AI systems are developed, deployed and used.
Data privacy is an important aspect of the legal and regulatory framework for AI governance. In many countries and regions, data protection laws and regulations are in place, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the US.
These laws regulate how personal data is collected, processed and used, and require organisations and businesses to protect the privacy of users and provide transparent data handling policies and practices.
AI governance frameworks need to clarify the accountability and transparency of AI systems. This includes establishing the responsibilities of developers and users, making them accountable for the behaviour and outcomes of AI systems, and providing transparent decision-making processes and operational mechanisms.
AI governance frameworks also need to guarantee the fairness of AI systems and prohibit discrimination. This means that AI systems must not be designed or applied in ways that produce discriminatory results based on factors such as race, gender, age or sexual orientation.
Regulatory and review mechanisms for AI governance include the establishment of specialised regulatory bodies or departments responsible for overseeing and reviewing the development and use of AI systems, and imposing penalties and sanctions for non-compliance.
Also read: What is internet governance?
Ethical and social implications
The ethical and social implications of AI governance cover a wide range of aspects, including issues of fairness and discrimination, transparency and interpretability, privacy protection and individual rights, as well as employment and social structural change.
The resolution of these issues requires the joint efforts of governments, enterprises, academia and all sectors of society to formulate appropriate policies and measures to guide the development and application of AI technologies and ensure that they are ethical and respect human rights and social values.
As the decision-making process of AI systems is usually based on the analysis of large amounts of data and pattern recognition, AI systems may produce unfair or discriminatory results if these data reflect real-world biases and inequalities. Therefore, there is a need to ensure that AI systems are designed and trained on data that is fair and diverse to avoid discriminatory outcomes.
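The idea of checking a system for discriminatory outcomes can be made concrete with a fairness metric. The sketch below computes demographic parity, one common metric: the rate of positive decisions should be similar across groups. The loan-decision data, group labels and the `demographic_parity_gap` helper are all illustrative assumptions, not part of any real regulatory test.

```python
# A minimal sketch of a demographic parity check: compare the rate of
# positive decisions across groups. Data and labels are purely illustrative.

def positive_rate(decisions, groups, target_group):
    """Share of positive decisions (1) received by members of target_group."""
    pairs = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(pairs) / len(pairs)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy loan decisions: 1 = approved, 0 = denied, with an applicant group label.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups)
# Group "a" is approved 75% of the time, group "b" only 25%: gap = 0.5,
# the kind of disparity a governance review might flag for investigation.
```

Real audits use richer metrics (equalised odds, calibration) and statistical tests, but the underlying question is the same: do outcomes differ systematically by group?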
The decision-making processes of AI systems are often complex black-box models that lack transparency and interpretability. This means that users can neither understand how an AI system works and reaches its decisions, nor explain the results those decisions produce.
To improve the transparency and interpretability of AI systems, measures need to be taken to make the decision-making process of AI systems explainable and understandable so that users can understand and trust the decisions of AI systems.
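One widely used model-inspection technique is permutation importance: shuffle one input feature and measure how much the model's accuracy drops, revealing which inputs the black box actually relies on. The tiny "model" and dataset below are hypothetical stand-ins for illustration only.

```python
import random

# A minimal sketch of permutation importance: shuffle one feature column
# and measure the resulting drop in accuracy. Model and data are toy examples.

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == label for x, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop when the feature at feature_idx is randomly shuffled."""
    rng = random.Random(seed)
    column = [x[feature_idx] for x in X]
    rng.shuffle(column)
    X_shuffled = [list(x) for x in X]
    for row, value in zip(X_shuffled, column):
        row[feature_idx] = value
    return accuracy(model, X, y) - accuracy(model, X_shuffled, y)

# Toy classifier: predicts 1 when the first feature exceeds 0.5,
# and ignores the second feature entirely.
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.8], [0.1, 0.2]]
y = [1, 1, 0, 0]

imp_first = permutation_importance(model, X, y, 0)
imp_second = permutation_importance(model, X, y, 1)  # always 0: feature unused
```

Shuffling the ignored second feature never changes accuracy, so its importance is zero; explanations like this let users and auditors see which inputs drive a decision without opening the model itself.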
AI systems often require large amounts of data for training and optimisation, so the collection, processing and use of personal data must comply with privacy protection laws and regulations and respect individuals' rights, including their right to make their own choices. Measures also need to be taken to protect the security of personal data and prevent data leakage and misuse.
The widespread use of AI technologies may have a profound impact on employment and social structure. Certain industries and occupations may face automation and substitution, leading to job losses and changes in occupational structure.
Measures are therefore needed to address these changes, such as providing skills training and job-transfer support to facilitate the reallocation of human resources and job creation.
International cooperation and standardisation
The increasingly cross-border and global nature of AI technology requires the international community to work together to establish an international cooperation mechanism and standardisation system to jointly address the challenges and risks in the development of AI technology.
To promote the development and application of AI technology on a global scale, international standards and guidelines need to be formulated to establish unified technical specifications and industry standards.
This includes standards on data privacy protection, transparency, interpretability and accountability, to ensure that the development and application of AI technologies are in line with globally shared principles and values.
ISO (the International Organization for Standardization) has done extensive work in the area of AI governance, developing a series of international standards and guidelines related to AI. For example, the ISO/IEC JTC 1/SC 42 committee is responsible for developing AI-related international standards, including standards for the quality assessment and testing of AI systems, data privacy protection, transparency and interpretability, and accountability.
International cooperation also includes promoting information sharing and experience exchange and strengthening cooperation and collaboration between international organisations and multinational enterprises. This can be done by organising international conferences, seminars, and workshops, establishing international alliances and cooperation mechanisms, sharing best practices and successful experiences, and enhancing the understanding of and response to challenges and risks in the development of AI technologies.
The Global AI Ethics Alliance is a coalition of international organisations and multinational corporations dedicated to promoting AI ethics and social responsibility worldwide. The Alliance promotes the responsible and sustainable development of AI technologies through the development of AI ethical codes and guidelines, cross-border dialogue and cooperation.