AI audits: necessary to ensure fairness for all parties

● Without audits, AI can harm many parties.

● AI can not only imitate others' work but also harm society.

● Rules for AI audits are important to assure safety and fairness for all parties.


The use of artificial intelligence (AI) has become an integral part of human life. Yet precisely because AI is so practical, its use often goes too far. It is therefore crucial to audit the internal workings of AI systems as a matter of transparency, accountability, and technological safety.

AI can produce biased output and carries a number of serious risks to basic rights, health, and even human safety.

In the business sector, for example, AI makes it remarkably easy for users to create advertising concepts from text prompts alone. However, AI-generated content often mimics other people's work and lacks character and originality.

In certain cases, such as credit scoring, AI output can be biased, potentially discriminating against certain groups and preventing them from obtaining basic rights.
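To make this concrete, here is a minimal sketch of one check an auditor might run on credit-scoring output: the gap in approval rates between two groups, known as demographic parity difference. The data, group labels, and the 0.2 tolerance are illustrative assumptions, not values drawn from any regulation.

```python
# Hypothetical sketch of one bias check an audit might run on
# credit-scoring output: the gap in approval rates between two
# groups (demographic parity difference). All data and thresholds
# below are illustrative assumptions.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of applicants approved."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Illustrative decisions: True = loan approved, False = rejected.
group_a = [True, True, False, True, True, False, True, True]
group_b = [False, True, False, False, True, False, False, True]

gap = demographic_parity_gap(group_a, group_b)
print(f"Approval-rate gap: {gap:.2f}")

# An auditor might flag the model if the gap exceeds an agreed
# tolerance (0.2 here is an assumed example, not a legal norm).
if gap > 0.2:
    print("Flag: potential disparate impact, review required.")
```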

Another risk is system failure, which can lead to accidents. In the transportation sector, cases of autonomous cars crashing while driving in automatic mode show how AI system failures can be fatal.

Furthermore, certain subsets of AI, such as large language models (LLMs), are also susceptible to hallucinations: a condition in which the AI produces inaccurate, even accusatory, information. This has the potential to fuel the spread of misinformation, especially among people with low digital literacy.

The urgency of AI audits and the rules of the game

These risks highlight the urgent need for mechanisms to ensure AI operates transparently, accountably, and safely. One mechanism currently gaining global adoption is the auditing of AI systems' internal workings.

Audits of AI's internal workings examine the system's various components: the data used as input, the algorithms, and the resulting output.

Ideally, audits should cover the entire AI lifecycle, from design to public use, because the risks described above can arise at any stage of AI development. For example, bias can stem from unrepresentative data, discriminatory algorithm design, or other factors.
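As an illustration of a design-stage check, the sketch below compares a training dataset's demographic make-up against population shares to flag unrepresentative data. The group names, counts, and 5% tolerance are hypothetical.

```python
# Hypothetical sketch of a design-stage audit check: comparing the
# demographic make-up of a training dataset against population
# shares to spot unrepresentative data. All figures are assumed
# for illustration.

TOLERANCE = 0.05  # assumed acceptable deviation per group

population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
dataset_counts = {"group_a": 720, "group_b": 230, "group_c": 50}

total = sum(dataset_counts.values())
for group, expected in population_share.items():
    observed = dataset_counts[group] / total
    deviation = abs(observed - expected)
    status = "OK" if deviation <= TOLERANCE else "FLAG: skewed representation"
    print(f"{group}: dataset {observed:.2%} vs population {expected:.2%} -> {status}")
```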

Unfortunately, to date, Indonesia does not have regulations that specifically require AI audits.

The ITE Law, as the legal basis for the operation of electronic systems, does contain several provisions that could serve as entry points for AI audits. However, these provisions are very general, applying to electronic systems as a whole.

Although AI is classified as an electronic system, it has special characteristics and technological complexities of its own, so the ITE Law's provisions cannot specifically cover it.

Similarly, the Consumer Protection Law regulates consumers’ rights to secure and transparent information. This regulation can serve as a foundation for AI audits, but it remains insufficient without operational provisions for their implementation.

The same applies to the Personal Data Protection Law, which regulates the principles and obligations of transparency in the processing of personal data. Because personal data can be input into AI systems, this regulation is relevant as a foundation.

As with the Consumer Protection Law, however, the principle and obligation of transparency are not strong enough to serve as a basis for AI audits without operational provisions that specifically require them.

Potential conflict with IPR protection

Audits are crucial for ensuring fair and safe AI. However, the process can also conflict with the interests of AI companies, particularly regarding the protection of intellectual property rights (IPR).

This is because AI and its various components can be subject to intellectual property rights. Datasets and algorithms, for example, are often trade secrets that AI providers strictly guard to maintain their competitiveness. It is therefore not surprising that businesses are reluctant to grant access to information they consider vital.

In such situations, a conflict arises between the public interest in AI transparency, accountability, and safety and the private interest in protecting IPR.

Although legal doctrine generally holds that public interest prevails over private interest, the rights of business actors must still be respected. Therefore, a balance must be found between the two interests.

Independent third-party audits and a register of audit results

Following global practice, such as the European Union's EU AI Act and EU Digital Services Act and China's Algorithmic Recommendations Provisions, Indonesia needs to regulate AI audit obligations more specifically.

Not all AI requires auditing. Only high-risk AI, such as systems that threaten fundamental human rights, health, or the safety of human life, should be subject to audit. It is therefore necessary to classify AI systems by risk, from high to low, to ensure proportionate oversight.
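A minimal sketch of how such a risk classification could map to audit obligations is shown below, loosely inspired by the EU AI Act's tiered approach. The tiers, example use cases, and obligations are illustrative assumptions, not provisions of any actual law.

```python
# Hypothetical sketch of risk-based classification mapped to audit
# obligations. Tiers, examples, and obligations are assumptions
# for illustration, loosely inspired by the EU AI Act's tiers.

from enum import Enum

class RiskTier(Enum):
    HIGH = "high"        # e.g. credit scoring, medical triage, autonomous driving
    LIMITED = "limited"  # e.g. chatbots requiring transparency notices
    MINIMAL = "minimal"  # e.g. spam filters

AUDIT_OBLIGATIONS = {
    RiskTier.HIGH: "mandatory independent third-party audit before and during deployment",
    RiskTier.LIMITED: "self-assessment and transparency disclosure",
    RiskTier.MINIMAL: "no audit obligation",
}

def obligation_for(tier: RiskTier) -> str:
    """Look up the audit obligation for a given risk tier."""
    return AUDIT_OBLIGATIONS[tier]

print(obligation_for(RiskTier.HIGH))
```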

The government should not act as the auditor itself: delegating audits ensures greater independence, and government capacity and resources are currently limited in any case. Audits should be conducted by certified, independent third parties.

To protect the confidentiality of AI providers' businesses, the regulation must also impose a duty of confidentiality, the obligation to keep information obtained in a professional relationship confidential, on the parties involved in the audit.

Equally important, the government must provide a registration system for audit results, such as the algorithm registry implemented in China, to ensure transparency.

Not all information related to audit results needs to be made public; confidential information must remain protected.
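One way to operationalize that split is sketched below: a registry record whose public view exposes only summary fields while withholding trade-secret detail. The field names and redaction rule are hypothetical, not drawn from China's registry or any statute.

```python
# Hypothetical sketch of an audit-results registry record that
# separates a public summary from confidential detail. Field names
# and the redaction rule are illustrative assumptions.

from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    system_name: str
    risk_tier: str
    auditor: str
    outcome: str            # e.g. "pass", "pass with conditions", "fail"
    # Confidential fields, visible to the regulator only:
    dataset_details: str
    algorithm_details: str

PUBLIC_FIELDS = {"system_name", "risk_tier", "auditor", "outcome"}

def public_view(record: AuditRecord) -> dict:
    """Return only the fields safe to publish in the registry."""
    return {k: v for k, v in asdict(record).items() if k in PUBLIC_FIELDS}

record = AuditRecord(
    system_name="ExampleCredit v2",
    risk_tier="high",
    auditor="Certified Auditor X",
    outcome="pass with conditions",
    dataset_details="<trade secret>",
    algorithm_details="<trade secret>",
)
print(public_view(record))
```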

In this way, the public interest in AI safety and transparency can be balanced with business interests, without sacrificing either.

Author Bio: M. Irfan Dwi Putra is a Junior Researcher at the Center for Digital Society (CfDS) at Gadjah Mada University.
