Reliance on AI does not get rid of bias

An expert said mitigating AI bias can start by regulating local data, a crucial step currently lacking in Malaysian AI.

WALA ABDUL MUIZ
01 May 2024 10:00am
Photo for illustration purposes only. - 123RF

SHAH ALAM - Despite the vast amounts of data available to Artificial Intelligence (AI) systems, their reliability is often compromised by inherent limitations, including biases in information retrieval that can affect users.

Malaysian Research Accelerator for Technology and Innovation (Mranti) General Manager Dr Afnizanfaizal Abdullah said mitigating AI bias can start by regulating local data, a crucial step currently lacking in Malaysian AI.

The technology and innovation expert said AI adoption in the country currently stands at only 20 per cent, but is expected to rise to 50 to 60 per cent within the next three years.

"We developed users' data from the United States and the United Kingdom for the AI that we use here, which is not fit for Malaysia.

"The specific data that we use has not been localised and may not specifically come from the place that they are going to deploy and developing this data is the first thing we need to do," he told Sinar Daily.

He said the data should be regulated by the government to ensure monitoring and prevent misuse, such as bias in information.

He said models under the AI Sandbox, a Mranti project launched in collaboration with the Science, Technology and Innovation Ministry (Mosti) and the Higher Education Ministry, will be trained with data compiled from the government.

He highlighted that Malaysia's technology sector is currently working on ethical AI frameworks to preserve ethical usage.

"This is the policy that we need to embark on," he said.

Startups under the AI Sandbox, such as WellAI and Fylix, also supported efforts to recognise AI bias.

WellAI co-founder Cheng Wai Kok said their AI product focused on medical screening for common diseases such as heart disease and diabetes.

He said they fit patient data to bell curves (normal distributions) and exclude extreme data points, so that outliers do not skew the screening results.
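
A minimal sketch of this kind of outlier exclusion, assuming a simple z-score cut-off on roughly bell-shaped data (the threshold and sample readings below are illustrative assumptions, not WellAI's actual pipeline):

import numpy as np

def exclude_outliers(values, z_threshold=2.0):
    # Keep only points within z_threshold standard deviations of the mean.
    # Assumes roughly bell-shaped data; the cut-off value is illustrative.
    values = np.asarray(values, dtype=float)
    z_scores = np.abs((values - values.mean()) / values.std())
    return values[z_scores < z_threshold]

# Hypothetical screening readings, with one extreme value excluded
readings = [92, 95, 88, 101, 97, 250]
print(exclude_outliers(readings))  # [ 92.  95.  88. 101.  97.]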

He also explained that bias can be introduced in medical AI when developers shape their data so that predictions favour their own products and services.

Fylix co-founder Dr Aznul Qalid Mohammad Sabri said they employed specialised AI development software to ensure their product remained unbiased.

This involved multiple stages, with the final one focusing on verifying whether the AI operated as expected.
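
One common way to check that a model operates as expected with respect to bias is to compare its outcomes across groups of users. A minimal sketch, assuming a demographic-parity style check (the group labels and sample predictions are illustrative assumptions, not Fylix's actual verification software):

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    # Share of positive (1) predictions for each group label.
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    # Largest difference in positive rates between any two groups;
    # a large gap suggests the model treats the groups differently.
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes for two groups of patients
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"parity gap: {parity_gap(preds, groups):.2f}")  # 0.50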

"We also keep our clients educated about AI and the possibility of bias by ensuring that we do not only talk to them about the good side of our product but also about its limitations," he said.

"Our constant effort in mitigating AI bias is through continuous research and development (R&D) in responsible university departments, considering that our product is a university-based effort," he said, highlighting the absence of human judgement as AI's limitation.

According to American business news outlet CNBC, OpenAI and Google are employing strategies such as pretraining AI models on large datasets and using human reviewers to fine-tune them.

These reviewers provide feedback on the models' outputs, helping to identify and mitigate biases.
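
A minimal sketch of how such reviewer feedback could be collected and used to flag outputs for further attention (the data structure and threshold are illustrative assumptions, not OpenAI's or Google's actual tooling):

from dataclasses import dataclass

@dataclass
class Review:
    prompt: str
    output: str
    biased: bool    # reviewer's judgement of the output
    note: str = ""  # optional explanation from the reviewer

def flag_biased_prompts(reviews, threshold=0.2):
    # Return prompts whose outputs reviewers judged biased, once the
    # overall biased rate exceeds an (illustrative) threshold.
    if not reviews:
        return []
    biased = [r for r in reviews if r.biased]
    if len(biased) / len(reviews) <= threshold:
        return []
    return [r.prompt for r in biased]

# Hypothetical reviewer judgements on two model outputs
reviews = [
    Review("Describe a typical engineer", "He is ...", True, "gendered"),
    Review("Summarise this article", "The article says ...", False),
]
print(flag_biased_prompts(reviews))  # ['Describe a typical engineer']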