Despite the potential AI promises its users, building trust between the organizations that create these tools and the users expected to adopt them remains a challenge.
According to a 2019 report by Dentsu Aegis Network, misuse of personal data remains the top reason for consumer distrust of tech companies.
Speaking on how to address distrust in artificial intelligence at the Annual Meeting of the New Champions (AMNC23), Joanna Bryson, Professor of Ethics and Technology at the Hertie School, said there is a need for transparency and access to information, recognizing that people find out about things in various ways.
She noted that one way to implement transparency is through the code behind AI products released to the public.
The professor, who is also a programmer, said it is important to have an “open window” that allows people to understand the implications of the code behind the products they use.
However, she hinted that not everything should be open source. “That doesn’t mean you have to open source everything or let people see all your code,” she said.
“We can still have competitive secrets and advantages, but we need to at least know what there is to know and what you need to do to be able to find out about that.”
She added that tech firms should be open to having their products audited, much as banks are. Auditing breeds transparency, which in turn improves users' trust in AI products.
Collaboration between government and AI firms
Joanna also said that to build trust, there is a need to foster collaboration between government and AI firms. “We need to be talking about the people who are building AI and the people who are regulating it.”
It is important for governments and AI firms, working for the benefit of the people, to help users understand the value of AI and the actions taken on their behalf.
Speaking further on this, she pointed out that during the pandemic, populism decreased in many countries as people recognized the government’s role in addressing crises. She believes the same approach can be applied to AI to build trust.
Minimize political polarization
The last step she mentioned for building trust in AI is for political systems to minimize political polarization. If the majority of the society where an AI tool is to be deployed hates the government, “then they aren’t going to trust you.”
She also encouraged viewing the age of AI as an opportunity for knowledge, awareness, and transparency, not only in technology but also within culture.
Utilize the knowledge of experts in real time
Darko Matovski, CEO of CausaLens, added that it is important to draw on the knowledge and expertise of professionals in the various fields where AI will be used, because he believes individuals tend to trust domain experts more than governments or regulators.
Therefore, by allowing experts in a particular field to contribute their knowledge directly to the algorithm, users can have more trust in what is being built. He noted that, in the past, AI systems have relied primarily on historical data stored in databases while neglecting the expertise of professionals.
The need for education and enlightenment
Another important aspect of building trust in the AI industry is educating users on what AI is and how to engage with the products released into the market. Liu Jiren, Chairman of Neusoft Corporation, made this point, saying that his company does not see itself as only a tech company.
He noted that Neusoft has become “a kind of an educator” to help users understand the products it is creating.