While Artificial Intelligence (AI) has undoubtedly made waves in various industries, its widespread adoption continues to face formidable obstacles.
Darko Matovski, the CEO of CausaLens, offered a pointed perspective on the challenges surrounding AI adoption and the value at stake. He drew attention to a striking statistic: the vast majority of AI products developed never see the light of day, with roughly 85% of AI projects remaining confined to the lab. Darko attributes this trend chiefly to a single factor.
According to Darko, the lack of trust in algorithms stands as the primary stumbling block. He emphasized that the conventional method of feeding data into a “black box” and obtaining results without comprehending the decision-making process impedes the widespread acceptance of AI in today’s society.
Using causal AI to drive adoption
Darko argues that to drive adoption, AI systems must be able to explain their decisions and actions, even in situations where no historical data is available.
During the Annual Meeting of the New Champions (AMNC23) conference, he emphasized the need to explore the field of causal AI.
Causal AI, also referred to as causal inference or causal reasoning in artificial intelligence, delves into understanding and analyzing cause-and-effect relationships within complex systems. It involves identifying the factors that contribute to specific outcomes or events.
While traditional AI systems primarily focus on correlation, identifying statistical relationships between variables without establishing causality, causal AI aims to go beyond correlation.
Its objective is to uncover causal relationships, enabling a deeper comprehension of how different variables interact and influence one another.
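The gap between correlation and causation can be illustrated with a toy simulation. The sketch below is purely illustrative (not CausaLens's method): a hypothetical confounder `Z` drives both `X` and `Y`, so the two variables are strongly correlated even though `X` has no causal effect on `Y`. Intervening on `X`, i.e. setting it independently of `Z`, makes the association vanish, which is the kind of distinction causal reasoning is designed to capture.

```python
import random

random.seed(0)

# Toy data: a confounder Z drives both X and Y;
# X has no direct causal effect on Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# X and Y are strongly correlated (about 0.5 here)...
print(f"corr(X, Y)     = {corr(x, y):.2f}")

# ...but after intervening on X (forcing it independent of Z),
# the association disappears: the link was driven entirely by Z.
x_do = [random.gauss(0, 1) for _ in range(n)]
print(f"corr(do(X), Y) = {corr(x_do, y):.2f}")
```

A purely correlational model would treat `X` as predictive of `Y` and fail the moment the data-generating process shifts, which mirrors the pandemic example discussed later in the article.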
Implications for EU AI regulation
He commended the EU AI regulatory framework released in 2021 for its recognition of the significance of comprehending cause and effect in ensuring the practical application of models.
He stressed that decisions affecting human well-being require an understanding of why AI systems make specific choices, as well as the capability to handle novel data points.
As an illustration, he pointed to the pandemic, highlighting how traditional models faltered when confronted with unprecedented circumstances.