The AI chatbot hallucination problem is huge; here is how tech companies are facing the challenge
One of the fundamental challenges with large language models (LLMs) has been AI hallucinations, which are proving to be a major bottleneck in their adoption. Here is how tech companies are tackling the problem.
There is no doubt that generative artificial intelligence (AI) has proven itself to be a revolutionary technology. But we are still scratching the surface of what it is capable of. Like any technology, it is bound to become more powerful and impactful with further research and integration into existing systems. However, one of the major challenges facing both AI researchers and the tech companies building AI tools is the problem of AI hallucination, which is slowing adoption and eroding the trust users place in these tools.
What is AI hallucination?
AI hallucinations are incidents in which an AI chatbot gives an incorrect or nonsensical response to a question. Sometimes the hallucinations are blatant: recently, for example, Google Bard and Microsoft's Bing AI falsely claimed that there had been a ceasefire in Israel during its ongoing conflict with Hamas. At other times, they can be subtle enough that users without expert-level knowledge end up believing them.
The root cause of AI hallucinations
AI hallucinations can occur in large language models (LLMs) for various reasons. One of the primary culprits appears to be the huge amounts of unfiltered data fed to AI models during training. Since this data is sourced from fiction novels, unreliable websites, and social media, it is bound to carry biased and incorrect information. Processing such information can lead an AI chatbot to treat it as the truth.
Another issue lies in how the AI model processes and categorizes data in response to a prompt, which often comes from users with little knowledge of AI. Poor-quality prompts can generate poor-quality responses if the model is not built to handle them correctly.
What are companies doing to solve the AI hallucination bottleneck?
Whenever a new technology emerges, it comes with its own set of problems, and in that respect AI is no different. What has set it apart is the initial speed of deployment. Usually, technologies are not deployed until all the loose screws have been tightened. However, given the huge popularity of AI ever since OpenAI launched ChatGPT in November 2022, companies did not want to miss out on the hype and rushed to get their products to market as soon as possible.
But now, many companies are recognizing this mistake and are working on more trustworthy generative AI chatbots. Microsoft is one of them. In September, it announced its Phi-1.5 model, which has been trained on “textbook quality” data instead of traditional web data, to ensure the training material is free of inaccuracies.
Another solution has been put forth by an Oslo-based startup, iris.ai. The company's CTO, Victor Botev, spoke with TheNextWeb recently and suggested training models on code as another way to address AI hallucination. Botev believes that since human-written text is prone to biases, code is a better alternative, as it is based on logic and leaves very little room for interpretation. This can give LLMs a structured way to combat inaccuracies.
It is still early days, and as researchers and tech companies become more familiar with AI tools, more effective solutions will emerge to make AI accurate and more trustworthy to the general public.