• AI chatbots have evolved from simple text-based chatbots to versatile tools capable of multiple tasks.
  • A chatbot like ChatGPT can sometimes mislead the user with false or inconsistent information when it lacks enough knowledge about the prompt it was given.
  • These errors are called AI hallucinations, and they can be reduced with a few simple prompting tricks.

Artificial Intelligence (AI) is undoubtedly one of the most transformative innovations of the 21st century, revolutionising industries and reshaping the way we interact with technology. At the forefront of this revolution is OpenAI’s ChatGPT. Powered by the GPT-3.5 architecture, ChatGPT has a remarkable capacity to understand and generate human-like text. What started as a simple text-based chatbot is now a versatile tool for tasks ranging from natural language processing to creative writing.

AI model hallucinations

For all its apparent genius, ChatGPT’s knowledge only extends up to September 2021, and it can’t help with queries beyond that period because the chatbot is limited to its training data. In such situations, ChatGPT sometimes fools the user with false, irrelevant or even nonsensical answers. This is called “hallucination”: the AI model generates something that is not based on reality or logic.

So, does this make the AI chatbot unreliable? Only to a certain extent, and mostly when you rely on it for important decisions or information. Imagine if you asked it for financial tips and it suggested you invest in a Ponzi scheme, or if you asked it for historical facts and it made up events that never happened. OpenAI co-founder John Schulman says, “Our biggest concern was around factuality, because the model likes to fabricate things.”

But there are ways to improve accuracy, prevent or reduce these hallucinations, and get the most out of ChatGPT. These tips can be applied across chatbots, whether it’s ChatGPT, Bing Chat, Bard, or Claude. Read on to know more:

Avoid being vague, use direct language

One of the main causes of hallucination is ambiguity. When you use complex or vague prompts, the AI model may not understand what you want or what you mean. It may try to guess or fill in the gaps, resulting in inaccurate or irrelevant responses. To avoid this, you should use simple and direct language when you communicate with AI chatbots. Make sure your prompts are clear, concise, and easy to understand. Avoid using jargon, slang, idioms, or metaphors that may confuse the AI model.

For example, instead of asking an AI chatbot “What’s the best way to stay warm in winter?”, which could have many possible interpretations and answers, you could ask “What are some types of clothing that can keep me warm in winter?”, which is more specific and straightforward.

Give AI a specific role

AI has the tendency to make things up when it does not have a clear sense of purpose. It may try to imitate human behaviour or personality, which can lead to errors, such as trying to ‘impress’ you with claims that are not true or realistic. To prevent this, you should give the AI a specific role and tell it not to lie. A role defines what the AI model is supposed to do or be, such as a teacher, a friend, a doctor, or a journalist. It also sets some expectations and boundaries for the AI model’s behaviour and responses.

For example, if you want to ask an AI chatbot about history, you could say “You are a brilliant historian who knows everything about history and you never lie. What was the cause of World War 1?”. This way, you are telling the AI model what kind of knowledge and tone it should use, and what kind of answer it should give.
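If you interact with the model through OpenAI’s API rather than the chat window, the same idea maps onto the “system” message, which sets the model’s role before the conversation starts. Below is a minimal sketch in Python, assuming the official openai package (v1+) and an OPENAI_API_KEY in your environment; the model name and wording are only illustrative.

    # Minimal sketch: assigning a role via the system message.
    # Assumes the official openai Python package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            # The system message defines the role and the "don't lie" boundary.
            {"role": "system", "content": "You are a brilliant historian. "
             "If you are not sure about a fact, say so instead of guessing."},
            {"role": "user", "content": "What was the cause of World War 1?"},
        ],
    )
    print(response.choices[0].message.content)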

Give context

Another way to reduce ambiguity is to provide some context in your prompts. Context helps the AI model narrow down the possible outcomes and generate a more relevant and appropriate response. You can include information such as your location, preferences, goals, or background.

For example, instead of asking an AI chatbot “How can I learn a new language?”, which is a very broad and open-ended question, you could ask “How can I learn French in six months if I live in India and have no prior knowledge of French?”, which gives the AI model more details and constraints to work with.

Limit the possible outcomes

Another reason for hallucination is that ChatGPT, or any other chatbot, has too many options or possibilities to choose from. It may generate something random and unrelated to your prompt, or something contradictory and inconsistent with its previous responses. To avoid this, you should limit the possible outcomes by specifying the type of response you want. You can do this by using keywords, formats, examples, or categories that guide the AI model towards a certain direction or goal.

For example, if you want to ask an AI chatbot for a recipe, you could say “Give me a recipe for chocolate cake in bullet points”. This way, you are telling the AI model what kind of content and structure it should use for its response.
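If you build prompts programmatically, one way to apply this tip consistently is to attach the format constraint to every prompt before sending it. The small helper below is purely hypothetical, a sketch of the idea rather than part of any chatbot’s API.

    # Hypothetical helper (not part of any chatbot API): narrow the response space
    # by attaching an explicit output-format instruction to the prompt.
    def constrain(prompt: str, fmt: str = "a list of at most 7 short bullet points") -> str:
        """Return the prompt with a format constraint the model is asked to follow."""
        return f"{prompt}\nAnswer only as {fmt}. Do not add anything outside that format."

    print(constrain("Give me a recipe for chocolate cake"))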

Put in relevant data and sources unique to you

One of the best ways to prevent a chatbot from giving out misinformation is to include relevant data and sources unique to you in your prompts. Data and sources can include facts, statistics, evidence, personal information, experiences or references that support your prompt or question, making it more specific and unique. By doing so, you give the AI model more context and information to work with, and you make it harder for it to generate something generic or inaccurate.

For example, if you want to ask an AI chatbot for career advice, you could say “I am a 25-year-old software engineer with three years of experience in web development. I want to switch to data science, but I don’t have any formal education or certification in that field. What are some steps I can take to make the transition?”. This way, you are giving the AI model more details about your situation and goal, and asking for a specific and realistic solution.
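If you find yourself re-typing the same background details, it can help to keep them in one place and assemble the prompt from there. The sketch below is illustrative only; the profile fields are made up for this example and not tied to any particular tool.

    # Sketch: grounding a prompt in your own data. The profile fields are
    # invented for illustration.
    profile = {
        "age": 25,
        "current role": "software engineer with three years of web development experience",
        "goal": "switch to data science",
        "constraint": "no formal education or certification in data science",
    }

    background = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    prompt = (
        "Given this background:\n"
        f"{background}\n"
        "What are some realistic steps I can take to make the transition?"
    )
    print(prompt)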

These tips will only reduce the number of hallucinations, not eliminate them completely, so it’s wise to keep fact-checking the output regardless.