Troubleshooting AI Issues
Overview
While the Zaapi AI Agent is powerful, its performance depends entirely on the quality of its training. If you find your AI is making mistakes, giving strange answers, or not behaving as expected, it's almost always an issue that can be fixed by refining its training data.
This guide will help you diagnose and resolve the most common issues.
Your First Step: Use "Show Thinking" to Diagnose
Before you can fix a problem, you need to understand why it's happening. The most important tool for this is the "Show thinking" feature on the Test page.
When your AI gives a response, click the "Show thinking" button beneath it. This will reveal the AI's step-by-step reasoning:
It will show you if it tried to find a Scenario and whether it was successful.
It will show you which specific piece of a Knowledge Source it used to formulate the answer.
It will detail the guidelines it followed to generate the final response.
By reviewing this, you can immediately see why the AI gave a particular answer, which is the key to fixing it.
Common Problems and How to Fix Them
1. The AI Gives Incorrect or Outdated Information
The Problem: A customer asks about your return policy, and the AI gives information that is two years old.
Likely Cause: The AI is pulling from a poorly structured or outdated knowledge source. The information might be buried in a large, unfocused paragraph, making it hard for the AI to isolate the correct detail.
How to Fix It:
Use "Show thinking" to identify the exact knowledge source being used.
Review that document for clarity, structure, and accuracy. Ensure it uses clear headings and follows our recommended guidelines; see the example after these steps.
For a complete guide on structuring your documents, please see: Best Practices for AI Knowledge Sources.
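For example, if your return policy is currently buried in one long paragraph that also covers shipping and promotions, a restructured version might look like the sketch below (the policy details are placeholders for illustration, not real defaults):

Return Policy
Items can be returned within 30 days of delivery.
Refunds are issued to the original payment method within 5 to 7 business days.
Final-sale items cannot be returned.

Because each heading covers exactly one topic, the AI can isolate the specific detail a customer is asking about instead of guessing from a wall of text.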
2. The AI Doesn't Follow a Scenario Correctly
The Problem: You have a scenario for "Refund Requests," but when a customer says "I want my money back," the AI gives a general answer from the knowledge base instead of following your step-by-step instructions.
Likely Cause: The trigger description in your scenario is not broad enough to catch the customer's phrasing.
How to Fix It:
Go to AI Agent > Train > Scenario handling and edit the relevant scenario.
In the "When this scenario should trigger" field, add more variations of how a customer might ask. For example, instead of just "Customer asks for a refund," try "Customer is unhappy and wants a refund, asks for their money back, or says their order was not as expected." The more descriptive you are, the better the AI can match the intent.
3. Website Knowledge Source is Inaccurate
The Problem: You've added your website's FAQ page as a knowledge source, but the AI is pulling in irrelevant text from menus or sidebars, or the formatting is messy.
Likely Cause: Some website structures are too complex for the AI to scrape cleanly. It can get confused by navigation bars, footers, and pop-ups.
How to Fix It:
Instead of scraping the website directly, it's much more reliable to create a knowledge source manually. Copy the text from your website and paste it into a Word document (.docx) or directly into the "Write it yourself" editor. This allows you to structure it perfectly with clear headings, ensuring the AI only learns the information you want it to.
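To illustrate (the text below is invented for this example, not output from a real scrape), a scraped FAQ page often arrives with navigation and footer text mixed into the answer:

Home Shop FAQ Contact Sign in
What is your return policy?
You can return items within 30 days.
Subscribe to our newsletter Follow us on Instagram

A manually written version keeps only a clear heading and the answer itself:

Return Policy
You can return items within 30 days of delivery.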
A Note on AI vs. Flow Builder Chatbots
It's important to remember that the AI Agent is not a traditional, rule-based chatbot.
Flow Builder: Use this when you need the chatbot to follow a very specific, rigid script every single time. It's predictable and perfect for linear processes like lead qualification.
AI Agent: Use this when you want the chatbot to have natural, dynamic conversations. The goal is not to force it to say a specific script, but to give it high-quality information and let it use its intelligence to formulate the best answer.
If you find yourself trying to make the AI follow an exact word-for-word script, you may be better off using the Flow Builder for that specific task.
What Are "Hallucinations," and What Should You Do About Them?
Occasionally, an AI can "hallucinate"—meaning it provides an answer that seems to come from nowhere and is not based on its training data. This is a rare but known characteristic of Large Language Models.
If it's a one-off event: It might just be an unpredictable glitch. The best course of action is to monitor the situation. It may not happen again.
If the issue is consistent: When the AI repeatedly hallucinates or provides the same incorrect information, it points to a deeper issue in its training data. Use "Show thinking" to trace where the answer came from, then review and refine that knowledge source or scenario.
If you experience a persistent issue that you cannot resolve by refining your training data, please contact our support team for assistance.