AI knowledge bases that don't lie to your customers.

LLMs are great at sounding confident. They are not great at being right. If you're putting an AI agent in front of paying customers, this is the work that has to happen before the launch, not after the first bad screenshot ends up on Twitter.

Every agency I work with wants to "add AI." Most of them want a chatbot that can answer customer questions. Some of them want a voice agent that can book appointments. A few of the more ambitious ones want both, plus an autonomous workflow that takes whatever the AI captures and routes it intelligently.

The thing they almost never want to do is the part of the project that determines whether any of this actually works: building a real knowledge base.

I've helped enough teams launch AI agents to have an opinion on this. Here it is: the LLM is the easy part. The model is borrowed. The interface is borrowed. The hard, valuable, defensible part is the knowledge base. That's also the part that gets the least attention.

Why this matters

Large language models are extraordinary at sounding confident. They generate fluent, grammatically correct, contextually appropriate text. They do this whether or not the text is accurate.

In a support context, this is a problem. A model that confidently tells your customer the wrong refund policy, or invents a feature that doesn't exist, or quotes the wrong price, is worse than no AI at all. The customer trusts a confident-sounding answer. The team picks up the pieces later.

The fix is not "use a better model." The fix is retrieval-augmented generation: ground the model in a knowledge base of your own facts, and constrain it to use those facts. Done well, this is the difference between an agent that answers correctly 95% of the time and one that hallucinates twice an hour.
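The retrieval-then-ground loop can be sketched in a few lines. This is a toy illustration, not a production pattern: keyword overlap stands in for real embeddings, and the article text, `score`, `retrieve`, and `build_prompt` names are all mine.

```python
import re

# Minimal sketch of the retrieval half of RAG. A real deployment would
# use embeddings and a vector store; keyword overlap stands in here so
# the example stays self-contained. All article text is made up.

def score(query: str, article: str) -> int:
    """Count how many words from the query appear in the article."""
    words = set(re.findall(r"\w+", query.lower()))
    return sum(1 for w in words if w in article.lower())

def retrieve(query: str, articles: dict[str, str], k: int = 2) -> list[str]:
    """Return the bodies of the k articles most relevant to the query."""
    ranked = sorted(articles, key=lambda t: score(query, articles[t]), reverse=True)
    return [articles[t] for t in ranked[:k]]

def build_prompt(query: str, articles: dict[str, str]) -> str:
    """Grounding: the model only ever sees facts retrieved from the KB."""
    context = "\n\n".join(retrieve(query, articles))
    return (
        "Answer using ONLY the facts below. If they do not cover the "
        "question, say so and offer to connect the customer with a human.\n\n"
        f"FACTS:\n{context}\n\nQUESTION: {query}"
    )

kb = {
    "How do I cancel my subscription?":
        "You can cancel any time from Settings > Billing. "
        "Access continues until the end of the paid period.",
    "What is the refund window?":
        "Refunds are available within 14 days of purchase.",
}
print(build_prompt("Can I get a refund?", kb))
```

The point of the sketch is the constraint in `build_prompt`: the model never sees anything except what retrieval pulled from your own articles.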

What a real knowledge base looks like

"We pasted our help docs into a vector store" is not a knowledge base. It's a search index. There's a difference.

A real knowledge base, in this context, has these properties:

  1. Atomic articles. One concept per article. Not "Getting Started" with twelve subsections — twelve focused articles, each answering one question.
  2. Question-shaped titles. Articles named the way customers actually ask questions, not the way internal teams categorize features. "How do I cancel my subscription?" not "Subscription Lifecycle Management."
  3. Explicit out-of-scope sections. When an article covers a topic, it also lists what it doesn't cover, with pointers to other articles or to "talk to a human." This is the single highest-leverage move for cutting hallucinations.
  4. Last-updated dates that mean something. Stale content in the KB becomes stale answers from the AI. If you don't trust an article enough to date it, the model shouldn't be using it.
  5. Tone guidance baked in. Each article should be written in the voice the agent should reply in. The model echoes what it's grounded in.

That's the shape. Most help docs in the wild fail at least three of these, which is why most AI knowledge bases stitched together from existing help docs perform worse than expected.
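The five properties above can be made concrete as a small schema. The field names and the one-year staleness cutoff are my own conventions, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Article:
    """One atomic article: one question, one answer, explicit edges."""
    title: str                 # question-shaped, the way customers ask it
    body: str                  # written in the voice the agent should reply in
    out_of_scope: list[str] = field(default_factory=list)  # what this does NOT cover
    last_updated: date = field(default_factory=date.today)

    def is_stale(self, max_age_days: int = 365) -> bool:
        """Stale KB content becomes stale answers from the agent."""
        return date.today() - self.last_updated > timedelta(days=max_age_days)

cancel = Article(
    title="How do I cancel my subscription?",
    body="Go to Settings > Billing and choose Cancel. "
         "You keep access until the end of the paid period.",
    out_of_scope=[
        "Refunds: see 'What is the refund window?'",
        "Enterprise contracts: talk to a human",
    ],
    last_updated=date(2022, 3, 1),
)
print(cancel.is_stale())  # an article last touched in 2022 fails the freshness check
```

Making `out_of_scope` a required part of the shape is what forces authors to write the single highest-leverage section.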

The pre-flight checklist

Before letting a model talk to a customer, I run through this list with the team:

  • Is every article on a single topic, or are some doing double duty?
  • Are there contradictions between articles? (Two different prices, two different refund windows, two different feature descriptions.)
  • Are there topics the agent should never answer? Legal advice, medical advice, anything tied to compliance — these need explicit guardrails, not "the KB doesn't have anything on this."
  • Do we have an "I don't know — here's how to reach a human" path that the agent can fall back to?
  • Are dates on articles current? (If the most recent article was edited in 2022, the answers will reflect 2022.)
  • Have we written what the agent should NOT say? (Outdated pricing, deprecated features, anything still in beta that hasn't shipped.)

If the answer to any of these is "no" or "we'll figure that out after launch," the launch should slip.
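Some of the checklist is partly automatable. A toy lint for the contradiction check, where the regex, the term matching, and the example prices are all illustrative:

```python
import re

def conflicting_values(articles: dict[str, str], term: str, pattern: str) -> set[str]:
    """Collect every value matching `pattern` across articles that mention
    `term`. More than one distinct value means two articles disagree."""
    values: set[str] = set()
    for body in articles.values():
        if term in body.lower():
            values.update(re.findall(pattern, body))
    return values

kb = {
    "What does the Pro plan cost?":
        "The Pro plan costs $29 per month.",
    "Can I upgrade mid-cycle?":
        "Yes. Upgrading to the Pro plan ($39/month) is prorated.",
}

prices = conflicting_values(kb, "pro plan", r"\$\d+")
if len(prices) > 1:
    print(f"Contradiction: Pro plan priced as {sorted(prices)}")
```

A lint like this won't catch subtle contradictions, but it catches the embarrassing ones (two prices, two refund windows) before a customer does.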

The prompt is the smaller half of the work

I see teams pour energy into prompt engineering and almost none into knowledge base curation. That ratio is backwards. The prompt sets the agent's tone and personality. The knowledge base determines whether anything it says is true.

A good prompt for a support agent is short. It tells the model who it is, what it's allowed to do, and what to do when it doesn't know. Something like:

You are a support agent for [Company]. You only answer
questions that are clearly addressed in the provided
knowledge base.

If a customer asks about something not covered, say so
plainly and offer to connect them with a human teammate.

Do not invent features, prices, or policies. If you are
not sure, say you are not sure.

Reply in a warm, concise tone. Avoid filler.

That's most of it. The hard work — the work that makes the agent worth deploying — happens in the knowledge base.
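The "say you are not sure" instruction deserves a belt-and-braces backstop in code as well as in the prompt. A sketch, where `call_model` is a hypothetical stand-in for a real LLM call and the 0.35 relevance cutoff is a placeholder:

```python
HANDOFF = "I'm not sure about that one. Let me connect you with a teammate."

def call_model(query: str, facts: list[str]) -> str:
    # Stand-in so the sketch runs; a real system would call an LLM API here.
    return f"Based on our docs: {facts[0]}"

def answer(query: str, retrieved: list[tuple[str, float]],
           min_score: float = 0.35) -> str:
    """If no retrieved article clears the relevance threshold, skip the
    model entirely and hand off to a human. The threshold value and
    `call_model` are illustrative assumptions."""
    grounded = [text for text, s in retrieved if s >= min_score]
    if not grounded:
        return HANDOFF
    return call_model(query, grounded)

print(answer("Do you integrate with Salesforce?", [("Unrelated article", 0.10)]))
```

The design choice matters: when retrieval comes back empty, the model is never consulted, so there is nothing for it to hallucinate with.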

The launch posture

When I launch an AI agent with a client, the posture is always the same:

  1. Soft launch behind a "Beta" label. Customers know they're talking to AI. The "talk to a human" button is one tap away. Always.
  2. Every conversation gets reviewed for the first 2–3 weeks. By a human. Looking for: wrong answers, confidently-said-but-uncertain answers, topics the agent shouldn't have engaged with, and topics customers wanted that the KB didn't cover.
  3. The KB grows. Every review session generates a small list of new articles or edits. The list goes to the person who owns the KB, not to "someone."
  4. Confidence comes from data, not intuition. Until you have hundreds of reviewed conversations showing the agent is reliable, treat it as in-training. After that, you can quietly remove the "Beta" label and the safety net can shrink.
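"Confidence comes from data" can be as mundane as a tally over the review log. The category labels and both thresholds below are illustrative, not a standard:

```python
from collections import Counter

def ready_to_drop_beta(reviews: list[str],
                       min_reviewed: int = 300,
                       max_error_rate: float = 0.02) -> bool:
    """Exit beta only when enough conversations have been reviewed by a
    human AND the rate of bad outcomes is low. Category names and both
    thresholds are illustrative."""
    if len(reviews) < min_reviewed:
        return False  # not enough data yet; intuition doesn't count
    counts = Counter(reviews)
    bad = counts["wrong"] + counts["overconfident"] + counts["out_of_bounds"]
    return bad / len(reviews) <= max_error_rate

log = ["ok"] * 295 + ["wrong"] * 3 + ["overconfident"] * 2
print(ready_to_drop_beta(log))
```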

What I tell every client

The model is the cheapest part of this project. The model is rented. The defensible thing — the thing that will actually make your agent better than your competitor's — is the curated knowledge base nobody else has. Invest there.

The honest summary

"Add AI" is not a project. "Curate a knowledge base good enough to be the spine of an AI support agent, then bolt the agent on" is a project. The first one fails publicly. The second one usually works.

If you're sitting on a SaaS product and you're thinking about putting an AI agent in front of customers, do the unsexy thing first. Audit your KB. Atomize the articles. Write the out-of-scope sections. Then bring in the agent. You'll save yourself a lot of apologetic emails.


If you're partway through an AI support rollout and the customer-facing answers aren't holding up, I've done enough of these to spot the usual failure modes quickly. Get in touch if you want a second pair of eyes.