
# Building a Local RAG Chatbot with ZSE

ZSE Team · February 21, 2026 · 10 min read


Build a chatbot that can answer questions about your documents using ZSE's built-in RAG features.

## What We're Building

A chatbot that:

1. Indexes your PDF/text documents

2. Retrieves relevant context for questions

3. Generates accurate answers using an LLM

## Step 1: Prepare Your Model

```bash
zse convert Qwen/Qwen2.5-7B-Instruct -o qwen-7b.zse
```

## Step 2: Index Documents

```python
from zllm_zse import ZSE, RAGIndex

# Load model
model = ZSE("qwen-7b.zse")

# Create index
index = RAGIndex(embedding_model="sentence-transformers/all-MiniLM-L6-v2")

# Add documents
index.add_documents([
    "docs/manual.pdf",
    "docs/faq.txt",
    "docs/api-reference.md",
])

# Save index
index.save("my_knowledge_base")
```
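Conceptually, the index's search is a nearest-neighbor lookup over embedding vectors: every chunk is embedded once at index time, and a question is answered by ranking chunks by similarity to the question's embedding. A minimal cosine-similarity sketch of that idea (pure NumPy, independent of ZSE's actual internals; the 2-D vectors here are toy stand-ins for real embeddings):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, top_k=3):
    """Return the top_k docs ranked by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                        # cosine similarity per document
    order = np.argsort(scores)[::-1][:top_k]
    return [docs[i] for i in order]

docs = ["password reset guide", "billing FAQ", "API reference"]
doc_vecs = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, 0.5]])
query_vec = np.array([1.0, 0.0])          # pretend embedding of the question
print(retrieve(query_vec, doc_vecs, docs, top_k=2))
```

A real index swaps the brute-force ranking for an approximate nearest-neighbor structure, but the scoring is the same idea.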

## Step 3: Query with Context

```python
# Load index
index = RAGIndex.load("my_knowledge_base")

# Ask a question
question = "How do I reset my password?"
chunks = index.search(question, top_k=3)
context = "\n\n".join(str(c) for c in chunks)  # flatten retrieved chunks into one string

# Generate answer with context
response = model.chat([
    {"role": "system", "content": f"Answer based on this context:\n{context}"},
    {"role": "user", "content": question},
])
print(response)
```

## Step 4: Run as API Server

```bash
zse serve qwen-7b.zse --rag-index my_knowledge_base --port 8000
```

Now your API automatically retrieves context for each query!
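Assuming the server speaks the common OpenAI-style chat-completions protocol (an assumption, not a documented ZSE contract; the actual route and payload shape may differ), a client request could be built like this:

```python
import json

def build_chat_payload(question: str) -> dict:
    """Build a chat-completions style payload for the RAG server.

    The shape follows the widespread OpenAI-compatible convention;
    ZSE's real request format may differ.
    """
    return {"messages": [{"role": "user", "content": question}]}

payload = build_chat_payload("How do I reset my password?")
print(json.dumps(payload))
# Then POST it to the running server, e.g.:
#   requests.post("http://localhost:8000/v1/chat/completions", json=payload)
```

Because retrieval happens server-side, the client sends only the bare question; the server injects the retrieved context before generation.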

## Tips for Better RAG

1. **Chunk size matters** - Try 512-1024 tokens per chunk

2. **Use hybrid search** - Combine semantic + keyword search

3. **Add metadata** - Filter by document type/date

4. **Tune retrieval** - More context isn't always better (3-5 chunks)
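The first tip can be made concrete with a simple overlapping-window splitter. This sketch uses whitespace words as a cheap stand-in for tokenizer tokens; ZSE's real chunker (and its parameter names) are not shown here:

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping chunks of roughly chunk_size tokens.

    Uses whitespace words as a token proxy; a real pipeline would count
    tokens with the embedding model's tokenizer.
    """
    words = text.split()
    step = max(chunk_size - overlap, 1)
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + chunk_size]
        if window:
            chunks.append(" ".join(window))
    return chunks

chunks = chunk_text(" ".join(["word"] * 1200), chunk_size=512, overlap=64)
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from both neighboring chunks.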

Your documents stay local - nothing leaves your machine.
