Knowledge Q&A System
Build a Q&A system that combines RAG with memory for intelligent document querying.
What We're Building
A knowledge Q&A system that lets users upload documents, search through them using hybrid search, and get AI-powered answers with conversation memory for follow-up questions.
Features
- File Upload: Upload PDFs, text files, and documents to your knowledge base
- Hybrid Search: Combines semantic and keyword search for best results
- Conversation Memory: Remember context for follow-up questions
- Source Citations: See which documents were used to answer each question
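Both API routes below authenticate with two keys read from the environment: an OpenRouter key and a Super Agent Stack key. A minimal .env.local sketch for a Next.js project (the variable names match the code in Steps 1 and 2; the values are placeholders):
.env.local
OPENROUTER_KEY=your-openrouter-api-key
SUPER_AGENT_KEY=your-super-agent-key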
Step 1: File Upload API
app/api/upload/route.ts
import { NextRequest, NextResponse } from 'next/server';
export async function POST(req: NextRequest) {
const formData = await req.formData();
const file = formData.get('file') as File;
if (!file) {
return NextResponse.json({ error: 'No file provided' }, { status: 400 });
}
// Upload to Super Agent Stack RAG
const uploadForm = new FormData();
uploadForm.append('file', file);
const response = await fetch('https://www.superagentstack.com/api/rag/upload', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.OPENROUTER_KEY}`,
'superAgentKey': process.env.SUPER_AGENT_KEY!,
},
body: uploadForm,
});
const result = await response.json();
return NextResponse.json(result);
}
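The handler above forwards whatever the RAG endpoint returns, even on failure. If you want the client to see upstream errors with their original status code, a hedged variant of the final lines of app/api/upload/route.ts might look like this (standard fetch and NextResponse calls only):
// Optional: surface upstream failures instead of passing them through silently.
if (!response.ok) {
  const detail = await response.text();
  return NextResponse.json(
    { error: 'Upload failed', detail },
    { status: response.status }
  );
}
const result = await response.json();
return NextResponse.json(result);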
Step 2: Query API with Hybrid Search
app/api/query/route.ts
import OpenAI from 'openai';
import { NextRequest, NextResponse } from 'next/server';
const client = new OpenAI({
baseURL: 'https://www.superagentstack.com/api/v1',
apiKey: process.env.OPENROUTER_KEY!,
defaultHeaders: { 'superAgentKey': process.env.SUPER_AGENT_KEY! },
});
export async function POST(req: NextRequest) {
const { question, sessionId } = await req.json();
const response = await client.chat.completions.create({
model: 'openai/gpt-4o-mini',
messages: [
{
role: 'system',
content: `You are a helpful assistant that answers questions based on the provided documents.
Always cite your sources when answering. If you don't know the answer, say so.`
},
{ role: 'user', content: question }
],
// RAG Configuration
useRAG: true,
searchMode: 'hybrid', // Combine semantic + keyword search
keywordWeight: 0.3, // 30% keyword, 70% semantic
ragTopK: 5, // Return top 5 relevant chunks
// Memory Configuration
sessionId: sessionId, // Enable conversation memory
saveToMemory: true, // Save Q&A to memory
});
return NextResponse.json({
answer: response.choices[0].message.content,
// Sources are included in the response metadata
});
}
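The response object above is also where source citations come back. The exact field name depends on the Super Agent Stack response payload, so treat the names below as assumptions and inspect a raw response from your deployment before relying on them. A sketch of forwarding citations alongside the answer (excerpt replacing the return statement in app/api/query/route.ts):
// The RAG metadata location is an assumption; log the raw response to confirm it.
const raw = response as unknown as Record<string, any>;
return NextResponse.json({
  answer: response.choices[0].message.content,
  sources: raw.sources ?? raw.rag?.sources ?? [], // hypothetical field names
});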
Hybrid Search Benefits
Using searchMode: 'hybrid' ensures you find documents whether users ask "What's the refund policy?" (semantic) or search for "Section 4.2" (keyword).
Step 3: Frontend Component
components/KnowledgeQA.tsx
'use client';
import { useState, useRef } from 'react';
interface Message {
role: 'user' | 'assistant';
content: string;
}
export function KnowledgeQA() {
const [messages, setMessages] = useState<Message[]>([]);
const [input, setInput] = useState('');
const [loading, setLoading] = useState(false);
const [uploading, setUploading] = useState(false);
const [files, setFiles] = useState<string[]>([]);
const fileInputRef = useRef<HTMLInputElement>(null);
// Persistent session for follow-up questions
const [sessionId] = useState(() => crypto.randomUUID());
const uploadFile = async (file: File) => {
setUploading(true);
try {
const formData = new FormData();
formData.append('file', file);
const res = await fetch('/api/upload', {
method: 'POST',
body: formData,
});
const data = await res.json();
if (data.success) {
setFiles(prev => [...prev, file.name]);
}
} catch (error) {
console.error('Upload error:', error);
} finally {
setUploading(false);
}
};
const askQuestion = async () => {
if (!input.trim() || loading) return;
const question = input;
setInput('');
setMessages(prev => [...prev, { role: 'user', content: question }]);
setLoading(true);
try {
const res = await fetch('/api/query', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ question, sessionId }),
});
const data = await res.json();
setMessages(prev => [...prev, { role: 'assistant', content: data.answer }]);
} catch (error) {
console.error('Query error:', error);
} finally {
setLoading(false);
}
};
return (
<div className="flex flex-col h-[600px] border rounded-lg">
{/* File Upload Section */}
<div className="border-b p-4 bg-gray-50">
<div className="flex items-center gap-4">
<input
ref={fileInputRef}
type="file"
accept=".pdf,.txt,.md,.doc,.docx"
onChange={(e) => e.target.files?.[0] && uploadFile(e.target.files[0])}
className="hidden"
/>
<button
onClick={() => fileInputRef.current?.click()}
disabled={uploading}
className="px-4 py-2 bg-gray-200 rounded hover:bg-gray-300"
>
{uploading ? 'Uploading...' : '📁 Upload Document'}
</button>
<div className="flex gap-2 flex-wrap">
{files.map((f, i) => (
<span key={i} className="px-2 py-1 bg-green-100 text-green-800 rounded text-sm">
✓ {f}
</span>
))}
</div>
</div>
</div>
{/* Messages */}
<div className="flex-1 overflow-y-auto p-4 space-y-4">
{messages.length === 0 && (
<div className="text-center text-gray-500 mt-8">
<p className="text-lg">📚 Upload documents and ask questions!</p>
<p className="text-sm mt-2">Try: "What are the main topics covered?"</p>
</div>
)}
{messages.map((msg, i) => (
<div key={i} className={`flex ${msg.role === 'user' ? 'justify-end' : 'justify-start'}`}>
<div className={`max-w-[80%] p-3 rounded-lg ${
msg.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-100'
}`}>
<pre className="whitespace-pre-wrap font-sans">{msg.content}</pre>
</div>
</div>
))}
{loading && <div className="text-gray-500">🔍 Searching documents...</div>}
</div>
{/* Input */}
<div className="border-t p-4 flex gap-2">
<input
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && askQuestion()}
placeholder="Ask a question about your documents..."
className="flex-1 border rounded px-3 py-2"
/>
<button
onClick={askQuestion}
disabled={loading}
className="px-4 py-2 bg-blue-500 text-white rounded"
>
Ask
</button>
</div>
</div>
);
}
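To try the component, render it from any page in the App Router. The file below is just a conventional spot; the @/components import path assumes the default Next.js path alias.
app/page.tsx
import { KnowledgeQA } from '@/components/KnowledgeQA';

export default function Home() {
  return (
    <main className="max-w-3xl mx-auto p-8">
      <h1 className="text-2xl font-bold mb-4">Knowledge Q&A</h1>
      <KnowledgeQA />
    </main>
  );
}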
Search Mode Comparison
| Search Mode | Example Query | Result |
|---|---|---|
| Vector Search | "What's the return policy?" | ✓ Finds "refund procedures" section |
| Keyword Search | "Section 4.2.1" | ✓ Finds exact section reference |
| Hybrid Search ⭐ | Best of both worlds | ✓ Handles any query type |
Follow-up Questions with Memory
Because we use sessionId and saveToMemory, users can ask follow-up questions, as in the transcript below (a request-level sketch follows it):
User: What's the refund policy?
AI: According to Section 3.2, customers can request a refund within 30 days...
User: What about after 30 days?
AI: After the 30-day window, as mentioned in Section 3.3, customers can...
User: Can you summarize the key points?
AI: Based on our discussion, here are the key refund policy points:
1. 30-day full refund window
2. After 30 days, store credit only
3. ...
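At the API level, a follow-up is simply another call to /api/query that reuses the same sessionId, which is what lets the proxy pull the earlier turns back out of memory. A small sketch under that assumption (the ask helper and file name are hypothetical; the route is the one from Step 2):
follow-up-example.ts
// Hypothetical helper: ask related questions inside one memory session.
const sessionId = crypto.randomUUID();

async function ask(question: string): Promise<string> {
  const res = await fetch('/api/query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question, sessionId }), // same sessionId every call
  });
  const data = await res.json();
  return data.answer;
}

// Because Step 2 sets saveToMemory: true, the second question can resolve
// "after 30 days" against the first answer.
console.log(await ask("What's the refund policy?"));
console.log(await ask('What about after 30 days?'));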
Advanced: Custom Search Weights
dynamic-search.ts
// Adjust search mode based on query type
function getSearchConfig(query: string) {
// Technical queries benefit from keyword search
const isTechnical = /section|chapter|article|\d+\.\d+/i.test(query);
// Questions benefit from semantic search
const isQuestion = /^(what|how|why|when|where|who|can|does|is)/i.test(query);
if (isTechnical) {
return { searchMode: 'hybrid', keywordWeight: 0.6 };
} else if (isQuestion) {
return { searchMode: 'hybrid', keywordWeight: 0.2 };
}
return { searchMode: 'hybrid', keywordWeight: 0.3 };
}
// Usage
const config = getSearchConfig(userQuestion);
const response = await client.chat.completions.create({
model: 'openai/gpt-4o-mini',
messages: [{ role: 'user', content: userQuestion }],
useRAG: true,
...config,
});
Pro Tip
For large document collections, consider using ragTopK: 10 to retrieve more context, but be mindful of token limits. The AI will synthesize the most relevant information.
Supported File Types
| Format | Extensions | Max Size |
|---|---|---|
| PDF Documents | .pdf | 10 MB |
| Text Files | .txt, .md | 5 MB |
| Word Documents | .doc, .docx | 10 MB |
| JSON Data | .json | 5 MB |
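If you want to reject unsupported uploads before they reach the RAG endpoint, the limits in this table translate directly into a small guard. A sketch (the validateFile helper and its location are hypothetical, not part of the Super Agent Stack API):
lib/validate-file.ts
// Hypothetical guard derived from the supported-file-types table above.
const LIMITS: Record<string, number> = {
  '.pdf': 10 * 1024 * 1024,
  '.txt': 5 * 1024 * 1024,
  '.md': 5 * 1024 * 1024,
  '.doc': 10 * 1024 * 1024,
  '.docx': 10 * 1024 * 1024,
  '.json': 5 * 1024 * 1024,
};

export function validateFile(file: File): string | null {
  const dot = file.name.lastIndexOf('.');
  const ext = dot === -1 ? '' : file.name.slice(dot).toLowerCase();
  const maxSize = LIMITS[ext];
  if (maxSize === undefined) return `Unsupported file type: ${ext || file.name}`;
  if (file.size > maxSize) {
    return `File exceeds the ${maxSize / (1024 * 1024)} MB limit for ${ext}`;
  }
  return null; // null means the file passed both checks
}
The upload route from Step 1 could call validateFile(file) right after its missing-file check and return a 400 with the message whenever the result is non-null.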