Building Chatbots & Virtual Assistants
Create intelligent chatbots that can answer questions about your data
Building Chatbots with Infactory
Chatbots and virtual assistants powered by Infactory can answer specific questions about your data accurately and efficiently. This guide explains how to build chatbots that leverage Infactory’s data intelligence capabilities.
Why Infactory for Chatbots?
Traditional AI-powered chatbots face several challenges when answering data-specific questions:
- Hallucination: LLMs can make up false information when they don't know the answer.
- Consistency: Answers to the same question may vary between sessions.
- Performance: Each query requires full LLM processing, which can be slow.
- Complexity: Reliable answers require complex prompt engineering and fine-tuning.
Infactory solves these challenges by:
- Executing queries directly against your data, ensuring accurate answers
- Providing consistent results for the same questions
- Processing queries at database speed, not AI inference speed
- Simplifying development with an easy-to-use API
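For illustration, here is a minimal sketch of the single backend call involved, using the same unified query endpoint shape as the backend examples later in this guide (Node 18+ is assumed for the global fetch):

```javascript
// Minimal sketch: ask Infactory a natural language question and get
// structured data back. Endpoint shape mirrors the examples below.
async function askInfactory(question) {
  const response = await fetch('https://api.infactory.ai/v1/query', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.INFACTORY_API_KEY}`
    },
    body: JSON.stringify({
      query: question,
      project_id: process.env.INFACTORY_PROJECT_ID
    })
  });
  if (!response.ok) throw new Error(`Infactory request failed: ${response.status}`);
  // The response contains structured rows (data) plus metadata about
  // which saved query was executed (query_used, parameters)
  return response.json();
}
```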
Architecture Overview
1. User asks a question: The user enters a natural language question in your chatbot interface.
2. Question sent to your backend: Your application sends the question to your backend server.
3. Backend calls Infactory API: Your backend server forwards the question to Infactory's unified API endpoint.
4. Infactory processes the question: Infactory selects the appropriate query, extracts parameters, and executes it against your data.
5. Structured data returned: Infactory returns a structured data response to your backend.
6. Format and display the answer: Your application formats the structured data into a natural language response and displays it to the user.
Implementation Options
Simple Question-Answer Bot
The simplest implementation directly maps user questions to Infactory responses:
```jsx
import { useState } from 'react';

function ChatBot() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  async function handleSubmit(e) {
    e.preventDefault();
    if (!input.trim()) return;

    // Add user message
    const userMessage = { role: 'user', content: input };
    setMessages(prev => [...prev, userMessage]);
    setInput('');
    setIsLoading(true);

    try {
      // Send question to backend
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ question: userMessage.content })
      });
      const data = await response.json();

      // Format the response
      let botResponse;
      if (data.error) {
        botResponse = { role: 'assistant', content: `I'm sorry, I couldn't find an answer. ${data.error}` };
      } else {
        // Create a natural language response from the structured data
        botResponse = {
          role: 'assistant',
          content: formatResponse(data),
          data: data.data // Store the raw data for display if needed
        };
      }
      setMessages(prev => [...prev, botResponse]);
    } catch (error) {
      setMessages(prev => [
        ...prev,
        { role: 'assistant', content: "I'm sorry, I encountered an error. Please try again." }
      ]);
    } finally {
      setIsLoading(false);
    }
  }

  // Format structured data into a natural language response
  function formatResponse(data) {
    if (!data.data || data.data.length === 0) {
      return "I couldn't find any information about that.";
    }

    // Example formatting for an average-by-category query
    if (data.query_used === 'average_by_category') {
      return data.data.map(item =>
        `The average ${data.parameters.metric} for ${item[data.parameters.category]} is ${item.average.toFixed(2)}`
      ).join('. ');
    }

    // Default response if we don't have specific formatting for this query type
    return `Here's what I found: ${JSON.stringify(data.data)}`;
  }

  return (
    <div className="chatbot">
      <div className="messages">
        {messages.map((msg, index) => (
          <div key={index} className={`message ${msg.role}`}>
            <div className="content">{msg.content}</div>
            {msg.data && (
              <button
                onClick={() => console.log(msg.data)}
                className="view-data-btn"
              >
                View Data
              </button>
            )}
          </div>
        ))}
        {isLoading && (
          <div className="message assistant loading">
            <div className="typing-indicator">
              <span></span><span></span><span></span>
            </div>
          </div>
        )}
      </div>
      <form onSubmit={handleSubmit} className="input-form">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Ask a question about your data..."
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          Send
        </button>
      </form>
    </div>
  );
}
```
A matching backend route forwards the question to Infactory:

```javascript
const express = require('express');
const axios = require('axios');

const router = express.Router();

router.post('/api/chat', async (req, res) => {
  try {
    const { question } = req.body;

    // Call Infactory API
    const response = await axios.post(
      'https://api.infactory.ai/v1/query',
      {
        query: question,
        project_id: process.env.INFACTORY_PROJECT_ID,
      },
      {
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${process.env.INFACTORY_API_KEY}`
        }
      }
    );

    res.json(response.data);
  } catch (error) {
    console.error('Error querying Infactory:', error.response?.data || error.message);

    let errorMessage = 'Failed to query the database';
    // Handle specific error types
    if (error.response?.data?.error === 'no_matching_query') {
      errorMessage = "I don't know how to answer that question yet.";
    }

    res.status(500).json({ error: errorMessage });
  }
});

module.exports = router;
```
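With the router mounted in your Express app, the frontend's fetch call above exercises it directly. For a quick manual test (a sketch assuming the server runs locally on port 3000 and Node 18+ for the global fetch):

```javascript
// Quick manual test of the /api/chat route
async function testChat() {
  const res = await fetch('http://localhost:3000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question: 'What is the average price by category?' })
  });
  console.log(await res.json()); // Structured data or an { error } payload
}

testChat();
```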
Hybrid LLM + Infactory Chatbot
For a more conversational experience, you can combine Infactory with an LLM.
In this approach:
- The LLM handles conversation flow and identifies when data questions are asked
- Infactory answers specific data questions
- The LLM formats the responses naturally
The backend route below implements this routing:

```javascript
const express = require('express');
const axios = require('axios');
const { OpenAI } = require('openai');

const router = express.Router();
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

router.post('/api/chat', async (req, res) => {
  try {
    const { question, history } = req.body;

    // Step 1: Have the LLM determine if this is a data question
    const analysisPrompt = `
You are an assistant that helps determine if a question requires database querying.
User question: "${question}"
Is this a question that requires querying a database to answer correctly?
Respond with YES if it's asking for specific data, statistics, or information that would be stored in a database.
Respond with NO if it's a general question, chitchat, or something that doesn't require looking up specific data.
Just respond with YES or NO.`;

    const analysisResponse = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: analysisPrompt }],
      temperature: 0,
    });
    const isDataQuestion = analysisResponse.choices[0].message.content.trim().startsWith('YES');

    let responseData;
    if (isDataQuestion) {
      // Step 2: If it's a data question, query Infactory
      const infactoryResponse = await axios.post(
        'https://api.infactory.ai/v1/query',
        {
          query: question,
          project_id: process.env.INFACTORY_PROJECT_ID,
        },
        {
          headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${process.env.INFACTORY_API_KEY}`
          }
        }
      );

      // Step 3: Have the LLM format the response
      const formattingPrompt = `
You are a helpful assistant. Format the following data into a natural, conversational response.
User question: "${question}"
Data: ${JSON.stringify(infactoryResponse.data)}
Provide a conversational, easy-to-understand answer based strictly on this data.`;

      const formattingResponse = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: formattingPrompt }],
        temperature: 0.7,
      });

      responseData = {
        content: formattingResponse.choices[0].message.content,
        data: infactoryResponse.data.data,
        source: 'infactory'
      };
    } else {
      // Step 4: If it's not a data question, let the LLM handle it
      const chatMessages = [
        { role: "system", content: "You are a helpful assistant that works with a database system. Data-specific questions are handled by another system. Focus on being helpful for general questions, clarifications, and conversation." },
        // Guard against a missing history so the route doesn't crash
        ...(history || []).map(msg => ({ role: msg.role, content: msg.content })),
        { role: "user", content: question }
      ];

      const chatResponse = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: chatMessages,
        temperature: 0.7,
      });

      responseData = {
        content: chatResponse.choices[0].message.content,
        source: 'llm'
      };
    }

    res.json(responseData);
  } catch (error) {
    console.error('Chatbot error:', error);
    res.status(500).json({ error: 'Failed to process your request' });
  }
});

module.exports = router;
```
Enhancing Your Chatbot
Support for Follow-up Questions
To handle follow-up questions, maintain conversation context and reference previous responses:
```javascript
// Example follow-up handler: detects "what about X" questions and reuses
// the previous query with updated parameters
function handleFollowup(question, history) {
  const lastMessage = history[history.length - 2]; // Get the last assistant message
  if (
    lastMessage &&
    lastMessage.queryInfo &&
    question.toLowerCase().includes('what about')
  ) {
    // This looks like a follow-up question
    if (lastMessage.queryInfo.query_used === 'average_by_category') {
      // Extract what they're asking about
      const match = question.match(/what about (the )?([a-z0-9 ]+)/i);
      if (match) {
        const newCategory = match[2].trim();
        // Reuse the last query with an updated category parameter
        return {
          isFollowup: true,
          queryName: lastMessage.queryInfo.query_used,
          parameters: {
            ...lastMessage.queryInfo.parameters,
            category: newCategory
          }
        };
      }
    }
  }
  return { isFollowup: false };
}
```
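A sketch of how this handler might be wired into request handling. The /api/run-query route is hypothetical and stands in for however your backend re-executes a known query with new parameters:

```javascript
// Route follow-ups to direct query re-execution, everything else to the
// normal natural-language endpoint. /api/run-query is a hypothetical
// route name, not part of the examples above.
async function answer(question, history) {
  const followup = handleFollowup(question, history);
  const endpoint = followup.isFollowup ? '/api/run-query' : '/api/chat';
  const body = followup.isFollowup
    ? { queryName: followup.queryName, parameters: followup.parameters }
    : { question };
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  });
  return res.json();
}
```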
Adding Visualizations
Enhance your chatbot with visual representations of data:
```jsx
import { useState } from 'react';
import { Bar, Line, Pie } from 'react-chartjs-2';
import { Chart as ChartJS, CategoryScale, LinearScale, BarElement, PointElement, LineElement, ArcElement, Title, Tooltip, Legend } from 'chart.js';

// Register Chart.js components
ChartJS.register(CategoryScale, LinearScale, BarElement, PointElement, LineElement, ArcElement, Title, Tooltip, Legend);

function ChatBotMessage({ message }) {
  const [showChart, setShowChart] = useState(false);

  // Only process data visualizations for assistant messages with data
  if (message.role !== 'assistant' || !message.data) {
    return (
      <div className={`message ${message.role}`}>
        <div className="content">{message.content}</div>
      </div>
    );
  }

  // Determine what type of chart to show based on the data
  const getChartType = () => {
    if (!message.data || !Array.isArray(message.data)) return null;

    // Example: for data that has categories and a single metric, use a bar chart
    if (message.data.length > 1 && message.data.length <= 10) {
      const firstItem = message.data[0];
      const keys = Object.keys(firstItem);
      if (keys.length === 2 && typeof firstItem[keys[1]] === 'number') {
        // Likely a category and a value - good for a bar chart
        return 'bar';
      } else if (keys.some(k => k.includes('date') || k.includes('time'))) {
        // Time series data - good for a line chart
        return 'line';
      } else if (message.data.length <= 6) {
        // Small number of categories - could be good for a pie chart
        return 'pie';
      }
    }
    return 'bar'; // Default to bar chart
  };

  const chartType = getChartType();

  // Prepare chart data based on the data structure
  const prepareChartData = () => {
    if (!message.data || !Array.isArray(message.data) || message.data.length === 0) return null;

    const firstItem = message.data[0];
    const keys = Object.keys(firstItem);

    // For a simple two-column data structure (category and value)
    if (keys.length === 2 && typeof firstItem[keys[1]] === 'number') {
      const categoryKey = keys[0];
      const valueKey = keys[1];
      return {
        labels: message.data.map(item => item[categoryKey]),
        datasets: [
          {
            label: valueKey.charAt(0).toUpperCase() + valueKey.slice(1),
            data: message.data.map(item => item[valueKey]),
            backgroundColor: 'rgba(53, 162, 235, 0.5)',
            borderColor: 'rgba(53, 162, 235, 1)',
            borderWidth: 1
          }
        ]
      };
    }

    // Add more data preparation logic for other data structures
    return null;
  };

  const chartData = prepareChartData();

  // Chart options
  const chartOptions = {
    responsive: true,
    plugins: {
      legend: {
        position: 'top',
      },
      title: {
        display: true,
        text: 'Data Visualization',
      },
    },
  };

  // Render the appropriate chart component based on chartType
  const renderChart = () => {
    if (!chartData) return null;
    switch (chartType) {
      case 'bar':
        return <Bar data={chartData} options={chartOptions} />;
      case 'line':
        return <Line data={chartData} options={chartOptions} />;
      case 'pie':
        return <Pie data={chartData} options={chartOptions} />;
      default:
        return null;
    }
  };

  return (
    <div className={`message ${message.role}`}>
      <div className="content">{message.content}</div>
      {chartType && chartData && (
        <div className="message-actions">
          <button onClick={() => setShowChart(!showChart)}>
            {showChart ? 'Hide Visualization' : 'Show Visualization'}
          </button>
          {showChart && (
            <div className="chart-container">
              {renderChart()}
            </div>
          )}
        </div>
      )}
      <div className="message-footer">
        <small>Source: Infactory data query</small>
      </div>
    </div>
  );
}
```
Multi-turn Context Awareness
To handle complex conversations with multiple turns:
```javascript
// Conversation state manager.
// User messages are plain strings; assistant messages are objects that may
// carry queryInfo ({ query_used, parameters }) and data from Infactory.
class ConversationState {
  constructor() {
    this.topics = new Map();  // Track topics mentioned in the conversation
    this.lastQuery = null;    // Last executed query
    this.lastResponse = null; // Last response data
    this.context = {};        // Additional context, like filters applied
  }

  updateFromMessage(message, isUser) {
    if (isUser) {
      // Process the user message to extract topics, entities, etc.
      this.extractTopics(message);
    } else {
      // Update based on the system response
      if (message.queryInfo) {
        this.lastQuery = message.queryInfo.query_used;
        this.lastResponse = message.data;
        // Update context based on the parameters used
        if (message.queryInfo.parameters) {
          Object.entries(message.queryInfo.parameters).forEach(([key, value]) => {
            this.context[key] = value;
          });
        }
      }
    }
  }

  extractTopics(message) {
    // Simple implementation - in production you might use NLP services
    const lowerMessage = message.toLowerCase();
    // Check for key topics relevant to your data domain
    const potentialTopics = ['sales', 'revenue', 'customers', 'products', 'regions'];
    potentialTopics.forEach(topic => {
      if (lowerMessage.includes(topic)) {
        // Increase the relevance score for this topic
        const currentScore = this.topics.get(topic) || 0;
        this.topics.set(topic, currentScore + 1);
      }
    });
  }

  getRelevantContext(message) {
    // Determine what context is relevant to the current message
    const context = { ...this.context };
    // Decay or remove irrelevant context based on the new message -
    // this is highly domain-specific
    return context;
  }
}
```
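A sketch of how this state manager might be used, assuming one instance per chat session:

```javascript
// One ConversationState per chat session
const state = new ConversationState();

function onUserMessage(text) {
  state.updateFromMessage(text, true);
  // Attach relevant context so the backend can disambiguate follow-ups
  return { question: text, context: state.getRelevantContext(text) };
}

function onAssistantMessage(message) {
  // message is the assistant response object, optionally carrying queryInfo
  state.updateFromMessage(message, false);
}
```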
Deployment Considerations
Scaling Your Chatbot
As usage grows, consider these scaling strategies:
- Load Balancing: Distribute API calls across multiple backend instances.
- Caching: Cache common queries to reduce API calls and improve response times (see the sketch after this list).
- Queue Processing: Use a message queue to handle high volumes of requests.
- Serverless Functions: Deploy as serverless functions that scale automatically with demand.
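Caching can be as simple as keying on the normalized question text. A minimal in-memory sketch (assuming express.json() body parsing is applied; production systems would typically use Redis or another shared store):

```javascript
// In-memory response cache for the /api/chat route with a simple TTL
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // 5 minutes

function cacheMiddleware(req, res, next) {
  const key = (req.body.question || '').trim().toLowerCase();
  const hit = cache.get(key);
  if (hit && Date.now() - hit.timestamp < TTL_MS) {
    return res.json(hit.payload); // Serve the cached answer
  }
  // Wrap res.json so outgoing payloads are stored on the way out
  const originalJson = res.json.bind(res);
  res.json = (payload) => {
    cache.set(key, { payload, timestamp: Date.now() });
    return originalJson(payload);
  };
  next();
}

// Usage: router.post('/api/chat', cacheMiddleware, chatHandler);
```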
Monitoring and Analytics
Track your chatbot’s performance:
- Usage Metrics: Number of questions, unique users, peak usage times
- Performance Metrics: Response times, API call success rates
- Content Metrics: Most common questions, unanswered questions
- User Satisfaction Metrics: Feedback ratings, abandonment rates
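A minimal sketch of collecting these metrics at the request level in Express; real deployments would typically export them to a system such as Prometheus, Datadog, or CloudWatch:

```javascript
// Log one structured line per chat request: what was asked, whether it
// succeeded, and how long it took
function metricsMiddleware(req, res, next) {
  const start = Date.now();
  res.on('finish', () => {
    console.log(JSON.stringify({
      question: req.body?.question,   // Content metrics
      status: res.statusCode,         // Success rate
      durationMs: Date.now() - start, // Response time
      timestamp: new Date().toISOString()
    }));
  });
  next();
}
```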
Best Practices
- Clear Scope: Define what types of questions your chatbot should answer.
- Fallback Strategies: Have graceful responses for questions outside your query coverage.
- Progressive Enhancement: Start simple and add complexity as you learn user patterns.
- Continuous Improvement: Regularly analyze unanswerable questions to identify new queries to create.
- User Feedback Loop: Allow users to flag incorrect or unhelpful answers.
- Test with Real Users: Validate with actual users rather than just hypothetical questions.
Common Challenges and Solutions
Challenge: Users ask questions that could match multiple queries or have unclear parameters.
Solution: Implement a clarification flow where the chatbot asks follow-up questions when a question is ambiguous. For example:
```javascript
if (matchingQueries.length > 1 && matchingQueries[0].confidence < 0.8) {
  return {
    needsClarification: true,
    clarificationQuestion: `I'm not sure exactly what you're asking about. Are you interested in ${matchingQueries.map(q => q.topic).join(' or ')}?`,
    matchingQueries
  };
}
```
Challenge: Keeping track of context across multiple turns of conversation.
Solution: Implement a context management system that tracks entities, topics, and previous queries. Use this context to enhance subsequent queries.
```javascript
// Update context based on the current query.
// extractEntities is a domain-specific helper that pulls named entities
// out of the query parameters.
function updateContext(context, query, parameters) {
  return {
    ...context,
    lastQuery: query,
    entities: {
      ...context.entities,
      ...extractEntities(parameters)
    },
    timestamp: Date.now()
  };
}

// Apply context to a new question
function applyContext(question, context) {
  // If the question includes pronouns like "it", "them", or "those",
  // try to resolve them using context
  if (/\b(it|them|those|that|these)\b/i.test(question)) {
    if (context.entities?.category) {
      // Replace the pronoun with the actual entity from context
      return question.replace(
        /\b(it|them|those|that|these)\b/i,
        context.entities.category
      );
    }
  }
  return question;
}
```
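A sketch of how these helpers fit together on the client, resolving pronouns before each request and updating context from each response (this assumes the response echoes query_used and parameters, as in the earlier examples):

```javascript
let context = { entities: {} };

async function ask(question) {
  // "what about it?" -> "what about electronics?" when context allows
  const resolved = applyContext(question, context);
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question: resolved })
  });
  const data = await res.json();
  if (data.query_used) {
    context = updateContext(context, data.query_used, data.parameters);
  }
  return data;
}
```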
Challenge: Users expect the chatbot to answer any question, regardless of whether the data supports it.
Solution: Set clear expectations and provide informative responses about the chatbot’s capabilities:
```javascript
function handleUnanswerable(question) {
  // Check if the question is about a topic we don't have data for
  const unsupportedTopics = ['employee salaries', 'future predictions', 'competitor data'];
  for (const topic of unsupportedTopics) {
    if (question.toLowerCase().includes(topic)) {
      return `I don't have information about ${topic}. I can help you with questions about our sales, customers, products, and regional performance.`;
    }
  }
  // General fallback
  return "I don't have enough information to answer that question. I can help you with questions about our business data such as sales figures, customer metrics, and product performance.";
}
```
Example Use Cases
- Customer Service Bot: Answer questions about order status, shipping times, and product availability.
- Internal Analytics Assistant: Help employees explore business metrics without needing SQL knowledge.
- Sales Dashboard Companion: Provide conversational access to sales performance data.
- Product Recommendation Bot: Recommend products based on customer data and preferences.
Next Steps
After building your chatbot, consider:
- Implementing user feedback mechanisms to improve your queries
- Adding authentication and authorization for internal chatbots
- Exploring other integration patterns like dashboards or search interfaces
- Setting up performance monitoring to ensure responsiveness