Key Concepts
Understanding the fundamental concepts of Infactory
Before diving into Infactory, it’s helpful to understand the key concepts behind the platform. This guide introduces the core components and how they work together.
The Infactory Platform Architecture
Infactory is built around a simple yet powerful workflow that transforms your structured data into intelligent API endpoints.
Components of Infactory
- Workshop: The central workspace where you connect data, build queries, and deploy APIs
- Projects: Organizational containers for related data sources and queries
- Data Connections: Links to your data sources with automatic schema analysis
- Queries: Python-based data processing code that answers specific questions
- Slots: Dynamic parameters in queries that make them reusable for similar questions
- Deployed APIs: Queries that have been published and made available via API endpoints
How Infactory Works
Understanding Infactory’s approach helps you get the most out of the platform.
1. Data Connection & Analysis
When you connect a data source, Infactory:
- Takes a small sample of your data (about 50 rows)
- Analyzes the schema to understand data types and relationships
- Generates a data model that powers intelligent query generation
- Does not copy or store your actual data
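For intuition, here is a minimal sketch of that sampling-and-inference step in plain Python. The `analyze_sample` helper, the column names, and the sample rows are purely illustrative assumptions, not part of the Infactory SDK or its actual analysis logic.

```python
# Illustrative sketch only: infer a simple schema from a small sample,
# mirroring the kind of analysis Infactory performs when you connect data.
from dataclasses import dataclass

@dataclass
class ColumnInfo:
    name: str
    dtype: str

def analyze_sample(rows: list[dict]) -> list[ColumnInfo]:
    """Infer column names and types from a small sample (~50 rows)."""
    if not rows:
        return []
    return [ColumnInfo(name=k, dtype=type(v).__name__) for k, v in rows[0].items()]

sample = [
    {"order_id": 1, "region": "EMEA", "amount": 120.5},
    {"order_id": 2, "region": "APAC", "amount": 89.0},
]
print(analyze_sample(sample))
# [ColumnInfo(name='order_id', dtype='int'), ColumnInfo(name='region', dtype='str'), ...]
```

The key point is that only a small sample is inspected to build the data model; your full dataset stays where it is.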
2. Intelligent Query Generation
Based on your data schema, Infactory:
- Automatically generates approximately 12 common query patterns
- Creates queries that can answer a wide range of questions
- Implements “slots” to make queries reusable for similar questions
- Allows you to create custom queries for specific needs
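To give a feel for what those generated patterns cover, the templates below are hypothetical examples for a simple sales table; the actual set Infactory generates depends entirely on your schema.

```python
# Hypothetical question templates an auto-generated query set might cover
# for a sales table; the real patterns depend on your data schema.
GENERATED_PATTERNS = [
    "Total {metric} by {dimension}",              # e.g. total amount by region
    "Top {n} {dimension} by {metric}",            # e.g. top 5 regions by amount
    "{metric} over time, grouped by {interval}",  # e.g. monthly amount trend
    "Records where {column} equals {value}",      # e.g. orders in EMEA
]

for pattern in GENERATED_PATTERNS:
    print(pattern)
```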
3. Query Generalization Through Slots
Slots are what make Infactory queries so powerful: a single query with slots can answer hundreds of related questions without you having to create each variant manually.
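Here is a minimal sketch, in plain Python, of what a slotted query could look like. The function name, slot names, and in-memory data are assumptions for illustration; real Infactory queries are generated against your own schema and execute against your database.

```python
# Hypothetical slotted query: one function answers many related questions.
# The slots (metric, region, year) are filled in per request; the names and
# data shape here are illustrative assumptions, not generated output.
from statistics import mean

SALES = [
    {"region": "EMEA", "year": 2023, "amount": 120.5},
    {"region": "EMEA", "year": 2024, "amount": 150.0},
    {"region": "APAC", "year": 2024, "amount": 89.0},
]

def sales_summary(metric: str, region: str, year: int) -> float:
    """Answer questions like 'What were total sales in EMEA in 2024?'
    or 'What was the average sale in APAC in 2024?' with one query."""
    values = [r["amount"] for r in SALES if r["region"] == region and r["year"] == year]
    if metric == "total":
        return sum(values)
    if metric == "average":
        return mean(values) if values else 0.0
    raise ValueError(f"Unsupported metric: {metric}")

print(sales_summary("total", "EMEA", 2024))    # 150.0
print(sales_summary("average", "APAC", 2024))  # 89.0
```

Filling the metric, region, and year slots with different values answers a different question each time, without writing a new query for every variant.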
4. Direct Execution Without LLMs
Unlike many AI solutions, Infactory queries:
- Execute directly against your database
- Don’t use an LLM for each query execution
- Return consistent, reliable results
- Perform at database speed, not AI inference speed
5. Unified API Gateway
When deployed, your queries become available through:
- Direct endpoints for specific queries with known parameters
- A unified endpoint that routes natural language questions to the right query
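As a rough sketch, calling the two styles of endpoint might look like the following. The base URL, paths, parameter names, and authentication header are placeholders for illustration, not the actual Infactory API.

```python
# Hypothetical calls to a deployed project; URLs, paths, and parameter
# names are placeholders for illustration only.
import requests

BASE_URL = "https://api.example.com/v1/projects/my-project"  # placeholder
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}          # placeholder

# Direct endpoint: a specific query with known parameters.
direct = requests.get(
    f"{BASE_URL}/queries/sales-summary",
    params={"metric": "total", "region": "EMEA", "year": 2024},
    headers=HEADERS,
)

# Unified endpoint: a natural-language question routed to the right query.
unified = requests.post(
    f"{BASE_URL}/ask",
    json={"question": "What were total sales in EMEA in 2024?"},
    headers=HEADERS,
)

print(direct.json(), unified.json())
```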
The Infactory Difference
Traditional AI-Powered Data Apps
Most approaches to building AI-powered data applications face these challenges:
- Unpredictable Results: LLMs can hallucinate or provide inconsistent answers
- Performance Issues: Every question requires a full LLM processing cycle
- Complex Implementation: Requires extensive prompt engineering and fine-tuning
- Governance Challenges: Difficult to control exactly what can and can’t be answered
The Infactory Approach
Infactory takes a fundamentally different approach:
- Deterministic Results: Queries execute directly against your data, ensuring consistent answers
- High Performance: No LLM inference required for query execution
- Simple Implementation: Auto-generated queries with minimal setup
- Clear Governance: Precise control over what questions can be answered
Key Benefits
- Simplified AI Development: Build AI-powered data applications without complex prompt engineering
- Consistency & Control: Get reliable, consistent answers to data questions
- Speed & Scalability: Process queries at database speed, not AI inference speed
- Security & Compliance: Keep your data secure in your database, with no external data storage
Next Steps
Now that you understand the key concepts behind Infactory, you’re ready to start building. Continue to our Quickstart guide to create your first project, or explore our Core Features to learn more about each component in detail.