Quickstart
From zero to querying your data in 5 minutes.
1. Start the graph database
```shell
docker compose up -d
```

This starts Apache Fuseki on port 3030. Data persists in a Docker volume.
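The repository ships its own compose file; as a rough sketch of what a Fuseki service of this shape typically looks like (the image name and volume layout here are assumptions for illustration, not copied from this project):

```yaml
services:
  fuseki:
    image: stain/jena-fuseki     # assumed image; the project's compose file may pin a different one
    ports:
      - "3030:3030"              # Fuseki's default HTTP port
    volumes:
      - fuseki-data:/fuseki      # named volume so the dataset survives container restarts

volumes:
  fuseki-data:
```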
2. Install Cograph
```shell
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

3. Configure
```shell
cp .env.example .env
# Edit .env and add your OpenRouter API key:
# OPENROUTER_API_KEY=sk-or-...
```

4. Start the API server
```shell
source .env
uvicorn omnix.api.app:create_app --factory --port 8000
```

5. Ingest a CSV
```shell
omnix ingest your-data.csv --kg my-dataset
```

A single LLM call infers the schema from your column headers and sample rows. All rows are then mapped deterministically, with no LLM call per row.
6. Ask questions
```shell
omnix ask "How many records are there?" --kg my-dataset
omnix ask "Which category has the highest average price?" --kg my-dataset
```

What happens under the hood
Ingestion: The CSV columns are analyzed by an LLM to determine which are attributes (numbers, dates, booleans) and which are relationships to other entities (cities, companies, categories). A typed ontology is created automatically. Each row becomes an entity with typed triples stored in the graph database.
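The deterministic mapping step can be pictured as a pure function from the inferred schema plus one row to a list of triples. A minimal sketch, assuming a hypothetical schema format and URI scheme (these names are invented for illustration, not Cograph's actual internals):

```python
# Hypothetical sketch of deterministic row-to-triple mapping.
# The schema dict stands in for the LLM-inferred ontology; the
# column kinds and URI layout are assumptions for illustration.

def row_to_triples(row: dict, schema: dict, base: str = "http://example.org/"):
    """Map one CSV row to typed triples using a pre-inferred schema."""
    subject = f"{base}entity/{row[schema['id_column']]}"
    triples = []
    for col, kind in schema["columns"].items():
        value = row[col]
        if kind == "attribute":        # literal value: number, date, boolean
            triples.append((subject, f"{base}attr/{col}", value))
        elif kind == "relationship":   # link to another entity
            triples.append((subject, f"{base}rel/{col}", f"{base}entity/{value}"))
    return triples

schema = {"id_column": "sku", "columns": {"price": "attribute", "category": "relationship"}}
row = {"sku": "A1", "price": "9.99", "category": "books"}
print(row_to_triples(row, schema))
```

The point of the design is that the expensive, nondeterministic work (schema inference) happens once, while per-row mapping is a cheap table lookup.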
Querying: Your natural language question is translated to SPARQL using the ontology as context. The query runs against the graph database and results are formatted as a human-readable answer.
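For instance, the first question above might plausibly translate into a SPARQL aggregate like the following (the class URI is an assumption; the real query depends on the ontology inferred at ingest time):

```sparql
# Hypothetical translation of "How many records are there?"
SELECT (COUNT(?entity) AS ?count)
WHERE {
  ?entity a <http://example.org/class/Record> .
}
```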
Next steps
- API Reference — use the REST API directly
- MCP Server — connect to Claude, Cursor, or other AI agents