The Ozgar AI platform supports both on-premises and cloud deployments, packaging its core functionality as containerized microservices for modularity and isolation.
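For illustration, a single microservice in such a deployment might look like the minimal sketch below; the service name, port, and health endpoint are hypothetical, not taken from Ozgar's actual codebase.

```python
# Hypothetical sketch of one containerized Ozgar-style microservice: a
# minimal HTTP service exposing a health endpoint. The service name,
# port, and endpoint path are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"service": "ingestion", "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Bind to all interfaces so the container's port mapping works.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```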
Connectors feed system objects, source code, and database catalogs into an Ingestion Service, which normalizes, annotates, and enriches every artifact and writes it to three complementary stores (a fan-out sketched after this list):
- A Knowledge Graph of entities and relationships
- A Semantic Vector Index for retrieval-augmented generation (RAG)
- A Full-Text Index for traditional keyword search
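As a rough illustration of this fan-out, the sketch below enriches an artifact once and writes it to all three stores. The `Artifact` shape and the store interfaces (`graph_store`, `vector_index`, `text_index`, `embed`) are hypothetical names for illustration, not Ozgar's actual API.

```python
# Hypothetical sketch of the Ingestion Service fan-out: each normalized
# artifact is written to all three stores. Store interfaces and field
# names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    id: str
    kind: str            # e.g. "program", "table", "screen"
    text: str            # normalized source text or DDL
    relations: list = field(default_factory=list)  # (relation, target_id) pairs

def ingest(artifact: Artifact, graph_store, vector_index, text_index, embed):
    # 1. Knowledge Graph: entity node plus its typed relationships.
    graph_store.add_node(artifact.id, kind=artifact.kind)
    for relation, target in artifact.relations:
        graph_store.add_edge(artifact.id, target, relation)
    # 2. Semantic Vector Index: embedding for RAG-style retrieval.
    vector_index.upsert(artifact.id, embed(artifact.text))
    # 3. Full-Text Index: raw text for keyword search.
    text_index.add(artifact.id, artifact.text)
```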
A contextual retrieval layer selects the most relevant fragments and feeds them into the AI Reasoning Engine (sketched after the list below). This engine powers key services:
- AI-Driven Chatbot (“Ask Ozgar”) for natural-language queries across code and data
- Automated Documentation that refreshes with every code or schema update
- Interactive Visualizations (functionality areas, ER diagrams, program call-graphs, state flow diagrams, screen flow diagrams)
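The retrieval step could be approximated as a hybrid lookup that merges candidates from the vector and full-text indexes and trims them to a context budget, as in the sketch below. All interfaces, scores, and parameters are illustrative assumptions.

```python
# Hypothetical sketch of the contextual retrieval layer: merge candidates
# from the vector and full-text indexes, dedupe by fragment id, and keep
# the top fragments that fit the reasoning engine's context budget.
def retrieve_context(query, vector_index, text_index, embed,
                     k: int = 20, budget_chars: int = 8000):
    # Candidates from semantic (RAG) and keyword search, best score wins.
    candidates = {}
    for frag_id, score in vector_index.search(embed(query), k=k):
        candidates[frag_id] = max(candidates.get(frag_id, 0.0), score)
    for frag_id, score in text_index.search(query, k=k):
        candidates[frag_id] = max(candidates.get(frag_id, 0.0), score)
    # Highest-scoring fragments first, truncated to the context budget.
    context, used = [], 0
    for frag_id, _ in sorted(candidates.items(), key=lambda kv: -kv[1]):
        text = text_index.get(frag_id)
        if used + len(text) > budget_chars:
            break
        context.append(text)
        used += len(text)
    return context
```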
Large Language Model (LLM) instances run separately, on-premises or in the cloud, to deliver high throughput, low latency, and enterprise-grade security.
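Because the model runs behind its own endpoint, callers need only a configurable URL, so the same client works for on-premises and cloud deployments alike. The sketch below shows one hypothetical HTTP client; the endpoint address and request schema are assumptions for illustration.

```python
# Hypothetical sketch of calling a separately deployed LLM instance over
# HTTP. The endpoint URL and request/response fields are illustrative
# assumptions, not a documented Ozgar API.
import json
import urllib.request

def complete(prompt: str,
             endpoint: str = "http://llm.internal:8000/v1/complete",
             timeout: float = 30.0) -> str:
    payload = json.dumps({"prompt": prompt, "max_tokens": 512}).encode()
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["text"]
```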