API Reference

G6Solver turns LLMs into reliable problem solvers. This guide helps you integrate, configure, and tailor G6 to your needs.


Introduction


N.B. This guide is intended for developers or superusers who wish to directly access the underlying infrastructure that powers G6 for personal use. If you are a developer wishing to incorporate G6 into your own products and resell it under your own customised SLA or white label, please contact us.


The G6 desktop application is built around a local server that exposes endpoints for programmatic access to the underlying core solving engine. The user interface is a desktop application rendered in the browser and served by a separate server that communicates with the AI backend. This design enforces modularity and separation of concerns, improves UI stability and supportability, and lets us outsource frontend upgrades and maintenance over time, since frontend design is not our core expertise. To find the specific ports running on your device, go to your desktop application > settings > developers.

System Architecture

G6Solver's desktop application features a sophisticated dual-agent architecture that separates work execution from user interaction, enabling efficient and intuitive AI collaboration.

Dual-Agent Architecture

  • User Interface Layer: Chat Agent (user interaction interface) providing real-time communication, progress monitoring, and user feedback
  • Processing Layer: AI Agent (work-performing component) handling task execution, problem solving, and code generation

Architecturally speaking, there are three main components: the Memory System (stored locally); the Reasoning Engine (run both locally and via an external server); and the API Integration (run entirely via an external server). This describes where information is physically stored and processed in G6.

  • Memory System: persistent storage, context retention, learning history
  • Reasoning Engine: hybrid logic, program synthesis, quality assessment
  • API Integration: OpenRouter, local Ollama, backend selection

Component Roles & Interaction

Understanding how G6Solver's dual-agent architecture enables seamless human-AI collaboration

AI Agent

Work-Performing Component

The AI Agent is the core work-performing component of G6Solver's desktop application. It operates as an autonomous cognitive system that executes complex tasks, solves problems, and generates solutions with minimal human intervention.

Primary Responsibilities
  • Autonomous Task Execution: performs complex computational tasks and problem-solving operations independently
  • Advanced Reasoning: applies hybrid reasoning combining symbolic logic with neural approaches
  • Code Generation & Synthesis: dynamically generates, executes, and refines code to solve complex problems
  • Quality Assessment: evaluates output quality with mathematical precision and confidence metrics
  • Continuous Learning: adapts and improves performance based on experience and feedback

Chat Agent

User Interaction Interface

The Chat Agent serves as the intelligent interface between users and the AI Agent's work. It manages communication, provides real-time updates, and ensures users stay informed about the AI Agent's progress and results.

Primary Responsibilities
  • Real-time Communication: facilitates seamless conversation between users and the AI system
  • Progress Monitoring: provides live updates on the AI Agent's task execution and completion status
  • Feedback Collection: gathers user input and preferences to guide AI Agent behavior
  • Result Presentation: formats and presents AI Agent outputs in user-friendly formats
  • Configuration Management: allows users to adjust system settings and operational parameters

How the Agents Work Together

Although having two simultaneously operating agents creates some technical complexity around ensuring memory safety, we address this with true multiprocessing execution and ACID-compliant operations. The dual-agent architecture creates a powerful synergy in which specialized components work in harmony to deliver exceptional AI-human collaboration.

1. User Input Processing: the Chat Agent receives and interprets user requests, translating them into actionable tasks for the AI Agent
2. Task Execution: the AI Agent performs the actual work while the Chat Agent monitors progress and provides real-time updates
3. Result Delivery: the Chat Agent formats and presents the AI Agent's results, enabling user feedback and iteration

Benefits of Separation
  • Specialized Focus: each agent optimizes for its specific role without compromise
  • Enhanced Performance: parallel processing enables faster response times and better resource utilization
  • Improved Maintainability: modular design allows independent updates and optimizations
  • Better User Experience: a dedicated interface agent ensures consistent and intuitive interactions

Output Quality & Metrics


Every G6Solver output includes comprehensive quality metrics to ensure transparency, reliability, and trust in AI-generated results.

Transparent Quality Assurance

G6Solver provides rigorous mathematical foundations with comprehensive telemetry and quality metrics. Each system output includes detailed quality assessments to help you make informed decisions.

Our quality metrics system evaluates four key dimensions of output quality, combining them into a composite utility score for easy interpretation.

  • 🎯 Real-time quality assessment
  • 📊 Mathematical precision scoring
  • 🔍 Transparent decision-making
  • ⚡ Continuous quality monitoring
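As a rough illustration of how four [0, 1] quality dimensions might be folded into a single composite utility score, here is a minimal sketch. The metric names and the weights are assumptions for illustration only; they are not G6Solver's actual formula.

```python
def composite_utility(accuracy: float, confidence: float,
                      completeness: float, consistency: float,
                      weights: tuple = (0.4, 0.3, 0.2, 0.1)) -> float:
    """Combine four [0, 1] quality metrics into one weighted composite score."""
    metrics = (accuracy, confidence, completeness, consistency)
    if not all(0.0 <= m <= 1.0 for m in metrics):
        raise ValueError("each metric must lie in [0, 1]")
    # Weighted average; weights sum to 1 so the result also lies in [0, 1]
    return sum(w * m for w, m in zip(weights, metrics))
```

A weighted average keeps the composite score on the same [0, 1] scale as the individual metrics, which makes threshold-based filtering straightforward.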

Demo Version

Basic Quality Metrics

  • ✓ Basic accuracy scoring
  • ✓ Confidence ratings
  • ✓ Simple quality indicators
  • ✓ Real-time assessment

Desktop Version

Advanced Quality Analytics

  • ✓ Comprehensive 4-metric analysis
  • ✓ Historical quality tracking
  • ✓ Quality trend analysis
  • ✓ Custom quality thresholds
  • ✓ Quality-based filtering
  • ✓ Detailed quality reports

API Overview


The G6Solver desktop application runs a comprehensive FastAPI server locally on your machine. This API ecosystem provides complete control over AI agent management, token tracking, security, and device registration.

All services communicate through well-defined REST endpoints with robust authentication and error handling. Interactive documentation is available by running the G6 desktop application, opening the localhost port in your browser (see system > settings > developers), and navigating to the /docs endpoint, where SwaggerUI provides a GUI for browsing the documentation and calling the API directly.


The API may change over time. We will post updates on this page regarding stable changes and versioning; for now, we intend to keep these core API endpoints as stable as possible. We have also included a brief overview of FastAPI below to give developers context on what FastAPI is and how it works.

Key Features

  • Project & Session Management: Create projects, upload resources, manage AI agent sessions
  • Token Tracking: Real-time monitoring of AI token usage and cost management
  • API Key Security: Argon2-based key management with rate limiting and lockout protection
  • Device Registration: Hardware fingerprinting for license compliance and security
  • Local Development: Complete local server with configurable ports and offline capability

Endpoints

🔹 Project & Session
  • POST /projects → Create project
  • GET /projects → List projects
  • POST /projects/{id}/resources → Upload resource
  • POST /sessions → Start session
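As a quick sketch of driving these endpoints from Python, the snippet below creates a project and starts a session against the local server. The JSON payload shapes and the port are assumptions; check the live /docs page (settings > developers) for the exact schemas.

```python
import requests

# Base URL for the local G6 server; the actual port is shown in your
# desktop application under settings > developers.
BASE_URL = "http://127.0.0.1:8000"

def endpoint(path: str) -> str:
    """Build a full URL for a G6 API path."""
    return f"{BASE_URL}{path}"

def create_project(name: str) -> dict:
    # POST /projects -> Create project (payload shape is an assumption)
    resp = requests.post(endpoint("/projects"), json={"name": name})
    resp.raise_for_status()
    return resp.json()

def start_session(project_id: str) -> dict:
    # POST /sessions -> Start session tied to a project
    resp = requests.post(endpoint("/sessions"), json={"project_id": project_id})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    project = create_project("demo-project")
    session = start_session(project["id"])
```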

🔹 Token Tracking
  • GET /sessions/{id}/usage → Check usage
  • POST /usage/report → Report tokens
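For cost management, a client might poll the usage endpoint and gate further work on a token budget. This is a hedged sketch: the response field names and the port are assumptions, not the documented schema.

```python
import requests

BASE_URL = "http://127.0.0.1:8000"  # port shown under settings > developers

def check_usage(session_id: str) -> dict:
    # GET /sessions/{id}/usage -> Check usage (response fields are assumptions)
    resp = requests.get(f"{BASE_URL}/sessions/{session_id}/usage")
    resp.raise_for_status()
    return resp.json()

def within_budget(prompt_tokens: int, completion_tokens: int,
                  budget_tokens: int) -> bool:
    """Return True while total token usage stays inside the session budget."""
    return prompt_tokens + completion_tokens <= budget_tokens

if __name__ == "__main__":
    usage = check_usage("my-session-id")
    ok = within_budget(usage.get("prompt_tokens", 0),
                       usage.get("completion_tokens", 0), 100_000)
    print(usage, ok)
```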

🔹 API Key Security
  • POST /apikey/validate → Validate API key (Argon2 + lockout)

🔹 Device Registration
  • POST /devices/register → Register device fingerprint
  • GET /devices/{user_id} → List devices
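To illustrate the registration flow, the sketch below derives a stable fingerprint from basic platform facts and posts it. The hashing scheme and payload fields are illustrative assumptions; G6's actual fingerprinting method is not documented here.

```python
import hashlib
import platform

import requests

BASE_URL = "http://127.0.0.1:8000"  # port shown under settings > developers

def device_fingerprint() -> str:
    """Derive a stable hex fingerprint from basic platform facts.

    Illustrative only: G6's real fingerprinting scheme may differ.
    """
    raw = "|".join([platform.node(), platform.system(),
                    platform.machine(), platform.processor()])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def register_device(user_id: str) -> dict:
    # POST /devices/register -> Register device fingerprint (payload assumed)
    payload = {"user_id": user_id, "fingerprint": device_fingerprint()}
    resp = requests.post(f"{BASE_URL}/devices/register", json=payload)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(register_device("user-123"))
```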

🔹 Local Development
  • GET /config → Get server config
  • POST /config → Update config

🔹 Chat Agent
  • POST /chat/send → Send message
  • POST /chat/sendfile → Send file
  • GET /chat/{id}/history → Get chat history
  • GET /chat/{id}/receive → Poll agent reply
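Since replies are polled rather than pushed, a client typically sends a message and then polls /chat/{id}/receive until a reply arrives or a timeout elapses. The sketch below factors the polling loop out so it works with any fetcher; the payload shapes and port are assumptions.

```python
import time

import requests

BASE_URL = "http://127.0.0.1:8000"  # port shown under settings > developers

def fetch_reply(chat_id: str):
    """One poll of GET /chat/{id}/receive; returns the JSON reply or None."""
    resp = requests.get(f"{BASE_URL}/chat/{chat_id}/receive")
    return resp.json() if resp.status_code == 200 and resp.content else None

def poll_reply(fetch, timeout_s: float = 30.0, interval_s: float = 1.0):
    """Call fetch() until it yields a non-empty reply or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while True:
        reply = fetch()
        if reply:
            return reply
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval_s)

if __name__ == "__main__":
    requests.post(f"{BASE_URL}/chat/send",
                  json={"chat_id": "chat-1", "message": "Solve task X"})
    print(poll_reply(lambda: fetch_reply("chat-1")))
```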

🔹 Worker Agents
  • POST /workers/start → Start worker agent
  • POST /workers/stop → Stop worker agent
  • GET /workers/{id} → List workers for a session
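Because every started worker should eventually be stopped, a context manager is a natural fit: it guarantees the stop call even if the work in between raises. The request/response field names below are assumptions; consult the live /docs page for the real schemas.

```python
import contextlib

import requests

BASE_URL = "http://127.0.0.1:8000"  # port shown under settings > developers

@contextlib.contextmanager
def managed_worker(start, stop):
    """Obtain a worker id from start(), yield it, and always call stop(id),
    even if the body raises. start/stop wrap the start/stop endpoints."""
    worker_id = start()
    try:
        yield worker_id
    finally:
        stop(worker_id)

def start_worker(session_id: str) -> str:
    # POST /workers/start -> Start worker agent (response field is assumed)
    resp = requests.post(f"{BASE_URL}/workers/start",
                         json={"session_id": session_id})
    resp.raise_for_status()
    return resp.json()["worker_id"]

def stop_worker(worker_id: str) -> None:
    # POST /workers/stop -> Stop worker agent
    requests.post(f"{BASE_URL}/workers/stop", json={"worker_id": worker_id})

if __name__ == "__main__":
    with managed_worker(lambda: start_worker("session-1"), stop_worker) as wid:
        print("worker running:", wid)
```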

FastAPI for Developers

For developers who are not familiar with FastAPI or the associated ecosystem, we've provided a brief overview of its main features to help with understanding how the G6 API endpoints work and how to interface with them.

FastAPI is a modern, high-performance web framework for building APIs with Python. It is designed with ease of use, speed, and robustness in mind. Here's why developers gravitate toward FastAPI:

  • Type Hints: FastAPI uses Python type hints for data validation, meaning you get automatic, robust data checking, clear documentation, and editor support (like autocompletion).
  • Performance: It's built on top of Starlette and Pydantic, and it's one of the fastest Python web frameworks available.
  • Asynchronous: Async-first design lets you build highly concurrent APIs out of the box.
  • Automatic Documentation: Every FastAPI project gets interactive API docs by default, powered by OpenAPI (SwaggerUI).

Example: A Minimal FastAPI App

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def read_root():
    return {"message": "Hello World"}

You run this with a server such as Uvicorn, for example:

uvicorn main:app --reload

  • The API will be live at: http://127.0.0.1:8000
  • Docs automatically available at: http://127.0.0.1:8000/docs

Using SwaggerUI with FastAPI

The G6Solver API uses SwaggerUI to provide a comprehensive documentation and testing interface for the API endpoints. It is available by running the G6Solver desktop application, opening the localhost port in your browser (see system > settings > developers), and navigating to the /docs endpoint.

SwaggerUI is an interactive web interface that lets you view, test, and interact with your API endpoints in real-time, directly from the browser. FastAPI automatically provides SwaggerUI at /docs for any app you build.

We strongly recommend that developers build on top of our API using FastAPI/SwaggerUI, especially if you intend to share or open-source your code.

Here is what this means in practice:

  • Self-Documenting APIs: As soon as you write your endpoints (using Python type hints), FastAPI creates an OpenAPI (Swagger) spec. This is used to generate interactive documentation.
  • Interactive Testing: You (or anyone using your API) can execute requests, try parameters, and see real-time responses via a web browser, making development, debugging, and onboarding easier.
  • Always Up to Date: Your documentation always matches your code; no manual doc sync needed.


Summary Table

| Feature        | FastAPI                        | SwaggerUI in FastAPI                     |
|----------------|--------------------------------|------------------------------------------|
| Main use       | Web/API backend framework      | Interactive API docs/testing             |
| How integrated | Native (just define endpoints) | /docs endpoint is auto-generated         |
| Benefits       | Type-safe, async, very fast    | Try every API call in browser, see docs  |