
Documentation Hub

Welcome to the _codex_ documentation hub: comprehensive documentation for the ML/AI platform with autonomous agent orchestration.

Last Updated: 2026-03-31


🧠 Cognitive Brain (Start Here for AI Agents)

Unified Navigation System

  • πŸ—ΊοΈ Cognitive Map - Complete architecture, components, flows, dependencies
  • πŸ“Š Dashboard - Live status, current work, blockers, metrics
  • 🎯 Roadmap - Iteration plans, priorities, future scope

Why This Matters

The cognitive brain enables:

  • Context Continuity: AI agents maintain understanding across sessions
  • Efficient Navigation: quick discovery of components, entry points, relationships
  • Duration-Aware Planning: maximize work within token/time budgets
  • Best Path Forward: always know the next most valuable task
  • Autonomous Operation: self-directed agents without constant human guidance


🚨 CI Rescue & Health

  • 🔄 CI Rescue Pipeline — Golden-path documentation: how workflow failures automatically trigger Copilot sessions. Includes Mermaid flowcharts, sequence diagrams, deduplication state machine, anti-pattern map. (S244 — 2026-03-30)
  • 📋 CI/CD Index — All CI failure analysis, fix summaries, and validation reports

Core Documentation

MCP Package System (93+ KB Documentation)

Capability Guides


🧭 Orientation Pillars

Reasoning Pod Deployment

Refer to deployment/reasoning_pod.md and configs/deploy/reasoning_pod.yaml for dry-run deployment guidance. These assets are designed for offline validation and do not require hosted services.


  • Reasoning templates in the CLI — codex reasoning-templates list surfaces curated training/eval bundles. See the codex_cli help for command details.
  • End-to-end quickstart — Follow quickstart.md with the +reasoning=baseline overrides highlighted in README_ROOT.md.
  • Evaluation ledger — Use guides/reasoning_overview.md to configure NDJSON metrics pipelines.
  • Deployment guardrails — Cross-check bespoke model expectations against guides/serving_reproducibility.md.
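The evaluation ledger above is NDJSON-based: one JSON object per line, appended as metrics arrive. A minimal sketch of that pattern, assuming an illustrative local `eval_metrics.ndjson` path and illustrative field names (neither is the ledger's actual schema, which guides/reasoning_overview.md defines):

```python
import json
from pathlib import Path

# Illustrative ledger path; the real pipeline's location and schema may differ.
LEDGER = Path("eval_metrics.ndjson")
LEDGER.unlink(missing_ok=True)  # start fresh for this demo

def append_metric(record: dict) -> None:
    """Append one metric record as a single JSON line (the NDJSON contract)."""
    with LEDGER.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

def read_metrics() -> list[dict]:
    """Read the ledger back; every non-empty line is one JSON object."""
    if not LEDGER.exists():
        return []
    with LEDGER.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

append_metric({"run": "reasoning-starter", "step": 100, "accuracy": 0.82})
append_metric({"run": "reasoning-starter", "step": 200, "accuracy": 0.87})
print(len(read_metrics()))  # → 2
```

Appending a full line per record keeps the ledger crash-tolerant: a truncated final line is skipped on read without invalidating earlier records.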

📋 Operational Templates

Operational templates encode recurring delivery rituals so teams can execute migrations, hardening passes, and planning checkpoints with consistent safeguards. Begin with the Operational Templates index to review prerequisites, required metadata, and cross-references before copying a template into your service.

When to Use a Template

  • You are planning a migration or hardening effort that will cross team boundaries.
  • You need an auditable checklist with rollback, communications, and verification steps.
  • You want a consistent structure for maintaining ≥85% coverage through scoped test additions.
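The ≥85% bar can be checked mechanically from a Cobertura-style XML report, which is what coverage.py's `coverage xml` command emits. A minimal sketch, reading the report's root `line-rate` attribute; the threshold helper and inline sample report are illustrative, not part of the template tooling:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for a Cobertura-style coverage.xml report.
COVERAGE_XML = '<coverage line-rate="0.87" branch-rate="0.79"></coverage>'

def line_coverage_pct(xml_text: str) -> float:
    """Read the root line-rate attribute and convert it to a percentage."""
    root = ET.fromstring(xml_text)
    return float(root.get("line-rate", "0")) * 100

def meets_standard(xml_text: str, threshold: float = 85.0) -> bool:
    """True when line coverage clears the threshold."""
    return line_coverage_pct(xml_text) >= threshold

print(meets_standard(COVERAGE_XML))  # → True
```

A check like this can gate the template's verification step so the coverage criterion is enforced rather than eyeballed.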

Handoff Checklist

Each template includes role guidance (developers draft → maintainers execute), [PLACEHOLDER: …] prompts, and success criteria aligned with the coverage standard. Ensure the following before requesting review:

  1. All placeholders are replaced with repo-specific context and linked artifacts.
  2. Rollback and communication steps point to real runbooks or dashboards.
  3. The template is stored alongside the service codebase (usually under docs/) and linked from the change description or PR.

See docs/CONTRIBUTING.md for the full drafting workflow and role expectations.


📊 Phase 8.7: Universal Intelligence

Complete meta-learning framework with 170 tests.

Components:

  • Universal Task Interface
  • Meta-Policy Router (MAML/Reptile)
  • Abstraction Engine
  • Grounding Layer
  • Pattern Store
  • Safety Monitor
  • EXP-10 Validation

Quick Example

```python
from github.agents.core.universal_intelligence import UniversalTaskInterface, TaskSpec

# Describe the task: environment, start state, reward spec, and termination rule.
spec = TaskSpec(
    environment="gridworld",
    initial_state={"x": 0, "y": 0, "goal": {"x": 5, "y": 5}},
    reward_spec={"id": "reward:v1"},
    termination={"max_steps": 100},
)

# Seed the interface for reproducible execution, then run the task.
uti = UniversalTaskInterface(seed=12345)
result = uti.execute_task(spec)
```

βš™οΈ Installation

```shell
pip install -e .
```

🚢 Deployment and Operational Expectations

To generate and review a deployment manifest for a bespoke reasoning agent, run a dry-run deploy:

```shell
codex deploy \
  --config configs/deploy/reasoning_pod.yaml \
  --model artifacts/runs/reasoning-starter:last \
  --dry-run
```

This renders the "reasoning pod" manifest for inspection. It does not create or update any live service. See deployment/reasoning_pod.md for what that pod is expected to look like (resources, telemetry, trace capture mode, curriculum phase, etc.).
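A rendered manifest can also be reviewed mechanically before anything ships. A minimal sketch, assuming the YAML has already been parsed into a dict; the field names mirror the attributes mentioned above (resources, telemetry, trace capture mode, curriculum phase) but are illustrative, not the pod's actual schema:

```python
# Illustrative field names; the real schema is described in
# deployment/reasoning_pod.md and may differ.
REQUIRED_FIELDS = ("resources", "telemetry", "trace_capture_mode", "curriculum_phase")

def missing_fields(manifest: dict) -> list[str]:
    """Return the required fields absent from a rendered manifest dict."""
    return [field for field in REQUIRED_FIELDS if field not in manifest]

rendered = {
    "resources": {"cpu": "2", "memory": "4Gi"},
    "telemetry": {"enabled": True},
    "trace_capture_mode": "full",
}
print(missing_fields(rendered))  # → ['curriculum_phase']
```

Running a check like this against the dry-run output catches incomplete manifests before a real deploy is ever attempted.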

Rollout Rings

This repository uses staged rollout rings to represent maturity and review state:

  • 0A_base_ / 0B_base_: active development, unstable knobs.
  • 0C_base_: integration of multiple features landing together.
  • 0D_base_: release candidate. Content here should be explainable to Engineering and Product.
  • main: canonical internal "alpha product" surface after approval.
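Tooling that needs a branch's maturity can resolve these ring prefixes mechanically. A minimal sketch, assuming branch names carry the prefix verbatim; the helper and its stage labels are illustrative shorthand for the descriptions above:

```python
# Ring prefixes as documented above; stage labels are shorthand.
RING_PREFIXES = {
    "0A_base_": "active development",
    "0B_base_": "active development",
    "0C_base_": "integration",
    "0D_base_": "release candidate",
}

def ring_for(branch: str) -> str:
    """Map a branch name to its rollout stage; main is the approved surface."""
    if branch == "main":
        return "alpha product"
    for prefix, stage in RING_PREFIXES.items():
        if branch.startswith(prefix):
            return stage
    return "unknown"

print(ring_for("0D_base_reasoning_pod"))  # → release candidate
```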

πŸ“ Conventions

  • Keep docs small and composable.
  • Use a single fenced diff block for proposed patches in prompts/guides.
  • Prefer citations to live repo files when referencing code or config.

  • Project audit ritual: see AUDIT_PROMPT.md
  • CHANGELOG practices follow "Keep a Changelog" with an Unreleased section at the top.
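Under "Keep a Changelog", the Unreleased section sits above every released version. A small check for that invariant might look like this (a sketch, assuming `##`-level version headings; the sample changelog is illustrative):

```python
# Illustrative changelog in "Keep a Changelog" form.
SAMPLE = """# Changelog

## [Unreleased]
- Pending entries

## [1.2.0] - 2026-02-01
- Shipped entries
"""

def has_unreleased_on_top(text: str) -> bool:
    """True when the first version heading is the Unreleased section."""
    headings = [line for line in text.splitlines() if line.startswith("## ")]
    return bool(headings) and headings[0].strip() == "## [Unreleased]"

print(has_unreleased_on_top(SAMPLE))  # → True
```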
