Blazing

Architecture


Understand the Blazing Core Compose architecture and design principles

Blazing Core Compose uses a thin-client, smart-server architecture that separates local validation (client) from orchestration (server) for maximum reliability and consistency.

Design Principles

Server-Side Intelligence

Core Compose keeps all infrastructure decisions (placement, provisioning, pricing) on the server. This design choice provides several critical benefits:

  1. Crash Resilience: a CLI crash or disconnect does not interrupt server-side orchestration
  2. Consistency: Same placement algorithm for all clients (CLI, Web UI, mobile)
  3. Real-Time Data: Server has current availability, pricing, and quota information
  4. Atomic Operations: All resources created or none (no partial deployments)

Architecture Layers

Client Layer (Thin CLI)

The CLI is responsible for:

  • Parsing the Core Compose YAML specification
  • Running fast local validation (schema, features, controls, policy)
  • Showing deployment intent before apply
  • Submitting the validated spec to the Control Plane API
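The original specification example is missing from this page; a minimal Core Compose spec of the kind the CLI parses might look like the following (all field names are illustrative assumptions, not the actual schema):

```yaml
# Hypothetical Core Compose spec -- field names are illustrative
version: "1"
services:
  api:
    image: registry.example.com/api:1.4.2
    replicas: 3
    resources:
      cpu: 2
      memory: 4Gi
providers:
  preferred: [dfc, aws]   # a preference only; placement is decided server-side
controls:
  data_residency: eu      # checked locally, enforced server-side
```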

What the CLI Does NOT Do:

  • Make infrastructure placement decisions
  • Track provider availability
  • Handle resource provisioning
  • Manage state

Server Layer (Control Plane API)

The Control Plane API orchestrates all infrastructure: placement decisions, resource provisioning, network setup, service deployment, and post-deploy verification (see Deployment Flow below).

Provider Layer

Provider compilers translate Core Compose specifications into native formats:

  • DFC Compiler: Generates Akash SDL + Gateway configs
  • AWS Compiler: CloudFormation/Terraform for EKS
  • GCP Compiler: GKE manifests + Cloud resources
  • Azure Compiler: AKS + ARM templates
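All four compilers share one job: take a validated spec and emit a provider-native artifact. A minimal sketch of that shared interface (class and method names are assumptions, not the actual API):

```python
from abc import ABC, abstractmethod

# Hypothetical compiler interface -- names are illustrative, not the real API.
class ProviderCompiler(ABC):
    @abstractmethod
    def compile(self, spec: dict) -> str:
        """Translate a Core Compose spec into a provider-native manifest."""

class DfcCompiler(ProviderCompiler):
    def compile(self, spec: dict) -> str:
        # A real compiler would emit Akash SDL plus gateway configs;
        # here we only sketch the shape of the translation.
        services = ", ".join(sorted(spec.get("services", {})))
        return f"# Akash SDL for services: {services}"

print(DfcCompiler().compile({"services": {"api": {}, "worker": {}}}))
```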

Deployment Flow

Phase 1: Local Validation (Client)


The CLI performs fast local checks:

  1. Schema Validation: YAML structure and types
  2. Feature Checks: Verify requested features are available
  3. Control Checks: Ensure compliance requirements can be met
  4. Policy Validation: Org-level policy conformance

Time: 50-200ms
Result: Immediate feedback on configuration issues
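The validation command itself is not shown on this page, but in spirit the local checks reduce to fast, pure functions over the parsed spec. A simplified sketch of the schema-validation step (the rule set below is invented for illustration):

```python
# Simplified local schema validation -- the real rule set lives in the CLI.
REQUIRED_TOP_LEVEL = {"version", "services"}

def validate_schema(spec: dict) -> list[str]:
    """Return a list of schema errors; an empty list means the spec passes."""
    errors = [f"missing required field: {f}"
              for f in sorted(REQUIRED_TOP_LEVEL - spec.keys())]
    for name, svc in spec.get("services", {}).items():
        if "image" not in svc:
            errors.append(f"service '{name}': missing 'image'")
    return errors

# Flags problems in milliseconds, before anything is sent to the server.
print(validate_schema({"version": "1", "services": {"api": {}}}))
```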

Phase 2: Plan Intent (Client)


The CLI shows deployment intent (what will happen):

  • Services to be deployed
  • Resource requirements
  • Provider preferences
  • Compliance controls
  • Estimated cost range

Important: Actual region selection happens server-side during apply.
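The plan is deliberately a preview: it reports intent without committing to placement. A sketch of how such a plan summary could be assembled client-side (the structure is an assumption):

```python
def plan_intent(spec: dict) -> dict:
    """Build a deployment preview; no region is chosen -- that happens server-side."""
    services = spec.get("services", {})
    return {
        "services": sorted(services),
        "total_replicas": sum(s.get("replicas", 1) for s in services.values()),
        "provider_preferences": spec.get("providers", {}).get("preferred", []),
        "region": "decided server-side during apply",
    }

print(plan_intent({"services": {"api": {"replicas": 3}, "worker": {}},
                   "providers": {"preferred": ["dfc", "aws"]}}))
```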

Phase 3: Orchestration (Server)


The server executes the deployment:

3.1 Placement Algorithm

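The original placement code is missing from this page. Given the properties described above (live pricing, capacity, and quota data), a server-side placement algorithm plausibly filters by hard constraints and then scores the survivors; a simplified sketch (scoring and provider data are invented):

```python
# Sketch of server-side placement: filter on hard constraints, then
# score the remaining candidates on live pricing.
def place(service: dict, candidates: list[dict]):
    eligible = [
        c for c in candidates
        if c["available_cpu"] >= service["cpu"]          # live capacity
        and c["quota_remaining"] > 0                     # live quota
        and set(service.get("controls", [])) <= set(c["controls"])  # compliance
    ]
    if not eligible:
        return None  # no valid placement -> the whole deployment rolls back
    # Cheapest eligible candidate wins; real scoring would weigh more signals.
    return min(eligible, key=lambda c: c["price_per_cpu_hour"])

candidates = [
    {"provider": "dfc", "available_cpu": 8, "quota_remaining": 5,
     "controls": ["eu"], "price_per_cpu_hour": 0.02},
    {"provider": "aws", "available_cpu": 64, "quota_remaining": 10,
     "controls": ["eu"], "price_per_cpu_hour": 0.05},
]
print(place({"cpu": 2, "controls": ["eu"]}, candidates))
```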

3.2 Resource Provisioning

  1. Virtual Allocation: Reserve IPs, GPUs, storage
  2. Infrastructure Creation: Provision compute nodes
  3. Network Setup: Configure ingress/egress
  4. Service Deployment: Deploy containers
  5. Verification: Health checks and readiness

All operations are atomic: if any step fails, the entire deployment rolls back.
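The atomicity guarantee maps naturally onto a saga-style pattern: run the steps in order, record each success, and undo completed steps in reverse on any failure. A minimal sketch (the step functions below are placeholders, not real provisioning calls):

```python
def run_atomic(steps):
    """Each step is (name, do, undo). On failure, undo completed steps in reverse."""
    completed = []
    for name, do, undo in steps:
        try:
            do()
            completed.append(undo)
        except Exception:
            for undo_fn in reversed(completed):
                undo_fn()  # all-or-nothing: release everything already created
            return False
    return True

log = []
def fail():
    raise RuntimeError("provider capacity changed mid-apply")

ok = run_atomic([
    ("virtual_allocation", lambda: log.append("reserve"), lambda: log.append("release")),
    ("infrastructure",     fail,                          lambda: log.append("teardown")),
])
print(ok, log)  # the failed step triggers release of the step-1 reservation
```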

3.3 Response

Server returns:

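The response payload example is missing from this page; a response of this kind plausibly carries identifiers, the final placements, and service endpoints (every field and value below is an illustrative assumption):

```json
{
  "deployment_id": "dep-8f2c91",
  "status": "running",
  "placements": [
    { "service": "api", "provider": "dfc", "region": "eu-west" }
  ],
  "endpoints": ["https://api.example.com"],
  "audit_record": "aud-77b210"
}
```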

State Management

State Registries

| Registry | Purpose | Backend |
|----------|---------|---------|
| IPLease | Tracks leased IPs and timers | etcd / Consul |
| CertRegistry | TLS certificates & rotation | Vault |
| FeatureRegistry | features.yaml snapshot | Git |
| ControlRegistry | controls.yaml snapshot | Git |
| PlanCache | Last effective plan | Redis |

Audit Trail

Every deployment creates an immutable audit record:

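The audit record example is missing from this page; such a record plausibly captures who did what, to which spec, with what outcome (every field and value below is an illustrative assumption):

```json
{
  "audit_id": "aud-77b210",
  "timestamp": "2025-01-15T10:32:04Z",
  "actor": "user@example.com",
  "action": "apply",
  "spec_hash": "sha256:9c1e5ab4",
  "placements": [
    { "service": "api", "provider": "dfc", "region": "eu-west" }
  ],
  "result": "success"
}
```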

Component Interaction

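The original diagram is missing from this page; reconstructed from the layers described above, the interaction is roughly:

```
  Thin CLI ----spec----> Control Plane API ----native-----> Provider Compilers
 (validate,             (placement, state,   manifests     (DFC / AWS / GCP / Azure)
  plan)                  orchestration)
     ^                          |
     |                          v
   status            State Registries (etcd / Vault / Git / Redis)
```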

Benefits of This Architecture

1. Reliability

  • CLI crashes don't affect deployments
  • Server continues orchestration independently
  • Retries and error handling server-side

2. Consistency

  • Same placement algorithm for all clients
  • No version skew between CLI and server-side logic
  • Centralized decision-making

3. Real-Time Optimization

  • Server sees current provider pricing
  • Live capacity and availability data
  • Dynamic cost optimization

4. Security

  • Credentials never leave server
  • Audit trail for all operations
  • Centralized access control

Next Steps