Architecture
Understand the Blazing Core Compose architecture and design principles
Blazing Core Compose uses a thin-client, smart-server architecture that separates local validation from server-side orchestration for maximum reliability and consistency.
Design Principles
Server-Side Intelligence
The Core Compose compiler uses server-side intelligence for infrastructure decisions. This design choice provides several critical benefits:
- Crash Resilience: deployments continue on the server even if the CLI crashes
- Consistency: Same placement algorithm for all clients (CLI, Web UI, mobile)
- Real-Time Data: Server has current availability, pricing, and quota information
- Atomic Operations: All resources created or none (no partial deployments)
Architecture Layers
Client Layer (Thin CLI)
The CLI is responsible for:
- Local validation: schema, feature, control, and policy checks
- Displaying plan intent before apply
- Submitting the validated specification to the Control Plane API
What the CLI Does NOT Do:
- Make infrastructure placement decisions
- Track provider availability
- Handle resource provisioning
- Manage state
Server Layer (Control Plane API)
The Control Plane API orchestrates all infrastructure: it runs the placement algorithm, provisions resources atomically, manages the state registries, and records the audit trail.
Provider Layer
Provider compilers translate Core Compose specifications into native formats:
- DFC Compiler: Generates Akash SDL + Gateway configs
- AWS Compiler: CloudFormation/Terraform for EKS
- GCP Compiler: GKE manifests + Cloud resources
- Azure Compiler: AKS + ARM templates
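Each provider compiler can be thought of as one implementation of a shared interface over the same Core Compose spec. The sketch below is illustrative only; the class names, method signature, and output shape are assumptions, not the actual compiler API:

```python
from abc import ABC, abstractmethod

class ProviderCompiler(ABC):
    """Translates a Core Compose spec into provider-native artifacts.

    Illustrative interface only; the real compiler API may differ.
    """

    @abstractmethod
    def compile(self, spec: dict) -> dict:
        """Return provider-native artifacts keyed by filename."""

class AWSCompiler(ProviderCompiler):
    def compile(self, spec: dict) -> dict:
        # Sketch: emit one CloudFormation-style resource per service
        resources = {
            name: {"Type": "AWS::EKS::Service", "Image": svc["image"]}
            for name, svc in spec.get("services", {}).items()
        }
        return {"template.json": {"Resources": resources}}

spec = {"services": {"web": {"image": "nginx:1.27"}}}
artifacts = AWSCompiler().compile(spec)
```

Keeping one interface per provider lets the server pick a compiler at placement time without the client ever seeing provider-native formats.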
Deployment Flow
Phase 1: Local Validation (Client)
The CLI performs fast local checks:
- Schema Validation: YAML structure and types
- Feature Checks: Verify requested features are available
- Control Checks: Ensure compliance requirements can be met
- Policy Validation: Org-level policy conformance
Time: 50-200ms
Result: Immediate feedback on configuration issues
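Because these checks are purely local, they can be sketched as simple functions over the parsed spec. The field names and the feature set below are hypothetical; the real schema differs:

```python
def validate_spec(spec: dict) -> list[str]:
    """Run fast local checks; return a list of human-readable errors."""
    errors = []
    # Schema validation: structure and types
    if not isinstance(spec.get("services"), dict) or not spec["services"]:
        errors.append("spec must define at least one service")
    # Feature checks: requested features must be in the known set
    known_features = {"gpu", "static-ip", "tls"}  # illustrative snapshot
    for feature in spec.get("features", []):
        if feature not in known_features:
            errors.append(f"unknown feature: {feature}")
    return errors

assert validate_spec({"services": {"web": {}}}) == []
assert validate_spec({}) == ["spec must define at least one service"]
```

Returning all errors at once, rather than failing on the first, is what makes the 50-200ms feedback loop useful for fixing a whole config in one pass.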
Phase 2: Plan Intent (Client)
The CLI shows deployment intent (what will happen):
- Services to be deployed
- Resource requirements
- Provider preferences
- Compliance controls
- Estimated cost range
Important: Actual region selection happens server-side during apply.
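A plan intent might be rendered from a structure like the following. All field names and values are illustrative; note that region is deliberately absent, since it is chosen server-side at apply:

```python
# Illustrative plan-intent payload; real field names may differ.
plan = {
    "services": ["web", "worker"],
    "resources": {"web": {"cpu": "500m", "memory": "512Mi"}},
    "provider_preferences": ["dfc", "aws"],
    "controls": ["encryption-at-rest"],
    "estimated_cost_per_hour": {"min": 0.8, "max": 1.5},
    # No "region" key: placement happens server-side during apply.
}
```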
Phase 3: Orchestration (Server)
The server executes the deployment:
3.1 Placement Algorithm
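The placement algorithm itself is not detailed here; one plausible sketch filters providers by capability and picks the cheapest match on live pricing data. Every field name and number below is an assumption for illustration:

```python
def place(service: dict, providers: list[dict]) -> dict:
    """Pick the cheapest provider that satisfies the service's requirements.

    Illustrative only: the real algorithm also weighs compliance controls,
    quotas, and organizational preferences.
    """
    candidates = [
        p for p in providers
        if p["free_gpus"] >= service.get("gpus", 0)
        and service.get("region") in (None, p["region"])
    ]
    if not candidates:
        raise RuntimeError("no provider satisfies the requirements")
    return min(candidates, key=lambda p: p["price_per_hour"])

providers = [
    {"name": "dfc-us", "region": "us-east", "free_gpus": 4, "price_per_hour": 1.2},
    {"name": "aws-us", "region": "us-east", "free_gpus": 8, "price_per_hour": 2.5},
]
choice = place({"gpus": 2}, providers)
```

Running this server-side is what gives every client the same answer: the candidate list is built from the server's live capacity and pricing, which the CLI never sees.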
3.2 Resource Provisioning
- Virtual Allocation: Reserve IPs, GPUs, storage
- Infrastructure Creation: Provision compute nodes
- Network Setup: Configure ingress/egress
- Service Deployment: Deploy containers
- Verification: Health checks and readiness
All operations are atomic: if any step fails, the entire deployment rolls back.
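The atomicity guarantee can be sketched with an undo stack: each completed step registers a compensating action, and on failure the stack unwinds in reverse order. The step names here are illustrative:

```python
def deploy(steps):
    """Run (do, undo) pairs; on any failure, undo completed steps in reverse."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()  # compensating action, e.g. release a reserved IP
        raise

def fail():
    raise RuntimeError("provision failed")

log = []
steps = [
    (lambda: log.append("allocate"), lambda: log.append("release")),
    (fail, lambda: None),  # provisioning step that fails mid-deployment
]
try:
    deploy(steps)
except RuntimeError:
    pass
# log is now ["allocate", "release"]: the completed step was rolled back
```

Unwinding in reverse order matters because later steps (service deployment) depend on earlier ones (network setup), so teardown must mirror creation.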
3.3 Response
Server returns:
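The response shape is not specified in this section; a plausible sketch, with every field name and value purely illustrative, might look like:

```python
# Illustrative apply response; real field names may differ.
response = {
    "deployment_id": "dep-01HXYZ",  # server-assigned ID, hypothetical format
    "status": "succeeded",
    "placements": {"web": {"provider": "dfc", "region": "us-east"}},
    "endpoints": {"web": "https://web.example.com"},
    "cost_estimate_per_hour": 1.2,
    "audit_record_id": "aud-01HXYZ",
}
```

Note that the region appears only here, in the server's response, consistent with placement being decided during apply rather than during plan.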
State Management
State Registries
| Registry | Purpose | Backend |
|----------|---------|---------|
| IPLease | Tracks leased IPs and timers | etcd / Consul |
| CertRegistry | TLS certificates & rotation | Vault |
| FeatureRegistry | features.yaml snapshot | Git |
| ControlRegistry | controls.yaml snapshot | Git |
| PlanCache | Last effective plan | Redis |
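As a sketch of what a registry like IPLease tracks, the in-memory stand-in below models a lease with an expiry timer; the real registry is backed by etcd or Consul, and the method names here are assumptions:

```python
import time

class IPLease:
    """In-memory stand-in for the etcd/Consul-backed IP lease registry."""

    def __init__(self):
        self._leases = {}  # ip -> (deployment_id, expires_at)

    def lease(self, ip: str, deployment_id: str, ttl_seconds: float):
        """Record that a deployment holds this IP until the TTL expires."""
        self._leases[ip] = (deployment_id, time.monotonic() + ttl_seconds)

    def holder(self, ip: str):
        """Return the deployment holding the IP, or None if expired/unleased."""
        entry = self._leases.get(ip)
        if entry is None or entry[1] < time.monotonic():
            return None
        return entry[0]

registry = IPLease()
registry.lease("203.0.113.7", "dep-01HXYZ", ttl_seconds=60)
```

The TTL is what makes crash recovery safe: a deployment that dies without cleaning up simply lets its leases expire.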
Audit Trail
Every deployment creates an immutable audit record:
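An audit record for one deployment might carry fields like the following; the schema shown is illustrative, not the actual record format:

```python
# Illustrative audit record; the real schema may differ.
audit_record = {
    "id": "aud-01HXYZ",                 # hypothetical ID format
    "timestamp": "2025-01-01T12:00:00Z",
    "actor": "user@example.com",
    "action": "deploy.apply",
    "spec_digest": "sha256:<digest of the submitted spec>",
    "placements": {"web": {"provider": "dfc", "region": "us-east"}},
    "outcome": "succeeded",
}
```

Storing a digest of the submitted spec, rather than trusting client-reported intent, is what keeps the record tamper-evident.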
Component Interaction
Benefits of This Architecture
1. Reliability
- CLI crashes don't affect deployments
- Server continues orchestration independently
- Retries and error handling server-side
2. Consistency
- Same placement algorithm for all clients
- No version skew between CLI and logic
- Centralized decision-making
3. Real-Time Optimization
- Server sees current provider pricing
- Live capacity and availability data
- Dynamic cost optimization
4. Security
- Credentials never leave server
- Audit trail for all operations
- Centralized access control
Next Steps
- Learn about Provider Configuration
- Understand Deployment Lifecycle
- Explore Governance Layer