Your First Fractal Cloud Deployment with SDK

Target Audience: DevOps Engineers, Platform Architects, and Developers.
Goal: A comprehensive technical walkthrough of Fractal Cloud architecture, culminating in the deployment of a standard stack (Cluster + App + Database) via the SDK.


Part 1: Core Concepts & Architecture

Fractal Cloud facilitates a Component-Based Infrastructure approach, replacing imperative scripts with typed, object-oriented definitions. Before diving into the implementation, it is crucial to distinguish the core architectural entities.

The Fractal Object Model

  • Atom: A reusable infrastructure component (e.g., GKE Cluster, RDS Instance) with embedded standards.
  • Fractal (Blueprint): A reusable, governed architecture pattern composed of connected Atoms.
  • Live System: The running instance of a Fractal Blueprint, provisioned on a real cloud provider.

Operational Boundaries

  • Resource Group: A logical container for ownership and governance. It acts as a Bounded Context, isolating your Fractals and Live Systems to ensure consistent policy enforcement within a specific business domain.
  • Environment: A segment of your IT Landscape (e.g., "GCP Dev", "AWS Prod") on which systems are deployed. It maps to one (or more) specific Cloud Provider targets (e.g., a GCP Project or AWS Account), establishing the physical deployment boundary.
  • Automaton: The orchestration engine. It performs state reconciliation, calculating the diff between your code (The Fractal) and the actual cloud state, ensuring the Live System matches the desired state.
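Conceptually, the Automaton's reconciliation can be thought of as a set difference between desired and actual component state. The following is an illustrative sketch only; the class and method names are hypothetical and do not reflect the SDK's internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative sketch of state reconciliation (NOT the actual Automaton internals):
// the engine compares the desired component set (your code) with the actual
// cloud state and derives the minimal set of changes to apply.
final class ReconcileSketch {
    record Plan(List<String> toCreate, List<String> toDelete) {}

    static Plan diff(Set<String> desired, Set<String> actual) {
        List<String> toCreate = new ArrayList<>(desired);
        toCreate.removeAll(actual);   // in code but not in the cloud -> create
        List<String> toDelete = new ArrayList<>(actual);
        toDelete.removeAll(desired);  // in the cloud but not in code -> delete
        toCreate.sort(String::compareTo);
        toDelete.sort(String::compareTo);
        return new Plan(toCreate, toDelete);
    }
}
```

Running the same definition twice yields an empty plan, which is why re-running a deployment only applies deltas.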

Part 2: The Pilot Project Scenario

We will implement a standard 3-Tier Architecture for a "Pilot Project" using the Fractal SDK. (Prefer a visual approach? Check out the Your First Fractal Cloud Deployment with GUI tutorial to build this exact stack using the Design Canvas.)
The stack consists of:

  1. Compute: A Google Kubernetes Engine (GKE) Cluster.
  2. Data: A managed PostgreSQL Database.
  3. Workload: A Web Application deployed onto the cluster.

We will simulate the collaboration between three distinct technical roles.


Part 3: Implementation Workflow

Role A: The Ops Engineer

Responsibility: Governance & Boundary Definition.

The Ops Engineer defines the Environment. This establishes the Bounded Context—specifically, where resources are allowed to be provisioned (Cloud Vendor, Region, Account ID).

Step 1: Initialize the Environment Context

The following code initializes the SDK and maps a Fractal Environment to a specific Google Cloud Project.

// === OPS ROLE: Defining the Environment ===

// 1. Gather Governance IDs (These come from your Cloud Provider & Fractal Console)
String resourceGroupId = System.getenv("FRACTAL_RESOURCE_GROUP_ID");
String environmentType = System.getenv("FRACTAL_ENVIRONMENT_TYPE");
String environmentOwnerId = System.getenv("FRACTAL_ENVIRONMENT_OWNER_ID");
String environmentShortName = System.getenv("FRACTAL_ENVIRONMENT_SHORT_NAME");
String gcpOrganizationId = System.getenv("GCP_ORGANIZATION_ID");
String gcpProjectId = System.getenv("GCP_PROJECT_ID");

GcpRegion region = GcpRegion.EUROPE_WEST1; // We mandate Europe West 1 for compliance

// 2. Initialize the Engine
Automaton automaton = Automaton.getInstance();

// 3. Define the Environment
// This tells Fractal: "Anything deployed to 'gcp-sprint-env' goes to this specific GCP Project"
var environment = automaton.getEnvironmentBuilder()
    .withManagementEnvironment(ManagementEnvironment.builder()
        .withId(new EnvironmentIdValue(
            EnvironmentType.fromString(environmentType),
            UUID.fromString(environmentOwnerId),
            environmentShortName))
        .withName("gcp-sprint-env")
        .withResourceGroup(UUID.fromString(extractUuid(resourceGroupId)))
        .withGcpCloudAgent(region, gcpOrganizationId, gcpProjectId)
        .build())
    .build();
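The extractUuid helper used above is not shown in the snippet. A minimal, stand-alone sketch, assuming the resource group identifier is a composite string that embeds a plain UUID:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical implementation of the extractUuid helper used in Step 1.
// Assumption: the resource group ID embeds a standard UUID somewhere in the string.
final class Ids {
    private static final Pattern UUID_PATTERN = Pattern.compile(
        "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}");

    static String extractUuid(String compositeId) {
        Matcher m = UUID_PATTERN.matcher(compositeId);
        if (!m.find()) {
            throw new IllegalArgumentException("No UUID found in: " + compositeId);
        }
        return m.group(); // first UUID occurrence
    }
}
```

Adapt the parsing to whatever format your Fractal Console actually emits for resource group IDs.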

Role B: The Platform Engineer

Responsibility: Standardization & Component Definition.

The Platform Engineer defines the Atoms. Their goal is to abstract complexity (networking, instance sizing) and expose safe, compliant building blocks to developers.

Step 2: Define the Compute Atom (GKE)

Here we define the cluster configuration. Note the strict typing on machineType; this enforces cost governance at the code level.

// === PLATFORM ROLE: Creating the Cluster Standard ===

GoogleKubernetesEngine.GoogleKubernetesEngineBuilder containerPlatform = GoogleKubernetesEngine.builder()
    .withId("container-platform")
    .withDisplayName("Container Platform")
    .withRegion(region) // Inherits the region defined by Ops
    .withNodePools(List.of(
        GcpNodePool.builder()
            .withName("nodes")
            .withMachineType(E2_STANDARD2) // Enforcing standard machine types
            .build()
    ));

Step 3: Define the Storage Atom (PostgreSQL)

We define a managed database instance. The networking complexity is abstracted; the Platform Engineer ensures it attaches to the correct default network within the Environment.

// === PLATFORM ROLE: Creating the Database Standard ===

GcpPostgreSqlDbms storage = GcpPostgreSqlDbms.builder()
    .withId("file-blob-storage")
    .withDisplayName("File & Blob Storage")
    .withRegion(region)
    .withNetwork("default") // Platform team manages the VPC settings
    .withDatabase(
        GcpPostgreSqlDatabase.builder()
            .withId("storage-database")
            .withDisplayName("Storage Database")
            .build())
    .build();

Role C: The Developer

Responsibility: Composition & Instantiation.

The Developer consumes the pre-validated Atoms provided by the Platform Team, composes them, and instantiates the result as a running system.

Step 4: Define the Workload

The Developer attaches their specific application context (Git Repo, Branch) to the generic containerPlatform Atom.

// === DEVELOPER ROLE: Defining the App ===

GoogleKubernetesEngine webApp = containerPlatform
    .withK8sWorkload(CaaSKubernetesWorkload.builder()
        .withId("web-app-workload")
        .withDisplayName("Web App Workload")
        .withDescription("Web Application on Kubernetes")
        .withNamespace("default")
        // Here is the link to the actual application code
        .withSSHRepositoryURI("git@github.com:YanchWare/fractal-samples.git")
        .withRepoId("YanchWare/fractal-samples")
        .withBranchName("env/prod")
        .build())
    .build();

Step 5: Instantiate the Live System

The Developer aggregates the components into a runtime system. Note that Blueprints are not authored by developers directly: they are defined and governed by the Platform Team, and describe the static composition of infrastructure resources and their connections. The Developer instantiates such a governed Fractal Blueprint, and the Live System is the runtime instance derived from it. The automaton.instantiate() method triggers the reconciliation loop.

Key Technical Advantages:

  • Type Safety: Invalid configurations are caught at compile time, not deploy time.
  • Drift Detection: Rerunning this code will only apply deltas, correcting any drift in the cloud state.
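The type-safety point can be demonstrated without the SDK. Modeling machine types as an enum (as the SDK does with values like E2_STANDARD2) turns an invalid machine size into a compile error rather than a failed deployment. A simplified sketch with illustrative names, not the SDK's own types:

```java
// Simplified sketch: enum-typed configuration rejects invalid values at compile time.
enum MachineType { E2_STANDARD2, E2_STANDARD4 }

final class NodePoolSketch {
    final MachineType machineType;

    private NodePoolSketch(MachineType machineType) {
        this.machineType = machineType;
    }

    static NodePoolSketch of(MachineType machineType) {
        return new NodePoolSketch(machineType);
    }
}

// NodePoolSketch.of("n1-nonexistent-96");      // does not compile: String is not a MachineType
// NodePoolSketch.of(MachineType.E2_STANDARD2); // OK
```

A free-form string would accept any typo and fail only when the cloud provider rejects the request; the enum moves that failure to the build step.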

This separation of concerns ensures that application teams can deploy safely, while platform and operations teams maintain control and compliance.

// === DEVELOPER ROLE: Deploying Everything ===

// 1. Define the System (The "Blueprint" for this specific deployment)
LiveSystemAggregate liveSystem = automaton.getLiveSystemBuilder()
    .withId(new LiveSystemIdValue(resourceGroupId, "gcp-sprint-livesystem"))
    .withFractalId(new FractalIdValue(resourceGroupId, "gcp-sprint-fractal", "v1.0"))
    .withDescription("Fractal with Container Platform, Web App and Storage on Google Cloud")
    .withComponents(List.of(
        webApp, // ← Component: Web App (CaaSKubernetesWorkload on GKE)
        storage // ← Component: File & Blob Storage (PostgreSQL)
    ))
    .withStandardProvider(ProviderType.GCP)
    .withEnvironmentId(environment.getManagementEnvironment().getId())
    .build();

// 2. Execute Deployment (Creates the Fractal if it doesn't exist)
automaton.instantiate(List.of(liveSystem), InstantiationConfiguration.builder()
    .withWaitConfiguration(InstantiationWaitConfiguration.builder()
        .withTimeoutMinutes(60)
        .withWaitForInstantiation(true)
        .build())
    .build());

Summary of Roles & Responsibilities

| Role | Focus | Fractal Object | Objective |
| --- | --- | --- | --- |
| Ops Engineer | Security & Governance | Environment | Define the Bounded Context (Region, Cloud Account) and RBAC. |
| Platform Engineer | Standardization | Atom / Molecule | Create reusable, compliant building blocks (Clusters, DBs) with baked-in best practices. |
| Developer | Application Logic | Live System | Compose Atoms into a running system and deploy code self-service without tickets. |