Deploying Custom Workloads with the Fractal SDK
The Fractal SDK provides a powerful and flexible way to deploy custom workloads within your existing Live Systems. This guide will walk you through the steps and best practices to make your deployments successful.
Currently, custom workload deployments are supported for Kubernetes clusters, with potential future support for other environments like Azure Web Apps.
Understanding Live Systems and Custom Workloads
Before diving into the deployment process, it's crucial to understand two key concepts:
- Live System: A Live System is a running instance of a Fractal Blueprint. It represents the actual infrastructure and applications deployed in your chosen cloud environment. Think of it as the "living embodiment" of your Fractal design, which may include Kubernetes clusters or, potentially in the future, Azure Web Apps.
- Custom Workload: A custom workload is a component that you design and develop to run within your Live System. Currently, this typically means deploying to a Kubernetes cluster within your Live System, but future implementations may support other environments.
Preparing Your Custom Workload
Before deploying, you need to prepare your custom workload. This involves setting up your deployment files (Kubernetes manifests, Helm charts, or a combination) in a Git repository, as described in the "Create Kubernetes Workload" guide. That guide covers:
- Repository structure and file naming conventions (including the crucial `-fdeploy.yaml` or `-fdeploy.yml` suffix for Kubernetes manifests and `commands.yaml` for Helm charts).
- Parameterization using `fractal-parameters.yml`.
- Combining Kubernetes manifests and Helm charts in a single repository.
- Prerequisites for setting up your repository and container images.
Setting Up Your Fractal Environment
Before deploying custom workloads, you need a properly configured Fractal Cloud environment. This involves two key steps: creating the Fractal Cloud Environment and then instantiating a Live System with a Kubernetes cluster.
1. Create a Fractal Cloud Environment:
You first need to establish your Fractal Cloud Environment. This provides the foundation for your Live Systems. We recommend using our quick start template to set up production and non-production environments:
- Fractal Quick Start Environments: https://github.com/YanchWare/fractal-quick-start-environments/
This repository provides a sample setup with production and non-production environments, demonstrating best practices like Azure CAF adoption, private networking, and secure service deployments. It creates Fractal Cloud Management Environments, each with a Fractal Cloud agent, and corresponding Operational Environments.
2. Instantiate a Live System:
Once your Fractal Cloud Environment is in place, you need to create a Live System within it. This Live System will contain the Kubernetes cluster where your custom workload will reside. Use the following template as a starting point:
- Fractal Quick Start Architecture: https://github.com/YanchWare/fractal-quick-start-architecture
This repository helps you instantiate shared infrastructure, including Fractals and initial Live Systems, within your previously created environments. It also sets up an observability stack on each Kubernetes cluster. Critically, it deploys Live Systems with Kubernetes clusters, which are a prerequisite for deploying custom workloads.
Prerequisites for Custom Workload Deployment
1. Fractal SDK
Ensure you have the Fractal Java SDK integrated into your project. You can find instructions on how to set it up in the Quick Start guide.
2. Existing Live System with a Compatible Environment
You must have an existing Live System that includes a compatible environment where you want to deploy the custom workload. Currently, this means your Live System must contain a provisioned and running Kubernetes cluster. However, keep an eye on future updates as support for other services, like Azure Web Apps, will be added.
3. Custom Workload Repository
A Git repository containing your Kubernetes manifests or Helm charts, structured as described in the "Create Kubernetes Workload" guide. This includes setting up your `.fractal` directory, parameter files, and any Helm chart files.
The custom workload agent's default behaviour is branch-based: the agent looks for a branch named after the short-name of the environment. This behaviour can be overridden by specifying a default branch name in a CiCd Profile or in the Custom Workload Definition.
4. Container Image
You are responsible for building and pushing your container image to a registry.
5. Custom Workload Component
Your custom workload should be defined appropriately for the target environment.
- For Kubernetes clusters, this typically means a valid Kubernetes manifest (YAML or JSON) defining Deployments, StatefulSets, DaemonSets, or other Kubernetes resources.
- For future environments like Azure Web Apps, the workload definition might take a different form (e.g., deployment packages, configuration files). Ensure that your workload definition is compatible with your chosen environment's configuration and capabilities.
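As an illustrative sketch of the Kubernetes case, a minimal Deployment manifest for a custom workload might look like the following. The name, labels, image, and port are all placeholders, and the file would carry the `-fdeploy.yaml` suffix described earlier:

```yaml
# my-service-fdeploy.yaml (illustrative; names and image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myregistry.example.com/my-service:1.0.0
          ports:
            - containerPort: 8080
```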
6. Commit ID
If you are using version control for your custom workloads, you'll need the specific commit ID of the version you want to deploy.
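One straightforward way to obtain the commit ID from Java is to shell out to `git rev-parse HEAD`. The sketch below is self-contained for demonstration purposes: it creates a throwaway repository and resolves its `HEAD`; in practice you would run the same `rev-parse` call inside your existing workload repository and pass the result to the deployment call. All names here are illustrative, and `git` is assumed to be on the PATH.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
import java.nio.file.Files;

public class CommitIdExample {
    // Run a git command in `dir` and return the first line of its output.
    static String git(File dir, String... args) throws Exception {
        String[] cmd = new String[args.length + 1];
        cmd[0] = "git";
        System.arraycopy(args, 0, cmd, 1, args.length);
        Process p = new ProcessBuilder(cmd)
                .directory(dir)
                .redirectErrorStream(true)
                .start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            p.waitFor();
            return line == null ? "" : line.trim();
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo only: a temporary repo with one empty commit.
        File repo = Files.createTempDirectory("workload-repo").toFile();
        git(repo, "init", "-q");
        git(repo, "-c", "user.email=ci@example.com", "-c", "user.name=ci",
            "commit", "--allow-empty", "-m", "initial");
        // The 40-character hex SHA you would pass as `commitId`.
        String commitId = git(repo, "rev-parse", "HEAD");
        System.out.println(commitId);
    }
}
```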
Understanding the deployCustomWorkload Methods
The `Automaton.deployCustomWorkload` method is the core of custom workload deployment. It offers two overloads to cater to different deployment scenarios:
1. Deployment with Waiting and Verification (Recommended)
public static void deployCustomWorkload(
    String resourceGroupId,
    String liveSystemName,
    String customWorkloadComponentId,
    String commitId,
    InstantiationConfiguration config
) throws ComponentInstantiationException
This overload is ideal when you need more control over the deployment process.
By providing an `InstantiationConfiguration` object, you can instruct the SDK to wait for deployment completion and verify that the deployed workload matches the specified `commitId`.
Key Advantages:
- Reliability: Ensures the workload is deployed successfully before your application proceeds.
- Version Control: Verifies that the correct version of the workload is deployed.
- Detailed Logging: Provides logging of the workload's output for debugging and analysis.
2. Fire-and-Forget Deployment
public static void deployCustomWorkload(
    String resourceGroupId,
    String liveSystemName,
    String customWorkloadComponentId
) throws ComponentInstantiationException
This simplified overload is perfect for scenarios where you want to trigger the deployment and don't require immediate feedback or verification. It's faster and less resource-intensive.
Use Cases:
- Background tasks or asynchronous workflows
- Situations where deployment failures are not critical
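For the asynchronous case, the fire-and-forget overload can be dispatched off the calling thread. The following is a sketch, assuming the SDK's `Automaton` and `ComponentInstantiationException` are on the classpath, and reusing the same placeholder identifiers as the examples in this guide:

```java
import java.util.concurrent.CompletableFuture;

// Dispatch the fire-and-forget overload in a background task so the
// asynchronous workflow is not blocked by the deployment trigger.
CompletableFuture<Void> deployment = CompletableFuture.runAsync(() -> {
    try {
        Automaton.deployCustomWorkload(
            "myResourceGroup", "myLiveSystem", "customComponent1");
    } catch (ComponentInstantiationException e) {
        // Failures surface here; log or enqueue a retry as appropriate.
        System.err.println("Deployment trigger failed: " + e.getMessage());
    }
});
```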
Exceptions:
`ComponentInstantiationException`: Thrown if any required parameter (`resourceGroupId`, `liveSystemName`, `customWorkloadComponentId`, `commitId`) is null or empty, the component is not found, the deployment fails, or an error occurs while waiting for deployment completion. This exception may wrap an underlying `InstantiatorException` for specific error scenarios.
Example: Deployment with Waiting and Verification
try {
    InstantiationConfiguration config = InstantiationConfiguration.builder()
        .withWaitConfiguration(
            InstantiationWaitConfiguration.builder()
                .withTimeoutMinutes(10) // Optional timeout
                .withWaitForInstantiation(true)
                .build()
        )
        .build();

    Automaton.deployCustomWorkload("myResourceGroup", "myLiveSystem", "customComponent1", "commitXYZ123", config);
    System.out.println("Deployment successful!");
} catch (ComponentInstantiationException | InstantiatorException e) {
    System.err.println("Deployment failed: " + e.getMessage());
    // Handle exceptions (log, retry, etc.)
}
Example: Fire-and-Forget Deployment
try {
    Automaton.deployCustomWorkload("myResourceGroup", "myLiveSystem", "customComponent1");
} catch (ComponentInstantiationException e) {
    System.err.println("Deployment trigger failed: " + e.getMessage());
    // Handle exceptions (log, retry, etc.)
}