Create Kubernetes Workload
This section explains how to create and deploy a Kubernetes Workload to your Kubernetes Cluster, regardless of your cloud provider, using the Fractal Cloud SDK.
Prerequisites (Customer Responsibilities)
Before you begin, ensure you have completed the following:
- Repository Creation: You must create a repository (e.g., on GitHub, GitLab, or any other Git platform) to store your Kubernetes manifest files and/or Helm charts. This repository will be used by Fractal Cloud to deploy your workload.
- Container Image: You are responsible for building and pushing your container image to a container registry of your choice (e.g., Docker Hub, Google Container Registry, Amazon ECR, Azure Container Registry). Ensure the image is accessible from your Kubernetes cluster. You will reference this image in your Kubernetes manifests or Helm charts.
- SSH Key Secrets: Fractal Cloud uses SSH to clone your Git repository. The preferred way to provide your private SSH key and passphrase is to use a DefaultCiCdProfile at the Environment level. This profile stores your credentials securely and makes them accessible to your Kubernetes Workloads.
  - Creating a DefaultCiCdProfile: You can create a DefaultCiCdProfile when initializing your environment using the Fractal SDK. The following Java code snippet demonstrates how to add a DefaultCiCdProfile to a Management Environment:
var automaton = Automaton.getInstance();

var defaultCiCdProfile = new CiCdProfile( // Create the profile
    "my-cicd-profile",                 // Short name
    "My CI/CD Profile",                // Display name
    "Profile for deploying workloads", // Description (optional)
    privateSSHKeyData,                 // Your private key data
    privateSSHKeyPassphrase            // Your passphrase
);

ManagementEnvironment managementEnvironment = ManagementEnvironment.builder()
    .withId(new EnvironmentIdValue(EnvironmentType.PERSONAL,
        EnvironmentOwnerId,
        ManagementEnvironmentShortName))
    .withResourceGroup(UUID.fromString(FractalResourceGroupId))
    .withAzureCloudAgent(DefaultAzureRegion,
        AzureTenantId,
        AzureManagementSubscriptionId)
    .withTags(Map.of("Type", "Management"))
    .withDefaultCiCdProfile(defaultCiCdProfile) // Add the profile
    .build();

automaton.instantiate(managementEnvironment);
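The privateSSHKeyData and privateSSHKeyPassphrase values referenced above come from an SSH key pair that you generate yourself. A minimal sketch of generating one; the file name, passphrase, and comment are illustrative placeholders:

```shell
# Generate an ed25519 key pair for repository access.
# File name, passphrase, and comment are illustrative placeholders.
ssh-keygen -t ed25519 -N "my-passphrase" -C "fractal-cicd" -f ./fractal_deploy_key

# Register fractal_deploy_key.pub with your Git hosting provider (e.g., as a
# deploy key); the contents of fractal_deploy_key become privateSSHKeyData.
cat ./fractal_deploy_key.pub
```

The public key is registered with your Git platform, while the private key and its passphrase are the values you pass into the CiCdProfile.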
- Alternative: Using Environment Secrets: While DefaultCiCdProfile is the preferred method, you can still create secrets directly in your Fractal Cloud Environment to store your SSH key and passphrase. To do this, create separate secrets for your private SSH key and its passphrase. You will then reference these secrets by their Secret Short Names in your Kubernetes Workload configuration.
  - Example: If the secret short name for your private key is my-ssh-key, you would use my-ssh-key when configuring your Kubernetes Workload (see examples below).
- Benefits of DefaultCiCdProfile: Using a DefaultCiCdProfile provides several benefits:
  - Centralized Management: Keeps your CI/CD credentials organized in one place.
  - Improved Security: Stores credentials in a dedicated, centrally managed profile rather than in ad hoc per-workload secrets.
  - Simplified Configuration: Reduces the need to define individual secrets for each workload.
- Finding Your Environment Short Name: You can find the short name of your environment by navigating to the Fractal Cloud Environments page. Select the desired environment, click the Edit button, and the Short Name will be displayed.
Setup
Kubernetes Workload deployment requires a specific branch naming convention and repository layout, described below.
Branch naming strategy
The branch naming strategy is crucial for deployments. We recommend using env/{environment-short-name} (e.g., env/production, env/staging). This allows you to manage different configurations for each environment.
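As a sketch, the environment branches can be created with plain Git; the repository path and environment names below are illustrative:

```shell
# Create environment-specific branches following the env/{environment-short-name}
# convention. Repository path and environment names are illustrative.
git init --quiet demo-repo
git -C demo-repo -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty --quiet -m "initial commit"
git -C demo-repo branch env/production
git -C demo-repo branch env/staging
git -C demo-repo branch --list 'env/*'
```

Each branch can then carry the configuration for its corresponding environment.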
Deployment files
Create a .fractal folder at the root of your repository. Fractal Cloud uses this folder for deployment configurations.
You can combine Kubernetes manifests and Helm charts within this structure.
Kubernetes Manifests
For deploying Kubernetes manifests, create a file named {component-id}-fdeploy.yml (e.g., key-vault-quick-start-fdeploy.yml) inside the .fractal folder. This file contains the deployment specifications. Parameters are retrieved from fractal-parameters.yml, also in the .fractal directory.
Example:
.fractal/
├── key-vault-quick-start-fdeploy.yml
└── fractal-parameters.yml
key-vault-quick-start-fdeploy.yml (example):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: key-vault-quick-start
  namespace: demo
  labels:
    app: key-vault-quick-start
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: key-vault-quick-start
      version: v1
  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"
        app: key-vault-quick-start
        commit: "$COMMIT_ID"
        version: v1
    spec:
      serviceAccountName: key-vault-quick-start
      containers:
        - name: oidc
          image: ghcr.io/azure/azure-workload-identity/msal-go
          env:
            - name: KEYVAULT_URL
              valueFrom:
                configMapKeyRef:
                  name: key-vault-info
                  key: uri
            - name: SECRET_NAME
              value: "$SECRET_NAME"
      nodeSelector:
        kubernetes.io/os: linux
fractal-parameters.yml (example):
environments:
  - name: production
    parameters:
      SECRET_NAME: quick-start-production
      COMMIT_ID: 0000000000000000000000000000000000000000
  - name: staging
    parameters:
      SECRET_NAME: quick-start-staging
      COMMIT_ID: 9999999999999999999999999999999999999999
This structure lets you define environment-specific parameters (such as SECRET_NAME and COMMIT_ID above, or image names and replica counts) in the fractal-parameters.yml file, while the fdeploy.yml file defines the general Kubernetes resource configuration. The $<parameter_key> syntax injects these parameters into your Kubernetes manifests.
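For instance, deploying the manifest above to the staging environment would substitute the staging values from fractal-parameters.yml; a sketch of the resulting fragments:

```yaml
# Illustrative result of parameter substitution for the staging environment.
# Pod template label (from COMMIT_ID):
commit: "9999999999999999999999999999999999999999"
# Container environment variable (from SECRET_NAME):
- name: SECRET_NAME
  value: "quick-start-staging"
```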
Helm Charts
For deploying Helm charts, create a .helm directory within the .fractal directory and place your Helm chart files inside it. You'll also need a commands.yaml file within the .helm directory to specify the Helm commands.
.fractal/
├── .helm/
│ ├── commands.yaml
│ └── values.yaml
commands.yaml (example):
commands:
  - helm repo add bitnami https://charts.bitnami.com/bitnami # Add Helm repo (if needed)
  - helm repo update                                         # Update Helm repos (if needed)
  - helm upgrade --force -i redis-demo bitnami/redis         # Install/upgrade Helm chart
The commands.yaml file defines the Helm commands to execute.
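The values.yaml shown in the directory layout holds chart configuration. A minimal sketch for the bitnami/redis chart used above; verify the keys against the chart's own documentation, and note that the helm upgrade command would typically reference the file with -f values.yaml:

```yaml
# Illustrative values.yaml for the bitnami/redis chart
architecture: standalone   # run a single Redis instance instead of master/replica
auth:
  enabled: true            # require a password for Redis access
master:
  persistence:
    size: 1Gi              # persistent volume size for the Redis data
```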
Combining Manifests and Helm
You can deploy both manifests and Helm charts from the same repository. Fractal Cloud will process both the Kubernetes manifests in the root of .fractal and the Helm charts defined in the .helm directory, using the corresponding configurations. This allows you to manage related but distinct parts of your application deployment within a single repository.
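A combined layout, merging the two examples above, might look like this:

```
.fractal/
├── key-vault-quick-start-fdeploy.yml
├── fractal-parameters.yml
└── .helm/
    ├── commands.yaml
    └── values.yaml
```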
Accessing Environment Secrets from Your Workload
In addition to the SSH keys used for repository access, you can also provide your Kubernetes Workload with access to other secrets defined in your Fractal Cloud Environment. This can be useful for storing sensitive information like database credentials or API keys.
To grant your workload access to these secrets, use the withEnvironmentSecretShortName or withEnvironmentSecretShortNames methods in the CaaSKubernetesWorkload builder.
Managed Identity Access: When you use these methods, Fractal Cloud automatically grants read-only access to the specified secrets. This ensures that your workload can securely access the secrets it needs without requiring you to manage credentials directly.
Deploying with the Fractal SDK
You now use the Fractal SDK to trigger deployments. The SDK handles authentication and interacts with the Fractal Cloud platform to deploy your workload. Here's a general outline (refer to the SDK documentation for specific details and the latest API):
- SDK Initialization: Initialize the Fractal SDK in your deployment script.
- Authentication: Authenticate with the Fractal platform.
- Deployment Call: Use the deployCustomWorkload method in the SDK. You'll need to provide:
- The resource group ID.
- The Live System name.
- The component ID (this identifies your workload within Fractal).
- The commit ID of your Git repository (using version control is highly recommended).
- Any necessary deployment configuration (e.g., timeout settings, wait for completion).
Example (Conceptual - adapt to the actual SDK API):
// ... SDK Initialization and Authentication ...
String resourceGroupId = "myResourceGroup";
String liveSystemName = "myLiveSystem";
String componentId = "my-app-component"; // Matches the workload name
String commitId = "abcdef123456";        // Your Git commit ID

try {
    // 'config' is your deployment configuration (e.g., timeout and wait-for-completion settings)
    Automaton.deployCustomWorkload(resourceGroupId, liveSystemName, componentId, commitId, config);
    System.out.println("Deployment triggered successfully.");
} catch (ComponentInstantiationException e) {
    System.err.println("Deployment failed: " + e.getMessage());
    // Handle the exception appropriately
}
Kubernetes Workload in AKS
public static AzureKubernetesService getAksWithCustomWorkload(String id) {
    return AzureKubernetesService.builder()
        .withId(id)
        .withRegion(EUROPE_WEST)
        .withNodePools(getNodePools())
        .withK8sWorkload(getK8sWorkload())
        .build();
}

public static CaaSKubernetesWorkload getK8sWorkload() {
    return CaaSKubernetesWorkload.builder()
        .withId("fractal-samples")
        .withDescription("Fractal Service on K8S")
        .withNamespace("fractal")
        .withSSHRepositoryURI("git@github.com:YanchWare/fractal-samples.git")
        .withRepoId("YanchWare/fractal-samples")
        .withBranchName("env/prod")
        .withEnvironmentSecretShortName("my-secret-name") // Add secret access
        // Optional: Use a specific CI/CD profile
        //.withCiCdProfileShortName("my-other-cicd-profile")
        .build();
}

public static Collection<? extends AzureNodePool> getNodePools() {
    return List.of(
        AzureNodePool.builder()
            .withName("linuxdynamic")
            .withMachineType(STANDARD_B2S)
            .build()
    );
}
For more details, check the code on GitHub in our samples repository for Custom Workload in AKS.
Kubernetes Workload in GKE
public static GoogleKubernetesEngine getGke(String id) {
    return GoogleKubernetesEngine.builder()
        .withId(id)
        .withRegion(EU_WEST1)
        .withNodePools(getNodePools())
        .withK8sWorkload(getK8sWorkload())
        .build();
}

public static CaaSKubernetesWorkload getK8sWorkload() {
    return CaaSKubernetesWorkload.builder()
        .withId("fractal-samples")
        .withDescription("Fractal Service on K8S")
        .withNamespace("fractal")
        .withSSHRepositoryURI("git@github.com:YanchWare/fractal-samples.git")
        .withRepoId("YanchWare/fractal-samples")
        .withBranchName("env/prod")
        .withEnvironmentSecretShortName("my-secret-name") // Add secret access
        // Optional: Use a specific CI/CD profile
        //.withCiCdProfileShortName("my-other-cicd-profile")
        .build();
}

public static Collection<? extends GcpNodePool> getNodePools() {
    return List.of(
        GcpNodePool.builder()
            .withName("nodes")
            .withMachineType(E2_STANDARD2)
            .build()
    );
}
}
For more details, check the code on GitHub in our samples repository for Custom Workload in GKE.