Use Azure Kubernetes Fleet Manager Automated Deployments to drive multi-cluster resource placement (Preview)

Azure Kubernetes Fleet Manager Automated Deployments can be used to build and deploy an application from a code repository to one or more AKS clusters in a fleet. Automated Deployments simplify the process of setting up a GitHub Actions workflow to build and deploy your code. Once connected, every new commit you make runs the pipeline.

Automated Deployments build on draft.sh. When you create a new deployment workflow, you can use an existing Dockerfile, generate a Dockerfile, use existing Kubernetes manifests, or generate Kubernetes manifests. The generated manifests are created with security and resiliency best practices in mind.

Important

Azure Kubernetes Fleet Manager preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Azure Kubernetes Fleet Manager previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.

Prerequisites

To follow this article, you need:

  1. An Azure Kubernetes Fleet Manager with a hub cluster.
  2. Your application source code hosted in a GitHub repository.
  3. An existing Azure Container Registry to store the built application image.

Find your Azure Kubernetes Fleet Manager and start Automated Deployments configuration.

  1. In the Azure portal, search for Kubernetes Fleet Manager in the top search bar.
  2. Select Kubernetes fleet manager in the search results.
  3. Select your Azure Kubernetes Fleet Manager from the resource list.
  4. From the Fleet Manager service menu, under Fleet resources, select Automated deployments.
  5. Select + Create in the top menu. If this is your first deployment, you can also select Create in the deployment list area.

Screenshot showing create options for Fleet Manager Automated Deployments.
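
If you prefer the command line, you can also confirm the Fleet Manager resource exists by using the Azure CLI. This is a minimal sketch that assumes the fleet CLI extension is installed; the portal steps above remain the primary path.

# Install the fleet extension if needed, then list Fleet Manager resources
az extension add --name fleet
az fleet list --output table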

Connect to source code repository

Create a workflow and authorize it to connect to the desired source code repository and branch. Complete the following details on the Repository tab.

  1. Enter a meaningful name for the workflow in the Workflow name field.
  2. Select Authorize access to sign in to GitHub and allow Automated Deployments to retrieve a list of repositories.
  3. For Repository source, select either:
    1. My repositories for repositories owned by the currently authorized GitHub user, or
    2. All repositories for repositories the currently authorized GitHub user can access. With this option, you must also select the Organization that owns the repository.
  4. Choose the Repository and Branch.
  5. Select the Next button.

Screenshot showing selection of source control repository when setting up Fleet Manager Automated Deployments.

Specify application image and deployment configuration

To prepare an application to run on Kubernetes, you need to build it into a container image and store it in a container registry. A Dockerfile provides instructions on how to build the container image. If your source code repository doesn't already have a Dockerfile, Automated Deployments can generate one for you.

If your code repository already has a Dockerfile, you can use it to build the application image.

  1. Select Existing Dockerfile for the Container configuration.
  2. Select the Dockerfile from your repository.
  3. Enter the Dockerfile build context to pass the set of files and directories to the build process (typically the same folder as the Dockerfile). These files are used to build the image and, for a typical Dockerfile that copies the whole context, they're included in the final image unless they're excluded through a .dockerignore file.
  4. Select an existing Azure Container Registry. This registry is used to store the built application image. Any AKS cluster that runs the application must be granted AcrPull permissions on the registry (see the example command below).
  5. Set the Azure Container Registry image name. You must use this image name in your Kubernetes deployment manifests.

Screenshot showing selection of an existing Dockerfile when setting up Fleet Manager Automated Deployments.
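
For reference, one way to grant a member cluster AcrPull on the registry is to attach the registry to the AKS cluster with the Azure CLI. This is a sketch with placeholder names; substitute your own cluster, resource group, and registry names.

# Attach the registry so the cluster's kubelet identity is granted AcrPull (placeholder names)
az aks update \
    --name <member-cluster-name> \
    --resource-group <cluster-resource-group> \
    --attach-acr <registry-name>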

Choose the Kubernetes manifest configuration

An application running on Kubernetes is made up of many Kubernetes primitives. These components describe which container image to use, how many replicas to run, whether a public IP is required to expose the application, and so on. For more information, see the official Kubernetes documentation. If your source code repository doesn't already have Kubernetes manifests, Automated Deployments can generate them for you. You can also choose an existing Helm chart.
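
For illustration, the following is a minimal sketch of the kind of Deployment manifest this step expects. The names match the sample application used later in this article (virtual-worker in the contoso-store namespace); the image must match the Azure Container Registry image name you configured earlier.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-worker
  namespace: contoso-store
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virtual-worker
  template:
    metadata:
      labels:
        app: virtual-worker
    spec:
      containers:
        - name: virtual-worker
          image: contoso03.azurecr.io/virtual-worker:latest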

Warning

Don't choose fleet-system or any of the fleet-member namespaces, as these are internal namespaces used by Fleet Manager and can't be used to place your application.

If your code repository already has a Kubernetes manifest, you can select it to define the workload that a cluster resource placement deploys and configures.

  1. Select Use existing Kubernetes manifest deployment files for the deployment options.
  2. Select the Kubernetes manifest file or folder from your repository.
  3. Select the existing Fleet Manager Namespace to stage the workload in.
  4. Select the Next button.

Screenshot showing selection of an existing Kubernetes manifest when setting up Fleet Manager Automated Deployments.
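
To see which namespaces already exist on the hub cluster, and to steer clear of the internal ones called out in the warning, you can list them with kubectl once you have hub cluster credentials (the az fleet get-credentials command is shown later in this article).

# List namespaces on the Fleet Manager hub cluster
kubectl get namespaces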

Review configuration

Review the configuration for the repository, image, and deployment configuration.

Screenshot showing the configuration of an Automated Deployment so it can be reviewed before being submitted.

Select Next to start the process, which performs these actions:

  1. Create federated credentials to allow the GitHub Action to:
    1. Push the built container image to the Azure Container Registry.
    2. Stage the Kubernetes manifests into the selected namespace on the Fleet Manager hub cluster.
  2. Create a pull request on the code repository with any generated files and the workflow.

Setup takes a few minutes, so don't navigate away from the Deploy page.

Screenshot showing Automated Deployment configuration completed with a button to approve the pull request.

Review and merge pull request

When the Deploy stage finishes, select the Approve pull request button to open the generated pull request on your code repository.

Note

There's a known issue with the naming of the generated pull request where it says it's deploying to AKS. Despite this incorrect name, the resulting workflow does stage your workload on the Fleet Manager hub cluster in the namespace you selected.

Screenshot of pull request on GitHub.

  1. Review the changes under Files changed and make any desired edits.
  2. Select Merge pull request to merge the changes into your code repository.

Merging the change runs the GitHub Actions workflow that builds your application into a container image and stores it in the selected Azure Container Registry.

Screenshot showing GitHub Actions workflow in progress.
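
If you use the GitHub CLI, you can follow the workflow run from a terminal instead of the browser. This is a sketch; it assumes gh is installed and authenticated against your repository.

# List recent workflow runs, then stream the most recent one
gh run list --limit 5
gh run watch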

Check the generated resources

After the pipeline completes, you can review the built container image in the Azure portal by locating the Azure Container Registry instance you configured and opening the image repository.

Screenshot of the built container image hosted in an Azure Container Registry repository.
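
You can also confirm the image from the command line. This sketch uses the registry and repository names from the sample output later in this article (contoso03 and virtual-worker); substitute your own.

# List the tags pushed to the application's image repository
az acr repository show-tags \
    --name contoso03 \
    --repository virtual-worker \
    --output table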

You can also view the Fleet Manager Automated Deployments configuration. Selecting the workflow name opens the corresponding GitHub Actions workflow on GitHub.

Screenshot of the configured Fleet Manager Automated Deployment, including a link to the GitHub Action.

Finally, you can confirm the expected Kubernetes resources are staged on the Fleet Manager hub cluster.

# Retrieve kubectl credentials for the Fleet Manager hub cluster
az fleet get-credentials \
    --resource-group ${GROUP} \
    --name ${FLEET}

# Inspect the staged deployment in the namespace you selected
kubectl describe deployments virtual-worker --namespace=contoso-store

The output looks similar to the following. The replica counts of 0 are expected because workloads are only staged on the Fleet Manager hub cluster, not run there. Make sure that the Image property points at the built image in your Azure Container Registry.

Name:                   virtual-worker
Namespace:              contoso-store
CreationTimestamp:      Thu, 24 Apr 2025 01:56:49 +0000
Labels:                 workflow=actions.github.com-k8s-deploy
                        workflowFriendlyName=contoso-store-deploy
Annotations:            <none>
Selector:               app=virtual-worker
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=virtual-worker
  Containers:
   virtual-worker:
    Image:      contoso03.azurecr.io/virtual-worker:latest
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     2m
      memory:  20Mi
    Requests:
      cpu:     1m
      memory:  1Mi
    Environment:
      MAKELINE_SERVICE_URL:  http://makeline-service:3001
      ORDERS_PER_HOUR:       100
    Mounts:                  <none>
  Volumes:                   <none>
  Node-Selectors:            kubernetes.io/os=linux
  Tolerations:               <none>
Events:                      <none>

Define cluster resource placement

During preview, configure placement of your staged workload onto member clusters by following these steps.

  1. In the source code repository for your application, create a folder in the repository root called fleet.

  2. Create a new file named deploy-contoso-ns-two-regions.yaml in the fleet folder and add the following contents. This sample places the contoso-store namespace onto two clusters that must be in two different Azure regions.

    apiVersion: placement.kubernetes-fleet.io/v1
    kind: ClusterResourcePlacement
    metadata:
      name: distribute-virtual-worker-two-regions
    spec:
      resourceSelectors:
        - group: ""
          kind: Namespace
          version: v1          
          name: contoso-store
      policy:
        placementType: PickN
        numberOfClusters: 2
        topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: region
          whenUnsatisfiable: DoNotSchedule
    
  3. Modify the GitHub Actions workflow file, adding a reference to the ClusterResourcePlacement (CRP) file.

    DEPLOYMENT_MANIFEST_PATH: |
      ./virtual-worker.yaml
      ./fleet/deploy-contoso-ns-two-regions.yaml
    
  4. Commit the new CRP manifest and the updated GitHub Actions workflow file.

  5. Check that the workload is placed according to the policy defined in the CRP. You can check using either the Azure portal or kubectl at the command line, as shown after this list.
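
For example, with the hub cluster credentials retrieved earlier, you can inspect the placement status with kubectl. The resource name matches the metadata.name in the sample CRP above.

# Check placement status for the sample ClusterResourcePlacement
kubectl get clusterresourceplacement distribute-virtual-worker-two-regions
kubectl describe clusterresourceplacement distribute-virtual-worker-two-regions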

Screenshot of the Fleet Manager Resource Placements showing a successfully completed placement.

Next steps