Create cloud resources with Upbound

In this guide, you’ll create a control plane for provisioning and managing cloud resources across AWS, Azure, or GCP. You’ll build reusable APIs that allow your development teams to deploy and configure infrastructure themselves.

By the end of this guide, you’ll have:

  1. A control plane project
  2. Composite Resources defining your cloud resources
  3. APIs for self-service infrastructure provisioning
  4. A streamlined infrastructure workflow

This approach allows you to efficiently manage cloud resources across multiple providers, enabling your organization to scale its online services while maintaining control and consistency.

Step 0: Prerequisites

This guide assumes you are already familiar with AWS, Azure, or GCP.

For this guide, you’ll need:

  • The Up CLI installed
  • An Upbound free-tier account
  • A cloud provider account with administrative access
  • Docker Desktop
  • Visual Studio Code
  • KCL or Python Visual Studio Code Extension
  • kubectl installed

Install the up CLI

To use Upbound, you’ll need to install the up CLI. You can download it as a binary package or with Homebrew.

curl -sL "https://cli.upbound.io" | sh
brew install upbound/tap/up

Verify your installation

The minimum supported version is v0.35.0. To verify your CLI installation and version, use the up version command:

up version

You should see the installed version of the up CLI. Since you aren't logged in yet, Crossplane Version and Spaces Control Version return unknown.

Log in to Upbound

If you’ve installed the Up-Project-Action GitHub Action, you may skip this step.

Authenticate your CLI with your Upbound account by using the login command. This opens a browser window for you to log into your Upbound account.

  up login

Step 1: Create a new project

Upbound uses project directories containing configuration files to deploy infrastructure. Use the up project init command to create a project directory with the necessary scaffolding.

Init the project

  up project init upbound-qs && cd upbound-qs

The up project init command creates:

  • upbound.yaml: Project configuration file.
  • apis/: Directory for Crossplane composition definitions.
  • examples/: Directory for example claims.
  • .github/ and .vscode/: Directories for CI/CD and local development.

Step 2: Add project dependencies

Add the AWS S3 provider as a project dependency with the up dependency add command:

up dependency add 'xpkg.upbound.io/upbound/provider-aws-s3:>=v1.17.0'

Providers in your project create external resources for Upbound to manage. Functions add logic to automate complex provisioning processes. After adding the dependency, your upbound.yaml file's dependsOn section should reflect the change:

spec:
  dependsOn:
  - provider: xpkg.upbound.io/upbound/provider-aws-s3
    version: '>=v1.17.0'

Step 3: Create a claim and generate your API

Claims are the user-facing resources of the APIs you define. The up CLI can generate compositions for you based on the minimal information you provide in the claim.

Run the following command to generate a new example claim. If the CLI prompts you for more information, choose Composite Resource Claim and give it a name describing what it creates.

up example generate \
    --type claim \
    --api-group devexdemo.upbound.io \
    --api-version v1alpha1 \
    --kind StorageBucket \
    --name example \
    --namespace default

This command creates a minimal claim file. Replace the contents of examples/storagebucket/example.yaml with the claim below.

AWS

apiVersion: devexdemo.upbound.io/v1alpha1
kind: StorageBucket
metadata:
    name: example
    namespace: default
spec:
    parameters:
        region: us-west-1
        versioning: true
        acl: public-read

This StorageBucket claim uses the fields AWS requires to create an S3 bucket. You can discover a provider's required fields in the Upbound Marketplace.

Use this claim to generate a composite resource definition with the following command:

up xrd generate examples/storagebucket/example.yaml

This command generates a new Composite Resource Definition (XRD) file in apis/xstoragebuckets/definition.yaml. The XRD is the schema for the bucket API you defined in your claim. The up xrd generate command automatically infers the field types for the XRD from the input parameters in your example claim.
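
For reference, here's a trimmed sketch of what the generated definition typically looks like. Treat your generated file as the source of truth; its exact contents may differ.

apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xstoragebuckets.devexdemo.upbound.io
spec:
  group: devexdemo.upbound.io
  names:
    kind: XStorageBucket
    plural: xstoragebuckets
  claimNames:
    kind: StorageBucket
    plural: storagebuckets
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  acl:
                    type: string
                  region:
                    type: string
                  versioning:
                    type: boolean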

Step 4: Define your cloud resource composition

Next, generate a new composition based on your XRD. In the root of your control plane project, run up composition generate:

up composition generate apis/xstoragebuckets/definition.yaml

This command scaffolds a composition for you in apis/xstoragebuckets/composition.yaml.

Next, define your composition logic with an embedded function. Embedded functions allow you to build, package, and manage reusable logic components to help automate and customize resource configurations in your control plane. You can author these functions in KCL or Python instead of writing manual patch-and-transform steps in your YAML files.

Run the up function generate command and choose either KCL or Python.

up function generate test-function apis/xstoragebuckets/composition.yaml --language=<kcl or python>

This command generates an embedded function called test-function in the functions/test-function directory of your project. This also updates your composition file to include the new function in the pipeline.
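
For reference, here's a trimmed sketch of the updated composition. The exact contents, step name, and function reference the CLI writes may differ.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xstoragebuckets.devexdemo.upbound.io
spec:
  compositeTypeRef:
    apiVersion: devexdemo.upbound.io/v1alpha1
    kind: XStorageBucket
  mode: Pipeline
  pipeline:
  - step: compose
    functionRef:
      # The CLI fills in the reference to your embedded function;
      # the generated name may include your project name as a prefix.
      name: test-function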

Create an AWS Composition Function

Now, open your function file (main.k for KCL or main.py for Python) and paste in the code for the language you chose.

KCL

import models.io.upbound.aws.s3.v1beta1 as s3v1beta1

oxr = option("params").oxr # observed composite resource
params = oxr.spec.parameters

bucketName = "{}-bucket".format(oxr.metadata.name)

_metadata = lambda name: str -> any {
  {
    name = name
    annotations = {
      "krm.kcl.dev/composition-resource-name" = name
    }
  }
}

_items: [any] = [
    # Bucket in the desired region
    s3v1beta1.Bucket{
        metadata: _metadata(bucketName)
        spec = {
            forProvider = {
                region = params.region
            }
        }
    },
    s3v1beta1.BucketOwnershipControls{
        metadata: _metadata("{}-boc".format(oxr.metadata.name))
        spec = {
            forProvider = {
                bucketRef = {
                    name = bucketName
                }
                region = params.region
                rule:[{
                    objectOwnership:"BucketOwnerPreferred"
                }]
            }
        }
    },
    s3v1beta1.BucketPublicAccessBlock{
        metadata: _metadata("{}-pab".format(oxr.metadata.name))
        spec = {
            forProvider = {
                bucketRef = {
                    name = bucketName
                }
                region = params.region
                blockPublicAcls: False
                ignorePublicAcls: False
                restrictPublicBuckets: False
                blockPublicPolicy: False
            }
        }
    },
    # ACL for the bucket
    s3v1beta1.BucketACL{
        metadata: _metadata("{}-acl".format(oxr.metadata.name))
        spec = {
            forProvider = {
                bucketRef = {
                    name = bucketName
                }
                region = params.region
                acl = params.acl
            }
        }
    },
    # Default encryption for the bucket
    s3v1beta1.BucketServerSideEncryptionConfiguration{
        metadata: _metadata("{}-encryption".format(oxr.metadata.name))
        spec = {
            forProvider = {
                region = params.region
                bucketRef = {
                    name = bucketName
                }
                rule = [
                    {
                        applyServerSideEncryptionByDefault = [
                            {
                                sseAlgorithm = "AES256"
                            }
                        ]
                        bucketKeyEnabled = True
                    }
                ]
            }
        }
    }
]

# Set up versioning for the bucket if desired
if params.versioning:
    _items += [
        s3v1beta1.BucketVersioning{
            metadata: _metadata("{}-versioning".format(oxr.metadata.name))
            spec = {
                forProvider = {
                    region = params.region
                    bucketRef = {
                        name = bucketName
                    }
                    versioningConfiguration = [
                        {
                            status = "Enabled"
                        }
                    ]
                }
            }
        }
    ]

items = _items

Python

from crossplane.function import resource
from crossplane.function.proto.v1 import run_function_pb2 as fnv1

from .model.io.k8s.apimachinery.pkg.apis.meta import v1 as metav1
from .model.io.upbound.devexdemo.xstoragebucket import v1alpha1
from .model.io.upbound.aws.s3.bucket import v1beta1 as bucketv1beta1
from .model.io.upbound.aws.s3.bucketacl import v1beta1 as aclv1beta1
from .model.io.upbound.aws.s3.bucketownershipcontrols import v1beta1 as bocv1beta1
from .model.io.upbound.aws.s3.bucketpublicaccessblock import v1beta1 as pabv1beta1
from .model.io.upbound.aws.s3.bucketversioning import v1beta1 as verv1beta1
from .model.io.upbound.aws.s3.bucketserversideencryptionconfiguration import v1beta1 as ssev1beta1


def compose(req: fnv1.RunFunctionRequest, rsp: fnv1.RunFunctionResponse):
    observed_xr = v1alpha1.XStorageBucket(**req.observed.composite.resource)
    params = observed_xr.spec.parameters

    desired_bucket = bucketv1beta1.Bucket(
        apiVersion="s3.aws.upbound.io/v1beta1",
        kind="Bucket",
        spec=bucketv1beta1.Spec(
            forProvider=bucketv1beta1.ForProvider(
                region=params.region,
            ),
        ),
    )
    resource.update(rsp.desired.resources["bucket"], desired_bucket)

    if "bucket" not in req.observed.resources:
        return

    observed_bucket = bucketv1beta1.Bucket(**req.observed.resources["bucket"].resource)
    if observed_bucket.metadata is None or observed_bucket.metadata.annotations is None:
        return
    if "crossplane.io/external-name" not in observed_bucket.metadata.annotations:
        return

    bucket_external_name = observed_bucket.metadata.annotations[
        "crossplane.io/external-name"
    ]

    desired_acl = aclv1beta1.BucketACL(
        apiVersion="s3.aws.upbound.io/v1beta1",
        kind="BucketACL",
        spec=aclv1beta1.Spec(
            forProvider=aclv1beta1.ForProvider(
                region=params.region,
                bucket=bucket_external_name,
                acl=params.acl,
            ),
        ),
    )
    resource.update(rsp.desired.resources["acl"], desired_acl)

    desired_boc = bocv1beta1.BucketOwnershipControls(
        apiVersion="s3.aws.upbound.io/v1beta1",
        kind="BucketOwnershipControls",
        spec=bocv1beta1.Spec(
            forProvider=bocv1beta1.ForProvider(
                region=params.region,
                bucket=bucket_external_name,
                rule=[
                    bocv1beta1.RuleItem(
                        objectOwnership="BucketOwnerPreferred",
                    ),
                ],
            )
        ),
    )
    resource.update(rsp.desired.resources["boc"], desired_boc)

    desired_pab = pabv1beta1.BucketPublicAccessBlock(
        apiVersion="s3.aws.upbound.io/v1beta1",
        kind="BucketPublicAccessBlock",
        spec=pabv1beta1.Spec(
            forProvider=pabv1beta1.ForProvider(
                region=params.region,
                bucket=bucket_external_name,
                blockPublicAcls=False,
                ignorePublicAcls=False,
                restrictPublicBuckets=False,
                blockPublicPolicy=False,
            )
        ),
    )
    resource.update(rsp.desired.resources["pab"], desired_pab)

    desired_sse = ssev1beta1.BucketServerSideEncryptionConfiguration(
        apiVersion="s3.aws.upbound.io/v1beta1",
        kind="BucketServerSideEncryptionConfiguration",
        spec=ssev1beta1.Spec(
            forProvider=ssev1beta1.ForProvider(
                region=params.region,
                bucket=bucket_external_name,
                rule=[
                    ssev1beta1.RuleItem(
                        applyServerSideEncryptionByDefault=[
                            ssev1beta1.ApplyServerSideEncryptionByDefaultItem(
                                sseAlgorithm="AES256",
                            ),
                        ],
                        bucketKeyEnabled=True,
                    ),
                ],
            ),
        ),
    )
    resource.update(rsp.desired.resources["sse"], desired_sse)

    if not params.versioning:
        return

    desired_versioning = verv1beta1.BucketVersioning(
        apiVersion="s3.aws.upbound.io/v1beta1",
        kind="BucketVersioning",
        spec=verv1beta1.Spec(
            forProvider=verv1beta1.ForProvider(
                region=params.region,
                bucket=bucket_external_name,
                versioningConfiguration=[
                    verv1beta1.VersioningConfigurationItem(
                        status="Enabled",
                    ),
                ],
            ),
        ),
    )
    resource.update(rsp.desired.resources["versioning"], desired_versioning)

When you generate a function, the up CLI automatically adds import statements that bring the provider schemas into your function.

The KCL and Python Visual Studio Code extensions infer these schemas and provide authoring capabilities like autocompletion and linting for type mismatches, missing variables, and more.

With KCL or Python, you composed the resources defined in your XRD and wrote custom logic, such as conditionally enabling versioning and configuring server-side encryption for your bucket.

Next, run and test your composition.

Step 5: Run and test your project

Use the up project run command to run and test your control plane project on a development control plane hosted in Upbound’s Cloud.

up project run

This command creates a development control plane in the Upbound Cloud and deploys your project’s package to it.

Next, validate your control plane project's state by invoking the API and verifying the resources it creates.

Update your up CLI context to your control plane, which uses the name of your control plane project (upbound-qs) by default.

up ctx ./upbound-qs

Create provider credentials

Your project configuration now includes your provider dependency and requires an authentication method.

A ProviderConfig is a custom resource that defines how your control plane authenticates and connects with cloud providers like AWS. It acts as a configuration bridge between your control plane’s managed resources and the cloud provider’s API.

Tip
For more detailed instructions or alternate authentication methods, visit the provider documentation.

Using AWS access keys (long-term IAM credentials) requires storing the keys as a secret in your control plane. To create the secret, download your AWS access key ID and secret access key. Then create a new file called aws-credentials.txt and paste in your access key ID and secret access key:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

Next, create a new secret to store your credentials in your control plane. The kubectl create secret command stores your AWS credentials securely in the control plane:

kubectl create secret generic aws-secret \
    -n crossplane-system \
    --from-file=my-aws-secret=./aws-credentials.txt

Next, create a new file called provider-config.yaml and paste the configuration below.

apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-secret
      key: my-aws-secret

Lastly, apply the provider configuration.

kubectl apply -f provider-config.yaml

When your control plane creates the resources defined in your composition, Upbound uses the ProviderConfig to locate and retrieve the credentials from the secret.
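
Managed resources select their credentials through the providerConfigRef field in their spec, which defaults to the ProviderConfig named default. For illustration, a Bucket that references this ProviderConfig explicitly looks like the following (the bucket name here is only a placeholder):

apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket   # placeholder name for illustration
spec:
  forProvider:
    region: us-west-1
  providerConfigRef:
    name: default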

Apply your claim

Apply the example claim with kubectl.

kubectl apply -f examples/storagebucket/example.yaml

Inspect the state of the managed resources with the up CLI.

up alpha get managed -o yaml

Now you can validate your results through the Upbound Console and make any changes required to test your resources.

Step 6: Build and push your project to the Upbound Marketplace

When you’re ready to share your work, you can build your project and publish it to the Upbound Marketplace with the up CLI.

Building your control plane project

To build your control plane project, use the up project build command.

up project build

This command takes your project's dependencies and metadata and compiles them into a single OCI image at _output/upbound-qs-1.uppkg.

Pushing your control plane project to the Upbound Marketplace

Log in to Upbound if you aren't already.

up login

Next, push the project.

up project push

Your package is now pushed to the Upbound Marketplace.

Try it out

With your control plane project set up, go to Upbound’s Consumer Portal guide to create resources in your cloud service provider.