Observability
This feature is available in the Enterprise Plan and above. For more information, see our pricing plans or contact our sales team.
This guide explains how to configure observability in Upbound Spaces. Upbound provides integrated observability features built on OpenTelemetry to collect, process, and export logs, metrics, and traces.
Upbound Spaces offers two levels of observability:
- Space-level observability - Observes the cluster infrastructure where Spaces software is installed (Self-Hosted only)
- Control plane observability - Observes workloads running within individual control planes
Space-level observability (available since v1.6.0, GA in v1.14.0):
- Disabled by default
- Requires manual enablement and configuration
- Self-Hosted Spaces only
Control plane observability (available since v1.13.0, GA in v1.14.0):
- Enabled by default
- No additional configuration required
Prerequisites
Control plane observability is enabled by default. No additional setup is required.
Self-hosted Spaces
- Enable the observability feature when installing Spaces:
up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
...
--set "observability.enabled=true"
Set features.alpha.observability.enabled=true instead if you are using a Spaces version before v1.14.0.
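For example, on a pre-v1.14.0 Space the same install command passes the alpha flag instead (other flags elided as above):
up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
  ...
  --set "features.alpha.observability.enabled=true"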
- Install the OpenTelemetry Operator (required for Space-level observability):
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/download/v0.116.0/opentelemetry-operator.yaml
Important: If running Spaces v1.11 or later, use OpenTelemetry Operator v0.110.0 or later due to breaking changes.
Space-level Observability
Space-level observability is only available for self-hosted Spaces and allows administrators to observe the cluster infrastructure.
Configuration
Configure Space-level observability using the spacesCollector value in your
Spaces Helm chart:
observability:
  spacesCollector:
    config:
      exporters:
        otlphttp:
          endpoint: "<your-endpoint>"
          headers:
            api-key: YOUR_API_KEY
      exportPipeline:
        logs:
          - otlphttp
        metrics:
          - otlphttp
This configuration exports metrics and logs from:
- Crossplane installation
- Spaces infrastructure (controller, API, router, etc.)
- provider-helm and provider-kubernetes
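Because the Space-level pipeline relies on the OpenTelemetry Operator, a quick sanity check after enabling the feature is to confirm that collector resources exist and their pods are running. This assumes the operator's OpenTelemetryCollector CRD is installed as described in the prerequisites, and the upbound-system namespace shown here may differ in your installation:
kubectl get opentelemetrycollectors --all-namespaces
kubectl get pods -n upbound-system | grep collector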
Router metrics
The Spaces router uses Envoy as a reverse proxy and automatically exposes metrics when you enable Space-level observability. These metrics provide visibility into:
- Traffic routing to control planes and services
- Request status codes, timeouts, and retries
- Circuit breaker state preventing cascading failures
- Client connection patterns and request volume
- Request latency (P50, P95, P99)
For more information about available metrics, example queries, and how to enable this feature, see the Space-level observability guide.
Control plane observability
Control plane observability collects telemetry data from workloads running
within individual control planes using SharedTelemetryConfig resources.
The pipeline deploys an OpenTelemetry Collector per control plane, configured by a SharedTelemetryConfig at the group level. Collectors forward data to external observability backends.
Starting with Spaces v1.13, telemetry includes only user-facing control plane workloads (Crossplane, providers, and functions).
Self-hosted users can include system workloads (api-server, etcd) by setting
observability.collectors.includeSystemTelemetry=true in Helm.
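As a sketch, a self-hosted installation can opt in at install or upgrade time by adding the value to the up space init command shown in the prerequisites (other flags elided; the setting can equally go in your Helm values):
up space init --token-file="${SPACES_TOKEN_PATH}" "v${SPACES_VERSION}" \
  ...
  --set "observability.enabled=true" \
  --set "observability.collectors.includeSystemTelemetry=true"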
SharedTelemetryConfig
SharedTelemetryConfig is a group-scoped custom resource that defines telemetry
configuration for control planes.
New Relic example
apiVersion: observability.spaces.upbound.io/v1alpha1
kind: SharedTelemetryConfig
metadata:
  name: newrelic
  namespace: default
spec:
  controlPlaneSelector:
    labelSelectors:
      - matchLabels:
          org: foo
  exporters:
    otlphttp:
      endpoint: https://otlp.nr-data.net
      headers:
        api-key: YOUR_API_KEY
  exportPipeline:
    metrics: [otlphttp]
    traces: [otlphttp]
    logs: [otlphttp]
Datadog example
apiVersion: observability.spaces.upbound.io/v1alpha1
kind: SharedTelemetryConfig
metadata:
  name: datadog
  namespace: default
spec:
  controlPlaneSelector:
    labelSelectors:
      - matchLabels:
          org: foo
  exporters:
    datadog:
      api:
        site: ${DATADOG_SITE}
        key: ${DATADOG_API_KEY}
  exportPipeline:
    metrics: [datadog]
    traces: [datadog]
    logs: [datadog]
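Either manifest is applied like any other group-scoped resource; the file name below is illustrative, and the namespace matches the metadata.namespace used in the examples:
kubectl apply -f sharedtelemetryconfig.yaml
kubectl get stc -n default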
Control plane selection
Use spec.controlPlaneSelector to specify which control planes should use the
telemetry configuration.
Label-based selection
spec:
  controlPlaneSelector:
    labelSelectors:
      - matchLabels:
          environment: production
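For a control plane to match this selector, its ControlPlane resource needs the corresponding label. Assuming kubectl access to the group namespace, a command along these lines adds it (the control plane and group names are illustrative):
kubectl label controlplanes my-controlplane environment=production -n default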
Expression-based selection
spec:
  controlPlaneSelector:
    labelSelectors:
      - matchExpressions:
          - { key: environment, operator: In, values: [production, staging] }
Name-based selection
spec:
  controlPlaneSelector:
    names:
      - controlplane-dev
      - controlplane-staging
      - controlplane-prod
Manage sensitive data
Available from Spaces v1.10
Store sensitive data in Kubernetes secrets and reference them in your
SharedTelemetryConfig:
- Create the secret:
kubectl create secret generic sensitive -n <STC_NAMESPACE> \
  --from-literal=apiKey='YOUR_API_KEY'
- Reference the secret in SharedTelemetryConfig:
apiVersion: observability.spaces.upbound.io/v1alpha1
kind: SharedTelemetryConfig
metadata:
  name: newrelic
spec:
  configPatchSecretRefs:
    - name: sensitive
      key: apiKey
      path: exporters.otlphttp.headers.api-key
  controlPlaneSelector:
    labelSelectors:
      - matchLabels:
          org: foo
  exporters:
    otlphttp:
      endpoint: https://otlp.nr-data.net
      headers:
        api-key: dummy # Replaced by the secret value
  exportPipeline:
    metrics: [otlphttp]
    traces: [otlphttp]
    logs: [otlphttp]
Telemetry processing
Available from Spaces v1.11
Configure processing pipelines to transform telemetry data using the transform processor.
Add labels to metrics
spec:
  processors:
    transform:
      error_mode: ignore
      metric_statements:
        - context: datapoint
          statements:
            - set(attributes["newLabel"], "someLabel")
  processorPipeline:
    metrics: [transform]
Remove labels
From metrics:
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          - delete_key(attributes, "kubernetes_namespace")
From logs:
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - delete_key(attributes, "log.file.name")
Modify log messages
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(attributes["original"], body)
          - set(body, Concat(["log message:", body], " "))
Monitor status
Check the status of your SharedTelemetryConfig:
kubectl get stc
NAME      SELECTED   FAILED   PROVISIONED   AGE
datadog   1          0        1             63s
- SELECTED: Number of control planes selected
- FAILED: Number of control planes that failed provisioning
- PROVISIONED: Number of successfully running collectors
For detailed status information:
kubectl describe stc <name>
Supported exporters
Both Space-level and control plane observability support:
- datadog - For Datadog integration
- otlphttp - General-purpose exporter (used by New Relic, among others)
- debug - For troubleshooting
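When a pipeline isn't delivering data, a minimal sketch like the one below routes telemetry to the debug exporter so you can inspect it in the collector's logs. It follows the same SharedTelemetryConfig shape as the earlier examples; the name and selector are illustrative:
apiVersion: observability.spaces.upbound.io/v1alpha1
kind: SharedTelemetryConfig
metadata:
  name: debug
  namespace: default
spec:
  controlPlaneSelector:
    labelSelectors:
      - matchLabels:
          org: foo
  exporters:
    debug: {}
  exportPipeline:
    metrics: [debug]
    logs: [debug]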
Considerations
- Control plane conflicts: Each control plane can only use one SharedTelemetryConfig. Multiple configs selecting the same control plane conflict.
- Custom collector image: Both Space-level and control plane observability use the same custom OpenTelemetry Collector image with supported exporters.
- Resource scope: SharedTelemetryConfig resources are group-scoped, allowing different telemetry configurations per group.
For more advanced configuration options, review the Helm chart reference and OpenTelemetry Transformation Language documentation.