Triton Cloud Load Balancer

Modified: 10 Jul 2025 13:04 UTC

The Triton Cloud Load Balancer (CLB) provides an easy-to-use, persistent load balancing service. It can be used in conjunction with CNS to further reduce downtime and service interruptions.

The Triton Cloud Load Balancer Image

Unlike Triton NFS volumes, load balancer instances are entirely user-serviceable and are distributed as a standard image. No direct interaction with a CLB instance is necessary; it can be configured and operated entirely via metadata. You can also SSH directly to the instance to examine how it operates, or make changes as necessary.

Load Balancer Configuration

The load balancer is configured using the following metadata keys.

Key                                   Description
cloud.tritoncompute:loadbalancer      Must be present and set to true.
cloud.tritoncompute:portmap           Configures the mapping of listening ports to backends.
cloud.tritoncompute:max_rs            Maximum number of backend servers. Defaults to 32.
cloud.tritoncompute:certificate_name  Subject Common Name(s) for generating a TLS certificate.
cloud.tritoncompute:metrics_acl       Space- or comma-separated list of IP prefixes that are
                                      allowed to access the metrics endpoint.
cloud.tritoncompute:metrics_port      Port number for the metrics endpoint. Defaults to 8405.
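
For example, you can confirm which of these keys are set on a running load balancer by inspecting its metadata. A minimal sketch, assuming a hypothetical instance name of my-clb-12345 and the json CLI tool for filtering; plain triton instance get output also includes the metadata object:

# Show the metadata currently set on the instance
$ triton instance get my-clb-12345 | json metadata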

Portmap Service Designations

The cloud.tritoncompute:portmap metadata key is a list of service designations separated by commas or spaces.

A service designation uses the following syntax:

<type>://<listen port>:<backend name>[:<backend port>][{health check params}]
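
For example, the HTTPS designation used later in this section breaks down as follows:

# type         = https
# listen port  = 443
# backend name = my-backend.svc.my-login.us-west-1.cns.example.com
# backend port = 8443
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443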

Health Check Configuration

Health checks can be configured using a JSON-like syntax appended to service designations. The parameters are enclosed in curly braces {} and use comma-separated key:value pairs.

Supported Parameters

check    Path to request for HTTP health checks (for example, /healthz).
port     Port to use for health checks. Defaults to the backend port.
rise     Number of consecutive successful checks before a backend is considered healthy.
fall     Number of consecutive failed checks before a backend is considered unhealthy.

Health Check Syntax

{check:/endpoint,port:9000,rise:2,fall:1}

All parameters are optional and can be specified in any order. If port is not specified, health checks will use the same port as the backend service.

Health Check Examples

# HTTP service with health check on same port
http://80:web.example.com:8080{check:/healthz}

# HTTPS service with health check on different port
https://443:api.example.com:8443{check:/status,port:9000}

# TCP service with health check parameters
tcp://3306:db.example.com:3306{check:/ping,rise:3,fall:2}

# Service with all health check parameters
http://80:app.example.com:8080{check:/health,port:8081,rise:5,fall:2}

Basic Service Examples

# Basic HTTP service
http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80

# Basic HTTPS service
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443

# Basic TCP service (using SRV records)
tcp://636:my-backend.svc.my-login.us-west-1.cns.example.com

# HTTP service with health check
http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80{check:/healthz}

# HTTPS service with comprehensive health check configuration
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443{check:/status,port:9000,rise:3,fall:1}

Automatic Certificate Generation

If the cloud.tritoncompute:certificate_name key is provided, the instance will attempt to generate a TLS certificate using Let's Encrypt. For the certificate to be generated correctly, you must configure the correct CNAMEs in your external DNS.

The DNS name that you want to use must be a CNAME to either the CNS instance name or the CNS service name (if the triton.cns.services tag is applied to the load balancer instance).

You must also create a CNAME for the _acme-challenge record, pointing it at the _acme-challenge record of the load balancer's CNS instance name or service name.

For example, to use a certificate_name of my-service.example.com with a load balancer that has the tag triton.cns.services=my-clb, the following CNAME records must be created in the example.com zone:

my-service.example.com.                 IN CNAME my-clb.svc.my-login.us-central-1.triton.zone
_acme-challenge.my-service.example.com. IN CNAME _acme-challenge.my-clb.svc.my-login.us-central-1.triton.zone

Note: The CNS suffix will vary by datacenter. Additionally, not all datacenters will support using the login name. You may need to use your account UUID instead. You can use the triton instance get ... command to check the dns_names value for valid names.

If you are able to determine the proper CNAME records before provisioning, you can create them before creating the load balancer instance. If the CNAMEs are not set up correctly, an error will be logged to /var/log/triton-dehydrated.log and a certificate will not be generated.
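
Before provisioning (or while troubleshooting), you can check that the records resolve as expected. A quick sketch using dig and the example names above; the load balancer instance name is hypothetical:

# List the CNS names that are valid targets for the CNAMEs
$ triton instance get my-clb-12345 | json dns_names

# Confirm both CNAMEs point at the load balancer's CNS service name
$ dig +short my-service.example.com CNAME
my-clb.svc.my-login.us-central-1.triton.zone.
$ dig +short _acme-challenge.my-service.example.com CNAME
_acme-challenge.my-clb.svc.my-login.us-central-1.triton.zone.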

If no certificate name is provided in the metadata, a self-signed certificate will be generated automatically.

If the value of the cloud.tritoncompute:certificate_name key changes, the load balancer will attempt to generate a new certificate with the provided name(s); the new certificate should be available within a few minutes. This can be used to dynamically change the Subject CN or add/remove SAN names (see below) as needed, or to correct an improperly configured certificate_name.
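
For an instance that is already running, one way to change the key is the raw CloudAPI passthrough in the triton CLI (the UpdateMachineMetadata endpoint). A hedged sketch, assuming a hypothetical instance name and that your node-triton version supports the -X and -d options to triton cloudapi:

# Look up the instance UUID, then update the certificate name metadata key
$ LB_UUID=$(triton instance get my-clb-12345 | json id)
$ triton cloudapi -X POST "/my/machines/$LB_UUID/metadata" \
    -d '{"cloud.tritoncompute:certificate_name": "my-service.example.com"}'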

Using Subject Alternate Names (SAN)

The certificate_name may be a list of names rather than a single name. The first name will be used as the Common Name in the certificate subject. Additional names will be added as Subject Alternate Names. Cloud Load Balancer does not support serving different certificates for different services. All services must use the same certificate file.
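
The separator for multiple names is not specified here; assuming it follows the same comma- or space-separated convention as the other list-valued keys, a certificate with one CN and one SAN might be requested like this (hypothetical names, passed with -m at creation time or via a later metadata update):

# www.example.com becomes the Subject CN; api.example.com is added as a SAN
-m 'cloud.tritoncompute:certificate_name=www.example.com api.example.com'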

Metrics Endpoint

If the cloud.tritoncompute:metrics_acl metadata key is not empty, the metrics endpoint will be enabled. The ACL must be an IP prefix (e.g., 198.51.100.0/24); multiple comma- or space-separated prefixes can be included.

The metrics endpoint listens on port 8405 by default. This can be customized by setting the cloud.tritoncompute:metrics_port metadata key to a different port number (between 1 and 65534).

Note: The load balancer will respond to all hosts on the metrics port; hosts outside of the configured ACL will receive a 403 response. If you do not want the load balancer to respond at all, you must also configure Cloud Firewall for the instance.
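
As a quick check, you can query the endpoint directly with curl. A sketch assuming the default port of 8405, a hypothetical load balancer address of 203.0.113.10, and a metrics path of /metrics (adjust to match your deployment):

# From an address inside the ACL: metrics should be returned
$ curl -s http://203.0.113.10:8405/metrics | head

# From an address outside the ACL: expect an HTTP 403
$ curl -s -o /dev/null -w '%{http_code}\n' http://203.0.113.10:8405/metrics
403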

Notes

Package Recommendation

You're free to use any package for deploying a load balancer, and the memory and disk requirements are very modest: it can work quite well with as little as 128MB of RAM and requires less than 500MB of disk space. The more important consideration is CPU time, so ensure that your load balancer is allocated adequate CPU. It is recommended that load balancer instances have a cpu_cap of at least 200.

You can use CMON to monitor the performance of the load balancer and ensure that it has adequate CPU time. In particular, pay close attention to the cpucap_above_seconds_total and cpucap_waiting_threads_count values. If either of these is consistently above 0, then the load balancer needs a higher cpu_cap. Load Balancer instances can be resized live, and additional CPU capacity will be available immediately.

Tenants should pick the smallest size necessary for their needs in order to avoid incurring undue cost.
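
If CMON shows the instance is consistently hitting its CPU cap, it can be moved to a larger package in place. A hedged sketch, assuming a hypothetical instance name and package name, and that your node-triton version includes the instance resize subcommand:

# List available packages, then resize the load balancer live
$ triton packages
$ triton instance resize my-clb-12345 lb1.medium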

Example Load Balancer Deployment

This is an example using the triton command line client to deploy a load balancer.

$ triton create cloud-load-balancer lb1.small \
    --name my-clb-{{shortId}} \
    --network MNX-Triton-Public --network My-Fabric-Network \
    -m cloud.tritoncompute:loadbalancer=true \
    -m cloud.tritoncompute:portmap=http://80:my-backend.svc.my-login.us-central-1.cns.mnx.io,https://443:my-backend.svc.my-login.us-central-1.cns.mnx.io \
    -m cloud.tritoncompute:certificate_name=www.example.com \
    -t triton.cns.services=my-clb \
    --wait

Let's look at the options used.