# Triton Cloud Load Balancer

The Triton Cloud Load Balancer provides an easy-to-use, persistent load balancer. It can be used in conjunction with CNS to further reduce downtime and service interruptions.
## The Triton Cloud Load Balancer Image

Unlike Triton NFS volumes, load balancer instances are entirely user-serviceable and are distributed as a standard image. No direct interaction with a CLB instance is necessary; it can be configured and operated entirely via metadata. However, you can also SSH directly into the instance to examine how it operates, or make changes as necessary.
## Load Balancer Configuration

The load balancer is configured using metadata keys.
| Key | Description |
|---|---|
| `cloud.tritoncompute:loadbalancer` | Must be present and set to `true`. |
| `cloud.tritoncompute:portmap` | Configures the mapping of listening ports to backends. |
| `cloud.tritoncompute:max_rs` | Maximum number of backend servers. Defaults to `32`. |
| `cloud.tritoncompute:certificate_name` | Subject Common Name for generating a TLS certificate. |
| `cloud.tritoncompute:metrics_acl` | Space- or comma-separated list of IP prefixes that are allowed to access the metrics endpoint. |
| `cloud.tritoncompute:metrics_port` | Port number for the metrics endpoint. Defaults to `8405`. |
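These keys are supplied at provision time as `-m` flags to `triton create`. The fragment below is a hypothetical illustration; the backend name, ACL, and port values are examples only:

```shell
# Illustrative metadata flags (values are examples only)
-m cloud.tritoncompute:loadbalancer=true \
-m cloud.tritoncompute:portmap=tcp://5432:db.example.com:5432 \
-m cloud.tritoncompute:max_rs=64 \
-m cloud.tritoncompute:metrics_acl=198.51.100.0/24 \
-m cloud.tritoncompute:metrics_port=9000
```

A complete `triton create` invocation is shown in the Example Load Balancer Deployment section.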
## Portmap Service Designations

The `cloud.tritoncompute:portmap` metadata key is a list of service designations separated by commas or spaces.

A service designation uses the following syntax:

```
<type>://<listen port>:<backend name>[:<backend port>][{health check params}]
```
- `type`: Must be one of `http`, `https`, `https+insecure`, `https-http`, or `tcp`:
  - `http`: Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must not use SSL/TLS. An `X-Forwarded-For` header will be added to requests.
  - `https`: Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must use SSL/TLS. The backend certificate WILL be verified. The front end services will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided; otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `https+insecure`: Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must use SSL/TLS. The backend certificate will NOT be verified. The front end services will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided; otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `https-http`: Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must NOT use SSL/TLS. The front end services will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided; otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `tcp`: Configures a Layer-4 proxy. The backend can use any port. If SSL/TLS is desired, the backend must configure its own certificate.
- `listen port`: Designates the front end listening port.
- `backend name`: A DNS name that must be resolvable. This SHOULD be a CNS name (preferably a service name), but can be any fully qualified DNS domain name.
- `backend port`: Optional. Designates the port that back end servers will be listening on. If provided, the back end will be configured to use A record lookups. If not provided, the back end will be configured to use SRV record lookups.
- `health check params`: Optional. JSON-like syntax for configuring health checks (see the Health Check Configuration section below).
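To make the grammar concrete, here is a small Python sketch that splits a service designation into its parts. This is illustrative only; the load balancer's actual parser may behave differently on edge cases:

```python
import re

# Grammar: <type>://<listen port>:<backend name>[:<backend port>][{health check params}]
DESIGNATION = re.compile(
    r"^(?P<type>https\+insecure|https-http|https|http|tcp)://"
    r"(?P<listen>\d+):"
    r"(?P<backend>[A-Za-z0-9.-]+)"
    r"(?::(?P<port>\d+))?"
    r"(?:\{(?P<health>[^}]*)\})?$"
)

def parse(designation: str) -> dict:
    """Split a service designation into type, listen port, backend, and health params."""
    m = DESIGNATION.match(designation)
    if not m:
        raise ValueError(f"invalid service designation: {designation}")
    parts = m.groupdict()
    if parts["health"]:
        # "{check:/status,port:9000}" -> {"check": "/status", "port": "9000"}
        parts["health"] = dict(kv.split(":", 1) for kv in parts["health"].split(","))
    return parts
```

For example, `parse("tcp://636:ldap.example.com")` yields no backend port, which (per the rules above) means SRV record lookup would be used.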
## Health Check Configuration

Health checks can be configured using a JSON-like syntax appended to service designations. The parameters are enclosed in curly braces (`{}`) and use comma-separated `key:value` pairs.
### Supported Parameters

- `check`: HTTP endpoint path for health checks (e.g., `/healthz`, `/status`, `/ping`)
- `port`: Port number for health check requests (overrides the backend port)
- `rise`: Number of consecutive successful checks before marking a server as healthy (default: HAProxy default)
- `fall`: Number of consecutive failed checks before marking a server as unhealthy (default: HAProxy default)
### Health Check Syntax

```
{check:/endpoint,port:9000,rise:2,fall:1}
```

All parameters are optional and can be specified in any order. If `port` is not specified, health checks will use the same port as the backend service.
Health Check Examples
# HTTP service with health check on same port
http://80:web.example.com:8080{check:/healthz}
# HTTPS service with health check on different port
https://443:api.example.com:8443{check:/status,port:9000}
# TCP service with health check parameters
tcp://3306:db.example.com:3306{check:/ping,rise:3,fall:2}
# Service with all health check parameters
http://80:app.example.com:8080{check:/health,port:8081,rise:5,fall:2}
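The `check` path must be an endpoint the backend actually serves. As a sketch, a backend might expose a trivial `/healthz` handler like this (Python standard library; the path is just the one used in the examples above, and a real backend would bind its service port rather than an ephemeral one):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answers HTTP health probes such as the load balancer's check requests."""

    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep per-probe noise out of the logs

# Port 0 asks the OS for a free port; a real backend would use its service port (e.g. 8080).
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Anything other than a `200` response (or no response) counts against the server's health.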
## Basic Service Examples

```
# Basic HTTP service
http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80

# Basic HTTPS service
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443

# Basic TCP service (using SRV records)
tcp://636:my-backend.svc.my-login.us-west-1.cns.example.com

# HTTP service with health check
http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80{check:/healthz}

# HTTPS service with comprehensive health check configuration
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443{check:/status,port:9000,rise:3,fall:1}
```
## Automatic Certificate Generation

If `cloud.tritoncompute:certificate_name` is provided, the instance will attempt to generate a TLS certificate using Let's Encrypt. In order for the certificate to be properly generated, you must configure the correct CNAMEs in your external DNS.

The DNS name that you want to use must be a CNAME to either the CNS instance name or the CNS service name (if the `triton.cns.services` tag is applied to the load balancer instance).

You must also create a CNAME for the `_acme-challenge` record, which must resolve to the `_acme-challenge` record of the load balancer instance's own CNS instance or service name.

For example, to use a `certificate_name` of `my-service.example.com` with a load balancer tagged `triton.cns.services=my-clb`, the following CNAME records must be created in the `example.com` zone:

```
my-service.example.com. IN CNAME my-clb.svc.my-login.us-central-1.triton.zone
_acme-challenge.my-service.example.com. IN CNAME _acme-challenge.my-clb.svc.my-login.us-central-1.triton.zone
```
Note: The CNS suffix will vary by datacenter. Additionally, not all datacenters support using the login name; you may need to use your account UUID instead. You can use the `triton instance get ...` command to check the `dns_names` value for valid names.

If you are able to determine the proper CNAME records before provisioning, you can create them before creating the load balancer instance. If the CNAMEs are not set up correctly, an error will be logged to `/var/log/triton-dehydrated.log` and a certificate will not be generated.
If no certificate name is provided in the metadata, a self-signed certificate will be generated automatically.
If the value of the `cloud.tritoncompute:certificate_name` key changes, the load balancer will attempt to generate a new certificate with the provided name(s), which should be available within a few minutes. This can be used to dynamically change the Subject CN or add/remove SAN names (see below) as needed, or to reconfigure an improperly configured `certificate_name`.
## Using Subject Alternate Names (SAN)

The `certificate_name` may be a list of names rather than a single name. The first name will be used as the Common Name in the certificate subject; additional names will be added as Subject Alternate Names. Cloud Load Balancer does not support serving different certificates for different services. All services must use the same certificate file.
## Metrics Endpoint

If the `cloud.tritoncompute:metrics_acl` metadata key is not empty, the metrics endpoint will be enabled. The ACL must be an IP prefix (e.g., `198.51.100.0/24`). Multiple comma- or space-separated prefixes can be included.

The metrics endpoint listens on port `8405` by default. This can be customized by setting the `cloud.tritoncompute:metrics_port` metadata key to a different port number (must be between 1 and 65534).
Note: The load balancer will respond to all hosts on the metrics port. Hosts outside of the configured ACL will receive a `403` response. If you want the load balancer not to respond at all, you must also configure Cloud Firewall for the instance.
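The ACL check itself is enforced by the load balancer, but its semantics can be sketched as follows: a client is allowed if its address falls within any listed prefix. This is an illustrative Python sketch, not the actual implementation:

```python
import ipaddress
import re

def acl_allows(acl: str, client_ip: str) -> bool:
    """Return True if client_ip falls within any comma- or space-separated prefix."""
    prefixes = [p for p in re.split(r"[,\s]+", acl.strip()) if p]
    addr = ipaddress.ip_address(client_ip)
    # strict=False tolerates host bits set in a prefix, e.g. 198.51.100.1/24
    return any(addr in ipaddress.ip_network(p, strict=False) for p in prefixes)
```

Under this interpretation, a client at `198.51.100.7` would be allowed by the ACL `198.51.100.0/24`, while one at `203.0.113.9` would receive the `403` described above.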
## Notes

- Once a named certificate is used, the load balancer instance can't go back to a self-signed certificate. Continue to use the expired certificate or deploy a replacement load balancer.
- The maximum number of backend servers is configurable from 32 up to 1024.
- The application includes failsafes to prevent invalid configurations from being applied.
## Package Recommendation

You're free to use any package for deploying a load balancer, and the memory and disk requirements are very modest: it can work quite well with as little as 128MB of RAM and requires less than 500MB of disk space. With this in mind, it is important to ensure that your load balancer is allocated adequate CPU time. It is recommended that load balancer instances have a `cpu_cap` of at least `200`.

You can use CMON to monitor the performance of the load balancer and ensure that it has adequate CPU time. In particular, pay close attention to the `cpucap_above_seconds_total` and `cpucap_waiting_threads_count` values. If either of these is consistently above `0`, the load balancer needs a higher `cpu_cap`. Load balancer instances can be resized live, and additional CPU capacity will be available immediately.
Tenants should pick the smallest size necessary for their needs in order to avoid incurring undue cost.
## Example Load Balancer Deployment

This is an example using the `triton` command line client to deploy a load balancer:

```
$ triton create cloud-load-balancer lb1.small \
    --name my-clb-{{shortId}} \
    --network MNX-Triton-Public --network My-Fabric-Network \
    -m cloud.tritoncompute:loadbalancer=true \
    -m cloud.tritoncompute:portmap=http://80:my-backend.svc.my-login.cns.us-central-1.cns.mnx.io,https://443:my-backend.svc.my-login.cns.us-central-1.cns.mnx.io \
    -m cloud.tritoncompute:certificate_name=www.example.com \
    -t triton.cns.services=my-clb \
    --wait
```
Let's look at the options used.

- `triton create cloud-load-balancer`: This is the create command with the image name.
- `lb1.small`: This is an example package name. Check `triton packages` for available packages in the datacenter you are using.
- `--name my-clb-{{shortId}}`: This is the name of the instance. Here, we've supplemented the name with the magic string `{{shortId}}`, which will be replaced with the instance's short ID, e.g., `my-clb-15493821`.
- `--network MNX-Triton-Public --network My-Fabric-Network`: We've assigned an external network pool (`MNX-Triton-Public`), which will be used for the instance's external IP, and `My-Fabric-Network`. The instances we want to provide services for will be deployed to this network. Note: It is not required for front end and back end services to be on separate networks.
- `-m cloud.tritoncompute:loadbalancer=true`: This confirms that this instance is intended for load balancing.
- `-m cloud.tritoncompute:portmap=<...>`: This is the portmap service designation.
- `-m cloud.tritoncompute:certificate_name=<...>`: For this instance, we want to generate a certificate for `www.example.com`. This certificate will be generated using Let's Encrypt, and will be rotated approximately every 60-90 days. In order for this certificate to be generated properly, you must ensure that your DNS has the proper CNAMEs configured.
- `-t triton.cns.services=my-clb`: This creates a CNS service group for the load balancer. This isn't strictly necessary, but can be used to have a single DNS name that points to multiple load balancer instances. Load balancers in the same service group should have the same metadata configuration.
- `--wait`: This tells the `triton` command to block until the instance has finished provisioning.