# Triton Cloud Load Balancer

The Triton Cloud Load Balancer provides an easy-to-use, persistent load balancer. It can be used in conjunction with CNS to further reduce downtime and service interruptions.
## The Triton Cloud Load Balancer Image

Unlike Triton NFS volumes, load balancer instances are entirely user-serviceable and are distributed as a standard image. No direct interaction with a CLB instance is necessary: it can be configured and operated entirely via metadata. You can also SSH directly into it to examine how it operates, or to make changes as necessary.
## Load Balancer Configuration

The load balancer is configured using metadata keys.

| Key | Description |
|---|---|
| `cloud.tritoncompute:loadbalancer` | Must be present and set to `true`. |
| `cloud.tritoncompute:portmap` | Configures the mapping of listening ports to backends. |
| `cloud.tritoncompute:max_rs` | Maximum number of backend servers. Defaults to `32`. |
| `cloud.tritoncompute:certificate_name` | Subject Common Name for generating a TLS certificate. |
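Taken together, these keys describe a configuration like the following sketch. The specific values (the backend CNS name, the `max_rs` of `64`, and the certificate name) are illustrative placeholders, not required settings:

```python
# Sketch of a complete load balancer metadata configuration.
# All values below are illustrative placeholders.
metadata = {
    # Required: marks the instance as a load balancer.
    "cloud.tritoncompute:loadbalancer": "true",
    # Listening ports mapped to backends; entries are comma- or
    # space-separated service designations (see the next section).
    "cloud.tritoncompute:portmap":
        "http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80",
    # Optional: raise the backend server limit above the default of 32.
    "cloud.tritoncompute:max_rs": "64",
    # Optional: Subject CN for a Let's Encrypt certificate.
    "cloud.tritoncompute:certificate_name": "my-service.example.com",
}
```

Only the first key is mandatory; the others enable or tune specific behavior.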
## Portmap Service Designations

The `cloud.tritoncompute:portmap` metadata key is a list of service designations separated by commas or spaces. A service designation uses the following syntax:

    <type>://<listen port>:<backend name>[:<backend port>]
- `type` - Must be one of `http`, `https`, or `tcp`.
  - `http` - Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must not use SSL/TLS. An `X-Forwarded-For` header will be added to requests.
  - `https` - Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must use SSL/TLS. The backend certificate will not be verified. The front-end service will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided; otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `tcp` - Configures a Layer-4 proxy. The backend can use any port. If SSL/TLS is desired, the backend must configure its own certificate.
- `listen port` - Designates the front-end listening port.
- `backend name` - A DNS name that must be resolvable. This SHOULD be a CNS name (preferably a service name), but can be any fully qualified DNS domain name.
- `backend port` - Optional. Designates the port that backend servers will be listening on. If provided, the backend will be configured to use A record lookups. If not provided, the backend will be configured to use SRV record lookups.
Examples:

    http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80
    https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443
    tcp://636:my-backend.svc.my-login.us-west-1.cns.example.com
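The grammar above can be sketched as a small parser. This is an illustration of the syntax and of the A-versus-SRV lookup rule, not the load balancer's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceDesignation:
    type: str                    # "http", "https", or "tcp"
    listen_port: int             # front-end listening port
    backend_name: str            # resolvable DNS name, ideally a CNS name
    backend_port: Optional[int]  # None means SRV record lookup

def parse_designation(spec: str) -> ServiceDesignation:
    """Parse <type>://<listen port>:<backend name>[:<backend port>]."""
    scheme, rest = spec.split("://", 1)
    if scheme not in ("http", "https", "tcp"):
        raise ValueError(f"unknown type: {scheme}")
    parts = rest.split(":")
    if len(parts) == 2:
        # No backend port given: the backend uses SRV record lookups.
        listen, name = parts
        backend_port = None
    elif len(parts) == 3:
        # Backend port given: the backend uses A record lookups.
        listen, name, port = parts
        backend_port = int(port)
    else:
        raise ValueError(f"malformed designation: {spec}")
    return ServiceDesignation(scheme, int(listen), name, backend_port)
```

For example, the `tcp://636:...` designation above parses with no backend port, so that backend would be configured for SRV lookups.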
## Automatic Certificate Generation

If the `cloud.tritoncompute:certificate_name` metadata key is provided, the instance will attempt to generate a TLS certificate using Let's Encrypt. For the certificate to be generated properly, you must configure the correct CNAMEs in your external DNS.
The DNS name that you want to use must be a CNAME to either the CNS instance name or the CNS service name (if the `triton.cns.services` tag is applied to the load balancer instance).

You must also create a CNAME for the `_acme-challenge` record, which must resolve to the `_acme-challenge` record of the load balancer instance's own CNS instance or service name.
For example, to use the `certificate_name` of `my-service.example.com` with a load balancer tagged `triton.cns.services=my-clb`, the following CNAME records must be created in the `example.com` zone:

    my-service.example.com. IN CNAME my-clb.svc.my-login.us-central-1.triton.zone
    _acme-challenge.my-service.example.com. IN CNAME _acme-challenge.my-clb.svc.my-login.us-central-1.triton.zone
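The relationship between the certificate name and the two required records can be sketched as follows; the helper function is hypothetical and simply mirrors the example zone entries above:

```python
def required_cnames(certificate_name: str, cns_name: str) -> list[str]:
    """Return the two CNAME records Let's Encrypt validation needs.

    certificate_name is the name you want on the certificate;
    cns_name is the load balancer's CNS service (or instance) name,
    e.g. "my-clb.svc.my-login.us-central-1.triton.zone".
    """
    return [
        # Points the public name at the load balancer itself.
        f"{certificate_name}. IN CNAME {cns_name}",
        # Delegates the ACME DNS challenge to the CNS-managed record.
        f"_acme-challenge.{certificate_name}. IN CNAME "
        f"_acme-challenge.{cns_name}",
    ]
```

Both records are required; the second is what lets the load balancer answer the ACME DNS-01 challenge on your behalf.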
Note: The CNS suffix will vary by datacenter. Additionally, not all datacenters support using the login name; you may need to use your account UUID instead. You can use the `triton instance get ...` command to check the `dns_names` value for valid names.
If you are able to determine the proper CNAME records before provisioning, you can create them before creating the load balancer instance. If the CNAMEs are not set up correctly, an error will be logged to `/var/log/triton-dehydrated.log` and a certificate will not be generated.
If the value of the `cloud.tritoncompute:certificate_name` key changes, the load balancer will attempt to generate a new certificate with the provided name(s), which should be available within a few minutes. This can be used to dynamically change the Subject CN or add/remove SAN names (see below) as needed, or to fix an improperly configured `certificate_name`.
## Using Subject Alternate Names (SAN)

The `certificate_name` may be a list of names rather than a single name. The first name will be used as the Common Name in the certificate subject; additional names will be added as Subject Alternate Names. Cloud Load Balancer does not support serving different certificates for different services; all services must use the same certificate file.
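The CN/SAN split can be sketched like this. Note the assumption: the list separator here (commas and/or whitespace) is borrowed from how the portmap key treats lists, and the function itself is purely illustrative:

```python
import re

def split_certificate_name(value: str) -> tuple[str, list[str]]:
    """Split a certificate_name list into (common_name, san_names).

    Assumes names are separated by commas and/or whitespace; the
    first name becomes the Subject CN, the rest become SANs.
    """
    names = [n for n in re.split(r"[,\s]+", value.strip()) if n]
    if not names:
        raise ValueError("certificate_name is empty")
    return names[0], names[1:]
```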
## Package Recommendation

You're free to use any package for deploying a load balancer; the memory and disk requirements are very modest. It can work quite well with as little as 128MB of RAM and requires less than 500MB of disk space. It is important, however, to ensure that your load balancer is allocated adequate CPU time: load balancer instances should have a `cpu_cap` of at least `200`.
You can use CMON to monitor the performance of the load balancer and ensure that it has adequate CPU time. In particular, pay close attention to the `cpucap_above_seconds_total` and `cpucap_waiting_threads_count` values. If either of these is consistently above `0`, the load balancer needs a higher `cpu_cap`. Load balancer instances can be resized live, and additional CPU capacity will be available immediately.
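As a rough sketch of that check: given a series of scraped CMON samples, flag the instance when the gauges stay above zero. The "every sample above zero" threshold is an illustrative simplification, not part of CMON:

```python
def needs_higher_cpu_cap(samples: list[dict]) -> bool:
    """Decide whether cpu_cap should be raised, given CMON samples.

    Each sample is a dict holding the two metric values named above.
    "Consistently above 0" is interpreted here as: every sample shows
    at least one of the two metrics above zero.
    """
    return all(
        s["cpucap_above_seconds_total"] > 0
        or s["cpucap_waiting_threads_count"] > 0
        for s in samples
    )
```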
Tenants should pick the smallest size necessary for their needs in order to avoid incurring undue cost.
## Example Load Balancer Deployment

This is an example using the `triton` command line client to deploy a load balancer.
    $ triton create cloud-load-balancer lb1.small \
        --name my-clb-{{shortId}} \
        --network MNX-Triton-Public --network My-Fabric-Network \
        -m cloud.tritoncompute:loadbalancer=true \
        -m cloud.tritoncompute:portmap=http://80:my-backend.svc.my-login.cns.us-central-1.cns.mnx.io,https://443:my-backend.svc.my-login.cns.us-central-1.cns.mnx.io \
        -m cloud.tritoncompute:certificate_name=www.example.com \
        -t triton.cns.services=my-clb \
        --wait
Let's look at the options used.
triton create cloud-load-balancer
- This is the create command with the image name.lb1.small
- This is an example package name. Checktrtion packages
for available packages in the datacenter you are using.--name my-clb-{{shortId}}
- This is the name of the instance. Here, we've supplemented the name with the magic string{{shortId}}
. This string will be replaced with the instance short ID. E.g.,my-clb-15493821
.--network MNX-Triton-Public --network My-Fabric-Network
- We've assigned an external network pool (Triton-MNX-Public
) which will be used for the instance's external IP, andMy-Fabric-Network
. The instances we want to provide services for will be deployed to this network. Note: It is not required for front end and back end services to be on separate networks.-m cloud.tritoncompute:loadbalancer=true
- This designates confirmation that this instance is intended for load balancing.-m cloud.tritoncompute:portmap=<...>
- This is the portmap service designation.-m cloud.tritoncompute:certificate_name=<...>
- For this instance, we want to generate a certificate forwww.example.com
. This certificate will be generated using Let's Encrypt, and will be rotated aproximately every 60-90 days. In order for this certificate to be generated properly, you must ensure that your DNS has the proper CNAMEs configured.-t triton.cns.services=my-clb
- This creates a CNS service group for the load balancer. This isn't strictly necessary, but can be used to have a single DNS name that points to multiple load balancer instances. Load balancers in the same service group should have the same metadata configuration.--wait
- This indicates to thetriton
command that it should block until the instance has finished provisioning.