Frequently Asked Questions
Can I use it for load balancing or scaling?
Yes.
Every instance that shares the same tag is addressable using the same domain name. This makes discovery of those instances easy, and the domain name for those instances won’t change as you scale the number of instances up and down. See the global DNS example for more information.
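For example, querying a service record returns an address record listing every healthy instance registered for that service (a sketch with a hypothetical service name and addresses):

$ dig +short example-service.svc.<account_uuid>.us-sw-1.triton.zone
165.225.170.213
165.225.170.214
165.225.170.215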
While Triton CNS makes it easy to support scaling applications, it does not trigger the automatic scaling of applications.
Is it HA?
Triton CNS’s DNS systems are designed for high availability. In Triton Compute Service, the CNS replicas are deployed across multiple availability zones in the US and Europe to ensure name resolution will survive even the failure of multiple entire data centers. The operator guide will include details for private cloud users to achieve high availability as well.
Further, Triton CNS can be used as a global DNS solution in support of highly available applications. For more, see the details below about when containers appear in DNS, marking instances for maintenance, and caveats.
When do containers appear in DNS?
Triton Container Name Service is designed for maximum convenience and ease of use. To support that, nodes are considered “healthy” at an infrastructure level. Newly provisioned instances will appear in DNS when the Triton infrastructure marks them as “running” and will be removed from DNS if the infrastructure detects that they have stopped or are restarting.
The following conditions determine when an instance, whether a container or a VM, is reported in DNS. The rules apply strictly in this order, so rules lower in the list take priority:
- By default, instances are enabled for CNS
- Instances belonging to users without the triton_cns_enabled flag, and users who are not approved for provisioning, are disabled
- Instances that have the triton.cns.disable tag set to non-false are disabled
- Instances that are marked as destroyed are disabled
All instances that are disabled by these steps are listed neither in individual instance records nor in service records. Instances that are enabled by these steps are always listed in instance records.
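For example, an enabled, running instance can be resolved directly through its individual instance record (hypothetical instance name and address):

$ dig +short example-instance.inst.<account_uuid>.us-sw-1.triton.zone
165.225.170.213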
Additional logic applies to decide whether instances are listed in service records:
- Instances are by default listed in all services contained in their triton.cns.services tag (see the example after this list)
- Instances with the triton.cns.status metadata set to non-up are taken out of services
- Instances on a compute node that is not running, has not answered a heartbeat in 60 seconds, or has booted up in the last 120 seconds, are taken out of services
- Instances with a status other than running are taken out of services
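For example, to list an instance under two service names at once, set its triton.cns.services tag (a sketch using the triton CLI; the instance and service names are hypothetical):

$ triton instance tag set example-instance triton.cns.services=web,api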
What happens when a container stops?
Stopped containers will be removed from service address records within seconds, but the TTL for address records cached downstream may continue to drive traffic to the stopped instance. Some clients, including some web browsers, will automatically retry requests with all the IPs in a given address record, and these clients will not experience any downtime. If a container restarts on Triton, it will restart with the same IP address, further minimizing any downtime that may be experienced.
Consider an application with five instances behind a single address record, and poorly behaved clients that do not retry other IPs in that record. If one of those instances stops, roughly 20% of the traffic from those clients may continue to be directed to the stopped instance until the cached record’s TTL expires.
Is it possible to mark an instance for maintenance?
Yes.
Triton users can tell Triton CNS to take an instance out of all service records in DNS, making it possible to do planned changes with no loss of traffic. Removing containers from DNS before stopping them gives them time to complete any requests that are in progress or that arrive during the DNS TTL window.
Inside Docker containers and container-native Linux infrastructure containers, use the following command:
# /native/usr/sbin/mdata-put triton.cns.status down
In SmartOS infrastructure containers or hardware virtual machines:
# mdata-put triton.cns.status down
DNS can be turned back on for the instance either by setting triton.cns.status to up or by deleting the metadata key entirely.
# mdata-put triton.cns.status up
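To delete the key instead (assuming the same mdata tooling as above; prefix with /native/usr/sbin/ inside Docker containers):

# mdata-delete triton.cns.status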
You can also mark an instance for maintenance from outside the instance. See the Triton CLI instructions to turn CNS off for a specific instance, below.
Can I configure my own health checks?
Yes.
Code inside an instance can monitor the health of the services the instance provides and remove it from DNS if it detects those services are unhealthy; see marking instances for maintenance, above. However, Triton CNS cannot detect the failure of the health check itself, and instances remain marked healthy by default. Please see the additional caveats below.
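As a minimal sketch of such a health check, the following loop assumes a hypothetical local HTTP service exposing a /health endpoint on port 8080, plus the curl and mdata tools shown above (use the /native/usr/sbin/ paths inside Docker containers):

#!/bin/sh
# Sketch: update triton.cns.status based on a local health endpoint.
while true; do
  if curl -sf http://127.0.0.1:8080/health > /dev/null; then
    mdata-put triton.cns.status up   # healthy: keep the instance in service records
  else
    mdata-put triton.cns.status down # unhealthy: pull it from service records
  fi
  sleep 10
done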
Are there any caveats about using Triton CNS that I should be aware of?
Triton CNS will report instances in DNS that are in a “running” state and not explicitly marked for maintenance. Because instances are considered “healthy” by default, they may be discoverable in DNS even if the service(s) they provide are not operating correctly.
Further, Triton CNS is designed to support global DNS services with DNS TTL times that work well for discovery on the larger internet, but those TTLs are longer than is likely acceptable for connections between components within a data center.
Because of those limitations, DNS-based discovery is not ideal between components of an application, such as from the application to its database, or between a front-end proxy and the application. Alternative discovery mechanisms, such as those built on Consul, HAProxy, NGINX, or other similar tools, may be a better fit for supporting connections between application components inside a single data center.
Can I use my own domain name with Triton CNS?
Yes.
A common usage is to point a CNAME to Triton CNS in your normal DNS provider (see example), but it is also possible to add a DNAME entry that will map all names within the hierarchy of the DNAME to the specified Triton CNS UUID and data center. In other words, a CNAME is for individual host records while a DNAME is for an entire domain name sub-tree.
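A CNAME pointing a hostname at a single instance record might look like this (hypothetical names):

www.example.net. IN CNAME example-instance.inst.<account uuid>.<data center name>.triton.zone.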
For example, add the following DNAME records to the DNS zone example.net:
inst.us-sw-1.triton IN DNAME inst.<account uuid>.<data center name>.triton.zone.
svc.us-sw-1.triton IN DNAME svc.<account uuid>.<data center name>.triton.zone.
Note: The final dot is required for adding CNAME and DNAME records to your own DNS domain.
Then DNS queries for <name>.inst.us-sw-1.triton.example.net will return the address records associated with <name>.inst.<account_uuid>.us-sw-1.triton.zone.
$ dig +noall +nocomments +question +answer example-instance.inst.us-sw-1.triton.example.net
;example-instance.inst.us-sw-1.triton.example.net. IN A
inst.us-sw-1.triton.example.net. 3333 IN DNAME inst.<account_uuid>.us-sw-1.triton.zone.
example-instance.inst.us-sw-1.triton.example.net. 3333 IN CNAME example-instance.inst.<account_uuid>.us-sw-1.triton.zone.
example-instance.inst.<account_uuid>.us-sw-1.triton.zone. 23 IN A 165.225.170.213
Watch a screencast to learn how to add a custom domain to your application.
How do I set my resolvers to use Triton CNS?
Triton CNS is a public nameserver, and the DNS names it generates are resolvable on the public internet by any working nameserver that performs recursive resolution. There is no need to set specific resolv.conf entries or take other steps to take advantage of Triton CNS' name servers.
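For example, any public recursive resolver should be able to answer queries for CNS names (hypothetical instance name; 8.8.8.8 is just one public resolver):

$ dig @8.8.8.8 +short example-instance.inst.<account_uuid>.us-sw-1.triton.zone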
What types of compute instances does it work with?
Triton CNS works with all three types of compute infrastructure Triton supports:
- Hardware virtual machines, which can run a wide variety of applications, including those that require a separate kernel.
- Infrastructure containers, which work like bare metal virtual machines, running a Linux distro of choice or SmartOS.
- Docker containers running securely on bare metal, running the Docker image of your choice.
Learn more about MNX.io's compute offerings, about containers on Triton, and about bare metal container security.
Can I use it with existing instances?
Yes.
Simply add the triton.cns.services tag using CloudAPI in a data center enabled for Triton CNS, and Triton will generate DNS records for the compute instance.
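For example, with the triton CLI you can set the tag and then inspect the DNS names CloudAPI reports for the instance (instance name hypothetical; assumes the json CLI tool for filtering output):

$ triton instance tag set my-existing-instance triton.cns.services=web
$ triton instance get my-existing-instance | json dns_names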
How do I turn it on or off?
In the Triton Compute Service portal, you can update the account settings to turn Triton CNS on and off at will.
Users can also turn Triton CNS on and off using raw CloudAPI interactions or via the Triton CLI tool. See the section below for complete details about controlling and managing Triton CNS using the Triton CLI tool.
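For example, with the triton CLI the account-level flag described above can be toggled directly (a sketch; check triton account update --help for the exact usage in your version):

$ triton account update triton_cns_enabled=true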
Can I turn it off?
Yes.
Triton CNS is an optional feature. Users may turn it on and off as they choose.
You can also disable CNS just for individual instances in your account, even if you have enabled it for all others, by setting the triton.cns.disable tag on that instance (also see marking instances for maintenance, above). This tag overrides all other rules in the CNS engine, guaranteeing that the instance will not be listed in CNS for any reason.
Does it interfere with DNS I’ve implemented in my application?
No.
Triton CNS is designed so that it will not interfere with existing or future DNS solutions you may choose to use.
- Triton CNS uses a very specific FQDN that is unlikely to interfere or overlap with other DNS solutions.
- It’s an optional feature that can be turned on and off as desired.
Does Triton CNS serve PTR records or reverse DNS?
Not yet.
Is Triton CNS IPv6 compatible?
Yes. Triton CNS is one of a number of Triton components that are IPv6-ready. Triton CNS can serve both A and AAAA records (including AAAA records for statically assigned IPv6 addresses). We are committed to full IPv6 compatibility for all Triton components. In true open source fashion, the work plan for that is published on GitHub, and we welcome pull requests to speed up that process.
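For example, querying a AAAA record works the same way as querying an A record (hypothetical instance name):

$ dig +short AAAA example-instance.inst.<account_uuid>.us-sw-1.triton.zone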
Does it work with Triton in my data center?
Yes.
Triton CNS is built for easy use in private data centers. Operators can select their preferred base domain names for internal and external use. CNS is also built to integrate with your existing DNS infrastructure for Private Cloud use, including full support for secondary nameservers running ISC BIND and other 3rd-party software.
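As an illustration only (the zone name and server address here are hypothetical; consult the operator guide for the actual zone names and transfer sources), a BIND secondary for a CNS-served zone might be declared like this:

zone "cns.example.com" {
    type slave;
    masters { 192.0.2.1; };
};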
Existing Triton DataCenter installations can be upgraded to support Triton CNS. Please contact your support representative for details and requirements for this upgrade. Full documentation for deploying and managing Triton CNS in private data centers will be available with the general availability launch.
Does it work in Triton Compute Service?
Yes.
Is it open source?
Yes.
Triton CNS is an integral part of Triton DataCenter (formerly called SmartDataCenter). Both Triton CNS and Triton are MPLv2-licensed projects available on GitHub.
Triton CNS was discussed and developed with participation and support from the Triton open source community on the sdc-discuss mailing list, and it was the launch project for our formal RFD process: RFD1.
What if I need a solution with more features than I can get with Triton CNS?
Triton CNS is designed to fit the majority of use cases with great simplicity at no added cost, but it’s not perfect for every use case. For those situations, please consider load balancing technologies such as Nginx, Docker, or Node.js, or 3rd-party applications such as Pulse Secure.
Triton CNS compared to ELB, Elastic IPs, and Brocade VTM (Steelapp)
| | Triton CNS | ELB | Elastic IP | Brocade Virtual Traffic Manager (Steelapp) |
|---|---|---|---|---|
| Automatic setup and configuration | Yes | No | No | No |
| Offers consistent addressing of changing infrastructure | Yes | Yes | Yes | Yes |
| Makes load balancing easy | Yes | Yes | No | Yes |
| Makes scaling easy | Yes | Yes | No | Yes |
| Triggers autoscaling | No | No | No | Yes |
| Active/passive clustering on the same IP for maximum availability | No | No | No | Yes |
| Supports SSL termination | N/A | Yes | N/A | Yes |
| Responds quickly to infrastructure changes | Yes, within seconds | Only if part of an autoscaling group | No, changes take minutes to propagate (5-10 is common) | Yes |
| Added latency layers | None | Proxy + NAT | NAT | Proxy |
| Integrated with Triton for private clouds | Yes | No | No | No |
| Open source | Yes (MPLv2) | No | No | No |
| Price | Free | Not free | Not free | Not free |
Triton CNS compared to Docker link and Docker’s embedded DNS
Triton CNS can be used in a similar way to --link in Docker, but can be used to connect any type of Triton infrastructure, including Docker containers, infrastructure containers, and VMs. Additionally, some may prefer Triton CNS’ ability to set service names separately from container names, and appreciate Triton CNS as a universal DNS that connects all instances in their account.
An advantage of Triton CNS over --link is that DNS lookups for a service will always return the current set of instances registered for that service, even if those instances have changed over time. Docker's --link feature, however, does not respond to changes in instances, making it difficult to scale services or replace instances over the life of an application.
The implementation of --link in Docker on Triton differs from the behavior of Docker elsewhere in that it has always supported connections between containers running on different compute nodes. That feature will continue to work without any change in behavior or implementation on Triton. It should be noted that Docker Inc. has recently changed the implementation and behavior of --link in the Docker daemon, but those changes do not affect the implementation or behavior of --link on Triton DataCenter. However, --link in Docker (on Triton or elsewhere) cannot link Docker containers to infrastructure containers or VMs, limiting its usefulness in applications that are not fully Dockerized.
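For example, rather than linking containers, an application can simply be pointed at a CNS service name (hypothetical image, service, and variable names):

$ docker run -d -e DB_HOST=db.svc.<account_uuid>.us-sw-1.triton.zone example/app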
Triton CNS is completely optional, however, and can be turned off if it is undesirable for any reason.
How can I find my account UUID?
See the usage instructions with the Triton CLI tool to find the account UUID.