Triton CLI tool
The Triton command line tool (`triton`) uses CloudAPI to manage infrastructure in Triton data centers. Many of the tasks that you can perform through the portal are also possible with the Triton CLI, including:
- Provision compute instances, including infrastructure containers and hardware virtual machines
- Manage compute instances, including Docker containers, infrastructure containers and hardware virtual machines
- Provision and manage networks
- Manage your account
On this page, you will learn how to install `triton`. You can learn more about CloudAPI methods and resources in our additional reference documentation.
Need a visual reference? Watch the screencast which covers how to install the Triton CLI tool and manage infrastructure in Triton data centers.
Installation
To install Triton CLI and CloudAPI tools, you must first install Node.js. You can find the latest version of Node.js for your operating system and architecture at nodejs.org.
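To confirm that Node.js is installed and on your `PATH`, you can check its version:
$ node --version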
You can install `triton` using the Node Package Manager (`npm`). In the following example, the `npm install` command passes the global flag (`-g`), which makes npm modules accessible to all users. This flag is mandatory when installing `triton` on Windows.
$ sudo npm install -g triton
. . .
/usr/local/bin/triton -> /usr/local/lib/node_modules/triton/bin/triton
triton@4.11.0 /usr/local/lib/node_modules/triton
├── bigspinner@3.1.0
├── assert-plus@0.2.0
├── extsprintf@1.0.2
├── wordwrap@1.0.0
├── strsplit@1.0.0
├── node-uuid@1.4.3
├── read@1.0.7 (mute-stream@0.0.6)
├── semver@5.1.0
├── vasync@1.6.3
├── once@1.3.2 (wrappy@1.0.2)
├── backoff@2.4.1 (precond@0.2.3)
├── verror@1.6.0 (extsprintf@1.2.0)
├── which@1.2.4 (isexe@1.1.2, is-absolute@0.1.7)
├── cmdln@3.5.4 (extsprintf@1.3.0, dashdash@1.13.1)
├── lomstream@1.1.0 (assert-plus@0.1.5, extsprintf@1.3.0, vstream@0.1.0)
├── mkdirp@0.5.1 (minimist@0.0.8)
├── sshpk@1.7.4 (ecc-jsbn@0.1.1, jsbn@0.1.0, asn1@0.2.3, jodid25519@1.0.2, dashdash@1.13.1, tweetnacl@0.14.3)
├── rimraf@2.4.4 (glob@5.0.15)
├── tabula@1.7.0 (assert-plus@0.1.5, dashdash@1.13.1, lstream@0.0.4)
├── smartdc-auth@2.3.0 (assert-plus@0.1.2, once@1.3.0, clone@0.1.5, dashdash@1.10.1, sshpk@1.7.1, sshpk-agent@1.2.0, vasync@1.4.3, http-signature@1.1.1)
├── restify-errors@3.0.0 (assert-plus@0.1.5, lodash@3.10.1)
├── bunyan@1.5.1 (safe-json-stringify@1.0.3, mv@2.1.1, dtrace-provider@0.6.0)
└── restify-clients@1.1.0 (assert-plus@0.1.5, tunnel-agent@0.4.3, keep-alive-agent@0.0.1, lru-cache@2.7.3, mime@1.3.4, lodash@3.10.1, restify-errors@4.2.3, dtrace-provider@0.6.0)
NOTE: on some platforms and for some installations of Node.js, you may receive an error when using `sudo`. Remove it from the command to install `triton`:
$ npm install -g triton
Environment variables
We recommend setting up environment variables to populate an initial environment-based profile. These environment variables are useful for interacting with CloudAPI as well as other tools such as Packer and Terraform.
On Windows, this step is mandatory.
Add environment variables on macOS
Environment variables live in your bash profile, `.bash_profile`. On macOS, this file should be in your home directory.
From the terminal, edit your bash profile:
$ vi .bash_profile
Add the following content to your profile, modifying it to include your Triton username and the name of your SSH key if it is not `id_rsa.pub`:
export TRITON_PROFILE="env"
export TRITON_URL="https://us-central-1.api.mnx.io"
export TRITON_ACCOUNT="<TRITON_USERNAME>"
unset TRITON_USER
export TRITON_KEY_ID="$(ssh-keygen -l -f $HOME/.ssh/id_rsa.pub | awk '{print $2}')"
unset TRITON_TESTING
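After saving, load the profile into your current shell and confirm that the variables are set; for example, `TRITON_KEY_ID` should print your key's fingerprint:
$ source ~/.bash_profile
$ echo $TRITON_KEY_ID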
Previous versions of `triton` required environment variables that began with `SDC_*`, e.g. `SDC_URL` and `SDC_ACCOUNT`. As of January 2018, the SDC environment variables are still supported by `triton`.
If you've installed `smartdc`, you must set up `SDC_*` environment variables.
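For reference, the `SDC_*` equivalents of the settings above look like this:
export SDC_URL="https://us-central-1.api.mnx.io"
export SDC_ACCOUNT="<TRITON_USERNAME>"
export SDC_KEY_ID="$(ssh-keygen -l -f $HOME/.ssh/id_rsa.pub | awk '{print $2}')"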
Configuring Triton profiles
Triton CLI uses "profiles" to store access information. Profiles contain the data center CloudAPI URL, your login name, and SSH key fingerprint. You can create a profile for each data center or profiles for different users. Profiles make it easy to connect to different data centers, or connect to the same data center as different users.
The `triton profile create` command prompts you to answer a series of questions to configure your profile. The following example shows the steps for Triton user `jill`.
$ triton profile create
A profile name. A short string to identify a CloudAPI endpoint to the `triton` CLI.
name: us-central-1
The CloudAPI endpoint URL.
url: https://us-central-1.api.mnx.io
Your account login name.
account: jill
The fingerprint of the SSH key you have registered for your account. You may enter a local path to a public or private key to have the fingerprint calculated for you.
keyId: ~/.ssh/id_rsa
Fingerprint: 2e:c9:f9:89:ec:78:04:5d:ff:fd:74:88:f3:a5:18:a5
Saved profile "us-central-1.api.mnx.io
Select a CloudAPI endpoint URL from any of our global data centers, or use a Triton-powered data center of your own (remember: it's open source).
To test the installation and configuration, let's use `triton info`:
$ triton info
login: jill
name: Jill Example
email: jill@example.com
url: https://us-central-1.api.mnx.io
totalDisk: 65.8 GiB
totalMemory: 2.0 GiB
instances: 2
running: 2
The `triton info` output above shows that Jill's account already has two instances running.
Using profiles
You can view all configured profiles with the `triton profiles` command:
$ triton profiles
NAME CURR ACCOUNT USER URL
env jill - https://us-central-1.api.mnx.io
us-central-1 * jill - https://us-central-1.api.mnx.io
Next, let's make a profile for each data center. To do this, we'll use `triton` commands to copy an existing profile, substituting each data center's name and URL. Copy the snippet below to add the new profiles (in this case, based on the profile named `env`):
triton datacenters | egrep -v NAME | while read -r i; do
    name=$(echo "$i" | awk '{print $1}')
    url=$(echo "$i" | awk '{print $2}')
    triton profile get -j env | sed -e "s/env/$name/" -e "s#http[^\"]*#$url#" | triton profile create -f - -y
done
Run `triton profiles` again to check that it worked. We should have a new profile for each data center listed in `triton datacenters`:
$ triton profiles
NAME CURR ACCOUNT USER URL
env jill - https://us-central-1.api.mnx.io
eu-ams-1 jill - https://eu-ams-1.api.mnx.io
us-east-1 jill - https://us-east-1.api.mnx.io
us-east-2 jill - https://us-east-2.api.mnx.io
us-east-3 jill - https://us-east-3.api.mnx.io
us-central-1 * jill - https://us-central-1.api.mnx.io
us-west-1 jill - https://us-west-1.api.mnx.io
You can change the default profile with the `triton profile set` command:
$ triton profile set us-east-1
Set "us-east-1" as current profile
Completions
You can also configure bash completions with this command:
# macOS
$ triton completion > /usr/local/etc/bash_completion.d/triton
# Linux
$ triton completion > /etc/bash_completion.d/triton
# Windows bash shell
$ triton completion >> ~/.bash_completion
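Completions take effect in new shells. To load them into your current session, source the generated file (shown here with the macOS path):
$ source /usr/local/etc/bash_completion.d/triton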
Quick start: create an instance
With `triton` installed and configured, we can jump right into provisioning instances. Here's an example of provisioning an infrastructure container running Ubuntu. Think of infrastructure containers like virtual machines, only faster and more efficient.
Let's run `triton instance create` and we'll talk about the pieces after:
$ triton instance create -w --name=server-1 ubuntu-14.04 g4-highcpu-1G
Creating instance server-1 (e9314cd2-e727-4622-ad5b-e6a6cac047d4, ubuntu-14.04@20160114.5, g4-highcpu-1G)
Created instance server-1 (e9314cd2-e727-4622-ad5b-e6a6cac047d4) in 22s
Now that we have an instance, we can run `triton ssh` to connect to it. This is an awesome addition to our tools because it means that we don't need to copy SSH keys or even look up the IP address of the instance.
$ triton ssh server-1
Welcome to Ubuntu 14.04 (GNU/Linux 3.19.0 x86_64)
* Documentation: https://help.ubuntu.com/
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
__ . .
_| |_ | .-. . . .-. :--. |-
|_ _| ;| || |(.-' | | |
|__| `--' `-' `;-| `-' ' ' `-'
/ ; Instance (Ubuntu 14.04 20151105)
`-' https://docs.tritondatacenter.com/images/container-native-linux
root@8367b339-799b-cff5-a662-a211e1927797:~#
Instance creation options and details
In our quick start example, we ran `triton instance create -w --name=server-1 ubuntu-14.04 g4-highcpu-1G`. That command has four parameters:
- We gave our instance a name using `--name=server-1`
- We used `-w` to wait for the instance to be created
- We used `ubuntu-14.04` as our image
- We set `g4-highcpu-1G` as our package
Let's look at each of those in detail to see how you can set the options that will work best for your needs.
Specifying the instance name
Names for instances can be up to 189 characters and include any alphanumeric character plus the `_`, `-`, and `.` characters.
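For example, both of these (hypothetical) names are valid:
$ triton instance create -w --name=web-01 ubuntu-14.04 g4-highcpu-1G
$ triton instance create -w --name=db_primary.staging ubuntu-14.04 g4-highcpu-1G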
Selecting an image
Finding our Ubuntu image is pretty easy. We use `triton images` to list the images and add `name=~ubuntu` to do a substring search for Ubuntu. The list is sorted by publish date, so usually we'll pick the most recent. Today we'll choose 14.04 because it has wider support.
$ triton images name=~ubuntu type=lx-dataset
SHORTID NAME VERSION FLAGS OS TYPE PUBDATE
...
c8d68a9e ubuntu-14.04 20150819 P linux lx-dataset 2015-08-19
52be84d0 ubuntu-14.04 20151005 P linux lx-dataset 2015-10-05
ffe82a0a ubuntu-15.04 20151105 P linux lx-dataset 2015-11-05
Note: Want to build a custom application using our infrastructure containers? Learn how to create a custom infrastructure image.
Selecting a package
There are four types of packages available for containers: compute optimized (`g4-highcpu-<size>`), general purpose (`g4-general-<size>`), memory optimized (`g4-highram-<size>`), and storage optimized (`g4-fastdisk-<size>` and `g4-bigdisk-<size>`).
The package types for HVMs have a similar name structure: compute optimized (`k4-highcpu-<kvm|bhyve>-<size>`), general purpose (`k4-general-<kvm|bhyve>-<size>`), memory optimized (`k4-highram-<kvm|bhyve>-<size>`), and storage optimized (`k4-fastdisk-<kvm|bhyve>-<size>` and `k4-bigdisk-<kvm|bhyve>-<size>`).
We'll use `triton packages` to search for a package with 1 gigabyte of RAM. We'll pick the `g4-highcpu-1G`.
$ triton packages memory=1024
SHORTID NAME MEMORY SWAP DISK VCPUS
14af2214 g4-highcpu-1G 1G 4G 25G -
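To compare several packages at once, you can trim the listing to just the columns you care about with the same `-o` flag the other list commands accept:
$ triton packages -o name,memory,disk,vcpus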
At this time, there is no way to use `triton` to fetch the pricing details for the different packages. To find out those costs, you can visit our public pricing page.
Watch to learn
This screencast covers how to install the Triton CLI tool and use CloudAPI to manage infrastructure in Triton data centers.
If you skipped ahead to the video, you can go back and review the installation process for step-by-step instructions.
Bootstrapping an instance with a script
Our quick start example didn't include one of the most useful options for automating infrastructure on Triton: specifying a script for containers to run at startup.
We'll show how to use `triton` to run the examples from Casey's blog post on setting up Couchbase in infrastructure containers. We'll only show what the equivalent `triton` commands look like, skipping over the details; you can read the original post to learn more.
The command below sets up a 16GB CentOS infrastructure container and installs Couchbase. The `--script` flag points to a file that installs Couchbase, and the `triton ssh` command runs `cat /root/couchbase.txt` to show the address of the Couchbase dashboard.
curl -sL -o couchbase-install-triton-centos.bash https://raw.githubusercontent.com/misterbisson/couchbase-benchmark/master/bin/install-triton-centos.bash
triton instance create \
--name=couch-bench-1 \
$(triton images name=~centos-6 type=lx-dataset -Ho id | tail -1) \
'Large 16GB' \
--wait \
--script=./couchbase-install-triton-centos.bash
triton ssh couch-bench-1 'cat /root/couchbase.txt'
Infrastructure management isn't just about creating instances. Triton CLI offers some of its biggest improvements in this space. Below are some examples of `triton` commands.
List instances
$ triton instances
SHORTID NAME IMG STATE PRIMARYIP AGO
1fdc4b78 couch-bench-1 8a1dbc62 running 165.225.136.140 3m
8367b039 server-1 ubuntu-14.04@20151005 running 165.225.122.69 3m
Wait for tasks
By default, the `triton` tool does not wait for tasks to finish. This is great because it means that your commands return control back to you very quickly. However, sometimes you'll need to wait for a task to complete before you start the next one. When this happens, you can wait by using either the `--wait` or `-w` flag, or the `triton instance wait` command. In the example above we used `--wait` so that the instance would be ready by the time the `triton ssh` command ran.
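For example, here is a sketch of that pattern with a hypothetical instance named server-2:
$ triton instance create --name=server-2 ubuntu-14.04 g4-highcpu-1G   # returns immediately
$ triton instance wait server-2                                       # blocks until the instance is ready
$ triton ssh server-2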
Show instance details
Use `triton instance get -j` to view your instance's details as a JSON blob. To parse fields out of the blob, we recommend using `json`, although there are many other great tools out there.
$ triton instance get -j couch-bench-1
{
"id": "1fdc4b78-62ec-cb97-d7ff-f99feb8b3d2a",
"name": "couch-bench-1",
"type": "smartmachine",
"state": "running",
"image": "82cf0a0a-6afc-11e5-8f79-273b6aea6443",
"ips": [
"165.225.136.140",
"10.112.2.230"
],
"memory": 16384,
"disk": 409600,
"metadata": {
"user-script": "#!/bin/bash\n...\n\n",
"root_authorized_keys": "ssh-rsa ..."
},
"tags": {},
"created": "2015-12-18T03:44:42.314Z",
"updated": "2015-12-18T03:45:10.000Z",
"networks": [
"65ae3604-7c5c-4255-9c9f-6248e5d78900",
"56f0fd52-4df1-49bd-af0c-81c717ea8bce"
],
"dataset": "82cf0a0a-6afc-11e5-8f79-273b6aea6443",
"primaryIp": "165.225.136.140",
"firewall_enabled": false,
"compute_node": "44454c4c-4400-1059-804e-b5c04f383432",
"package": "g4-general-16G"
}
Above, you can see that the `user-script` that we ran is part of the metadata.
You can pull out individual values by piping the output to `json KEYNAME`. For example, you could get the IP address of an instance like this:
$ triton instance get -j couch-bench-1 | json primaryIp
165.225.136.140
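The `json` tool can also reach into nested structures. For example, `ips.0` selects the first entry in the `ips` array:
$ triton instance get -j couch-bench-1 | json ips.0
165.225.136.140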
Clean up
Let's wrap up by cleaning up these instances. We'll delete them using the `triton instance delete` command:
$ triton instance delete server-1 couch-bench-1
Delete (async) instance server-1 (8367b039-759b-c6f5-a6c2-a210e1926798)
Delete (async) instance couch-bench-1 (1fdc4b78-62ec-cb97-d7ff-f99feb8b3d2a)
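Deletion is asynchronous by default; as with creation, you can add `-w` to make the command block until the instances are actually gone:
$ triton instance delete -w server-1 couch-bench-1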
For something a bit more dangerous, you can delete all your instances using this command:
$ triton instance delete $(triton instances -Ho shortid)
Be careful: the `triton instance delete` command above removes all of your instances, regardless of whether they are running or stopped.
If you are familiar with using `docker`, note that this is equivalent to using `docker rm -f $(docker ps -aq)` to force the deletion of all your containers. If you want to remove all of your instances, using `triton instance delete` might be faster since it deletes the instances in parallel.
CloudAPI and Triton Elastic Docker Host
In addition to CloudAPI and the Triton CLI tool, you can also create and manage bare metal Docker containers on Triton using the Triton Elastic Docker Host and Triton-Docker CLI tools. The two APIs work in parallel, though the Triton-Docker CLI can only create and manage bare metal Docker containers on Triton. CloudAPI and the Triton CLI tool can manage almost every aspect of Docker containers with the exception of provisioning bare metal Docker containers on Triton.
See our comparison table for full details.
What are my next steps?
Once you've installed the CloudAPI tools and `triton`:
- Read the CloudAPI reference documentation
- If you haven't already, install Triton-Docker CLI
- Create your first infrastructure container or hardware virtual machine