Installing Triton DataCenter on Equinix Metal
Triton offers a streamlined, fully automated installation experience using Equinix Metal.
Requirements
You will need:
- A copy of the automation script
- An Equinix Metal account
- The `packet-cli` tool installed and configured with your authentication credentials.
Installing Triton
Create a Project
The first step in creating a Triton Cloud on Equinix Metal is to define a project. Your project will contain all of the resources allocated to your Triton Cloud.
To create a project, you will need to decide on a name.
triton-eqm-create.sh project -n My-Triton-Project
You may wish to extend your Triton Cloud to multiple datacenters. If you do, you can choose to keep all datacenters in the same project or use a separate project for each datacenter.
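For example, if you plan to deploy in two facilities and prefer a project per datacenter, you might create one project per location with the same script (the project names below are illustrative):

```
# Illustrative only: one project per planned datacenter
triton-eqm-create.sh project -n My-Triton-SV15
triton-eqm-create.sh project -n My-Triton-DC13
```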
Create the headnode
The next step is to create the Triton headnode server. To create the headnode you will need the ID of the project you created, and to choose a facility (datacenter location).
To list projects with packet:
packet project get
Ultimately, the facility will need to meet your requirements; it's important to consider the geographic region and the available capacity.
The following commands list the available facilities and the available capacity in each facility.
packet facility get
packet capacity get
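If you already have a facility in mind, you can narrow the output with ordinary shell tools; this assumes the default plain-text output of `packet-cli`:

```
# Show only the capacity lines that mention the sv15 facility
# (sv15 is an example; adjust to the facility you are considering)
packet capacity get | grep -i sv15
```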
Most information needed for the installation is obtained directly from the Equinix Metal metadata service. However, the metadata doesn't provide all of the information that will be used by Triton. You can provide a supplemental JSON file with additional data. The format of this file is the same as the `answers.json` used by Triton, and it supports a subset of keys.
Supplemental answers are not required, and if not included, suitable defaults
will be chosen. The following table shows the supported keys and the default
if the key is not provided.
Key | Default |
---|---|
`company_name` | Empty string |
`datacenter_location` | Empty string |
`region_name` | Leading alpha characters of `datacenter_name`, which will be the Equinix Metal facility (e.g., `iad`) |
`dns_resolver1` | `8.8.8.8` |
`dns_resolver2` | `8.8.4.4` |
`dns_domain` | `example.com` |
`mail_to` | `root@localhost` |
`mail_from` | `support@` |
`ntp_host` | `0.smartos.pool.ntp.org` |
`root_password` | Randomly generated (you will need to use your Equinix Metal SSH keys to log in) |
`admin_password` | Randomly generated. You can find this password in `/usbkey/config` |
`update_channel` | `release` |
Note: If you do not supply a `root_password` or `admin_password`, one will be randomly generated. In this case, you will only be able to ssh to the headnode using the SSH keys in your Equinix Metal account. If you provide neither SSH keys nor a `root_password`, you will not be able to log into the headnode. If you do not supply the `admin_password`, you will need to change it before you can log into AdminUI. Use the `sdc-useradm` CLI tool on the headnode to do this.
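If you prefer to set your own `root_password` and `admin_password` in the supplemental file, any password generator will do; for example:

```
# Generate a random password to paste into the supplemental JSON file
openssl rand -base64 18
```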
Here's an example JSON file that provides the supplemental data used by the installer.
{
  "company_name": "Weyland-Yutani",
  "region_name": "us-west",
  "datacenter_location": "Silicon Valley, California",
  "dns_domain": "example.com",
  "mail_to": "root@example.com",
  "mail_from": "support@example.com",
  "root_password": "change this to something else",
  "admin_password": "seriously this is not a good password"
}
Note: Keys not provided in this file will use the defaults listed in the table above.
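Before running the installer, it's worth confirming that the file parses as valid JSON; for example, assuming python3 is available on your workstation:

```
# Sanity-check the supplemental file; prints the parsed JSON or an error
python3 -m json.tool supplemental.json
```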
To create the headnode, use the following example:
triton-eqm-create.sh headnode -p bec6b78a-a91f-11eb-b5ef-cfcd9128c9eb \
-f sv15 -a supplemental.json
The script will do the following:
- Ensure the necessary VLANs have been created
- Provision public IP space
- Provision a new server
  - The server will be configured with a `custom_pxe` URL and configured to boot PXE once. After the initial setup, the headnode boots from the local zpool.
- Configure the server networking
Once the server is provisioned, the script will emit the location of the "SOS" console. This allows you to connect to the server's serial console to watch progress or troubleshoot if necessary.
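The exact address is printed by the script, but an SOS connection generally looks like the following (the device UUID and facility code here are placeholders):

```
# Attach to the serial console via Equinix Metal's SOS (serial over SSH);
# use the address emitted by triton-eqm-create.sh
ssh <device-uuid>@sos.sv15.platformequinix.com
```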
Post-install Tasks
After server installation is complete you can ssh to the headnode. Use the `packet-cli` or Equinix Metal web interface to determine your headnode's IP address.
ssh -l root 198.51.100.34
See the Triton On-Premises documentation for additional post-install setup tasks.
Common tasks include:
- Configuring external NICs
- Setting up optional components
  - CloudAPI
  - Container Name Service (CNS)
  - Container Monitoring Service (CMON)
  - Virtual Networking
- Defining packages
- User setup
- Creating an external network pool
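As a sketch of what a couple of these steps look like, the following `sdcadm` commands (run on the headnode) cover two of the common tasks; see the Triton On-Premises documentation for the full procedures:

```
# Add external NICs to the adminui and imgapi core zones
sdcadm post-setup common-external-nics

# Deploy CloudAPI, the public API endpoint for the cloud
sdcadm post-setup cloudapi
```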
Create Additional Compute Nodes
Additional compute capacity can be added at any time after the headnode is installed and expanded on demand as needed; the process takes about 20 minutes.
The following command shows an example of creating a compute node.
triton-eqm-create.sh computenode -p <project_id> -f sv15
This will do the following:
- Ensure the necessary VLANs have been created
- Provision public IP space
- Provision a new server
  - The server will be configured with a `custom_pxe` URL and configured to boot PXE every time. After server setup you can use the `piadm` command to enable booting from the local zpool and disable `custom_pxe` booting, as shown in the sketch after this list.
- Configure the server networking
- Emit commands to configure networking for this server in Triton.
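A minimal sketch of that `piadm` step, run on the compute node once it has been set up (and assuming the default pool name `zones`); disabling `custom_pxe` itself is done on the Equinix Metal side:

```
# On the compute node: mark the local pool bootable so the server no
# longer depends on PXE (assumes the default pool name "zones")
piadm bootable -e zones

# Confirm which pools are now bootable
piadm bootable
```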
After the server has fully booted you will need to run `sdc-server setup`. When the server has finished setup, copy and paste the networking commands emitted by the `triton-eqm-create.sh` command.
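On the headnode, identifying and setting up the new server looks roughly like this (the UUID is a placeholder taken from the `sdc-server list` output):

```
# List the servers Triton knows about; the new compute node appears
# once it has booted and reported in
sdc-server list

# Set up the new server using its UUID from the listing above
sdc-server setup <server-uuid>
```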
To verify that networking has been properly configured, reboot the server a final time and confirm that everything comes up as expected.
If you have an external network pool, add the server's external network to the pool, and the server is ready for use.
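One way to add the network to an existing pool from the headnode is with the `sdc-napi` helper. This is only a sketch, under the assumption that the pool's full network list is replaced on update, so include the existing network UUIDs along with the new one:

```
# Sketch: add the new server's external network to an existing pool.
# Both UUIDs are placeholders; list networks with `sdc-napi /networks`
# and pools with `sdc-napi /network_pools`.
sdc-napi /network_pools/<pool-uuid> -X PUT -d '{
  "networks": ["<existing-external-network-uuid>", "<new-external-network-uuid>"]
}'
```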