Triton Compute Glossary
A term used by some infrastructure providers to denote different logical data centers within a region.
Bhyve is a commonly used x86 hardware virtual machine environment. On Triton, Bhyve virtual machines are instantiated inside a process running on SmartOS.
Extends the SmartOS zone isolation environment to behave differently from the native zone, perhaps exposing a different version of the native kernel or emulating an entirely different kernel. See also lx-branded zone.
Although there are many approaches to OS virtualization, those that provide security and isolation at least as good as hardware virtualization can be called container hypervisors (sometimes called "type 0" or "type C" hypervisors).
The hypervisor controls the host processor and resources, allocating what is needed to each container and ensuring that containers cannot disrupt each other. SmartOS, with its secure zones isolation environment, is a true container hypervisor.
See OS virtualization.
Container-native infrastructure uses a container hypervisor to securely run containers on multi-tenant bare metal, not inside VMs. Each container has independent access to the network to which it is attached, including its own network interface(s) and IP(s). Strong isolation between containers prevents interaction and interference, eliminating the security vulnerabilities of weak container implementations and the performance frustrations of noisy neighbors.
Read more about the characteristics of container-native infrastructure.
Linux distributions running securely on bare metal in an lx-branded zone enjoy elasticity and performance benefits that are not achievable when running in hardware virtual machines. Any Linux distribution running in a bare metal container on Triton SmartOS is "container-optimized," including both infrastructure containers and Docker containers.
Your Docker container images will automatically be container optimized when running in Triton's bare metal Docker service. You can run your choice of Ubuntu, Debian, CentOS, and other distributions in Triton infrastructure containers.
Crossbow is the core network implementation in SmartOS that supports virtual network interfaces (VNICs) and switches, virtual network spaces, and bandwidth management for each zone.
See Triton portal.
A logical collection of resources managed by Triton Compute. MNX offers our public cloud, Triton Compute Service. Private cloud customers can configure their own data centers with our Triton DataCenter software.
The data centers in the public cloud use a “shared nothing” architecture that isolates faults. Customers can distribute workloads among multiple data centers for geographic diversity, high availability, and greater survivability in case of a large-scale disaster.
Private cloud implementations may vary from MNX’s standards and many private cloud customers also use MNX's public cloud for bursting or as part of their business continuity plans.
Triton Compute does not include any code from Docker Inc. Instead, Triton implements the Docker Remote API to allow cloud users to use Docker-compatible tools to manage infrastructure and applications in Triton Compute. Docker images on Triton run securely and reliably in SmartOS zones.
A term used by some infrastructure providers to specify a set of virtual machines on which Docker containers may be run. The customer typically manages the VM lifecycle separately from containers and pays for the VM regardless of how many containers run on it.
On Triton, Docker containers run securely on bare metal across the entire data center, with no need to manage (or pay for) a cluster of VMs.
A Docker image is a specific format for sharable container images, including a build file (the Dockerfile). Docker images also introduced a means of sharing images (the Docker registry) and extending them for new or specific uses.
Docker Remote API uses an open schema model. The API is used by the Docker client to communicate with the Docker engine. It can be invoked by other tools such as curl. The API allows you to get details of containers within a Docker host. Triton supports most features of Docker Remote API with Triton Elastic Docker Host alongside CloudAPI to make it easy to manage Triton infrastructure with Docker tools.
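As noted above, the Docker Remote API can be invoked by plain HTTP tools and returns container details as JSON. The sketch below, with a made-up sample response, shows the shape of the JSON a `GET /containers/json` call returns and how a client might pull container names out of it; the IDs, names, and images here are illustrative, not real Triton output.

```python
import json

# Illustrative body in the shape returned by the Docker Remote API's
# "GET /containers/json" endpoint; all values here are made up.
sample_response = json.dumps([
    {"Id": "8dfafdbc3a40", "Names": ["/web-1"], "Image": "nginx:latest", "State": "running"},
    {"Id": "9cd87474be90", "Names": ["/db-1"], "Image": "postgres:13", "State": "running"},
])

def container_names(body: str) -> list[str]:
    """Extract container names from a /containers/json response body.

    The API prefixes each name with a slash, which we strip for display.
    """
    return [name.lstrip("/")
            for container in json.loads(body)
            for name in container["Names"]]

print(container_names(sample_response))  # → ['web-1', 'db-1']
```

Against a live endpoint, the same parsing applies to the response of a `curl` request or any other HTTP client pointed at the Docker host.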
DTrace, short for Dynamic Tracing, is a framework created to troubleshoot kernel and application problems on production systems in real time. It can be used to get a global overview of a system, including CPU time, amount of memory, filesystem and network resources. DTrace can also log arguments from specific functions, as well as list processes accessing a specific file.
Docker’s term for the isolation environment in which the container image executes. Docker had previously used LXC, then switched to libcontainer. Docker containers on Triton execute within a SmartOS zone.
A hardware virtual machine (HVM) is an instance that isolates software from the hardware on which it is mounted. HVMs mimic a bare-metal server setup so that the OS can run directly on top of the virtual machine without additional configuration.
The head node is a server that controls all of the other servers in a data center. It provides the applications for provisioning new SmartMachines, interfacing with the customer database, and providing the public API.
Software that manages the relationship between virtualized resources and the software that depends on them. Hypervisors for hardware virtual machines must emulate storage and network interfaces as well as trap and rewrite CPU instructions so that the many guest operating systems can interoperate on the same physical hardware. Container hypervisors use OS virtualization to isolate and secure processes and resources.
An image is a binary distribution of software which is deployed to be run in a container or virtual machine. This can be a template for an application, a database, an operating system, or other software. Images also contain permissions for specific Triton accounts as well as information which may connect the instance to other instances.
The isolation environment is what separates one container from other containers, i.e. the specific implementation of the container via OS virtualization. LXC and libcontainer are isolation environments based on Linux cgroups and namespaces. Those solutions were designed for convenience, rather than security. SmartOS zones are an alternative isolation environment designed from the start for trusted security worthy of a public cloud.
KVM is a commonly used x86 hardware virtual machine environment. On Triton, KVM virtual machines are instantiated inside a container.
Libcontainer is the execution driver for Docker. Libcontainer provides a standard interface for creating containers inside an OS. It can create containers with namespaces, cgroups, and filesystem access controls, operating in a predictable way within the OS.
A type of branded zone that presents the interfaces and facilities of a Linux kernel and can run binaries compiled for the Linux kernel. A number of different Linux distributions can be run in lx-branded zones; see container-optimized Linux.
A library for managing Linux containers which depends on Linux kernel features mainlined in 2013. Early versions of Docker used LXC as the execution driver before introducing libcontainer. For security and performance reasons, MNX’s offerings are not based on LXC or any Linux container technology. See SmartOS zones for Triton’s secure alternative.
Microservices are small, modular applications which are built and deployed independently of one another. These services each run unique processes and communicate through an orchestration tool in order to form a larger application. Developing microservices is particularly useful when supporting a range of platforms and devices. Because the application is separated into smaller parts, updating one piece shouldn't break the application as a whole.
See Triton portal.
See Overlay network.
See private cloud.
OS virtualization, otherwise known as container virtualization, is an alternative to hardware virtualization that leverages the operating system to host isolation environments in which applications can be independently run.
Some approaches to OS virtualization were designed for convenience and have known security limitations. Other approaches to OS virtualization, like SmartOS zones, were designed for secure isolation from the start. OS virtualization solutions that provide security and isolation at least as good as hardware virtualization can be called container hypervisors.
An overlay network is a computer network built on top of another network. One example of an overlay network is the use of VLANs on a physical network. Each VLAN identifier can be used to create a unique network.
Overlay networks are implemented in an operating system by overlay devices, each of which combines an encapsulation method, an identifier, and a search and lookup method.
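The VLAN example above hinges on the identifier: IEEE 802.1Q gives each frame a 12-bit VLAN ID, which is why a physical network supports at most 4,096 distinct VLANs. A minimal sketch of that tag, packed the way it appears on the wire:

```python
import struct

TPID = 0x8100  # IEEE 802.1Q Tag Protocol Identifier

def pack_vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Pack an 802.1Q VLAN tag: a 16-bit TPID followed by a 16-bit TCI
    (3-bit priority, 1-bit drop-eligible indicator, 12-bit VLAN ID)."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN IDs are 12 bits: 0-4095")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

tag = pack_vlan_tag(42)
assert len(tag) == 4  # the tag adds 4 bytes to each Ethernet frame
```

The 12-bit limit is one motivation for VXLAN-style overlays, whose 24-bit identifier allows far more than 4,096 networks.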
See Triton portal.
Generically, this can refer to on-premises or colocated equipment owned and managed for private use. MNX offers Triton DataCenter to manage private clouds. This is the same software we use to manage our data centers.
A public cloud is based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the general public. MNX’s public cloud, Triton Compute Service, is container-native. It allows you to securely deploy and operate containers at bare metal speeds.
A geographic location with one or more Triton data centers.
A historical name for Triton Compute.
The underlying technology that powers Triton’s container hypervisor. SmartOS is a derivative of OpenSolaris, forked in 2010 and maintained and updated independently since. SmartOS is open source (GitHub repo) and shares much code and many contributors with Illumos (GitHub repo), to which we upstream SmartOS development.
The underlying technology that isolates and secures containers in Triton, including Docker, infrastructure, and KVM-in-a-container. Zones debuted in 2002 and have been used to power Triton’s services since 2006. Zones are an implementation of OS virtualization.
Software defined infrastructure (SDI) allows API-driven management of the compute, network, and storage resources of a data center for automatic operation.
Triton is an SDI solution that runs on commodity hardware to offer a "data center in a box" convenience.
Software defined networking (SDN) allows operators to change the network topology within a data center to suit the needs of an application without needing to rewire the data center. SDN can be controlled by the data center operator, but new approaches to SDN extend that control to cloud customers (see Triton fabric networks).
A family of products including:
- Triton SmartOS container hypervisor
- Triton Compute infrastructure orchestration
- Triton ContainerPilot application orchestration
Triton Compute is the underlying, open-source infrastructure management software that powers MNX’s public Triton Compute Service and Triton DataCenter in private clouds worldwide. Triton Compute provides complete infrastructure virtualization and API-driven automation.
The cloud provisioning API for Triton Compute is CloudAPI, which is supported in a number of tools and language bindings, including the Triton CLI, HashiCorp's Terraform and Packer, Apache Libcloud, Golang, Chef, and others.
Triton Compute also offers an implementation of the Docker Remote API that works as a parallel cloud provisioning API, exposing each Triton-powered data center as a single Docker host.
MNX's public cloud, an end-to-end solution which is compatible with popular container schedulers, ensuring application portability. Containers run on bare metal with built-in networking and storage.
The open source infrastructure management software that powers MNX’s public cloud offerings is available to power private clouds; it was formerly known as Triton Enterprise. It is also a historical name for Triton Compute.
Triton Elastic Docker Host is the integration of the Docker Remote API and triton-docker to make it simple to manage Triton infrastructure with Docker tools. Triton Elastic Docker Host supports most of the features of Docker's 1.22 spec. Every data center is a single Triton Elastic Docker Host endpoint.
Users have the ability to create overlay (VXLAN) networks on Triton. These networks are private to the user who creates them and offer a convenient way to isolate and securely connect application components as desired.
Fabrics support up to 4,096 private layer 2 and layer 3 networks. Cloud users can define layer 2 and 3 networks independently and connect multiple network interfaces to each compute instance, allowing complete control over the network topology of their cloud infrastructure.
The Triton portal provides access to your Triton Compute Service account, Triton Object Storage (i.e. Manta), Triton Converged Analytics, and your compute instances. From the portal, it is possible to monitor account usage and set up role based access control.
See container hypervisor.
A marketing term for a hardware virtual machine operating environment that runs in a very limited, purpose-built operating system. A commercial type 1 hypervisor example is VMware's ESX.
Compare to a container hypervisor or type 0 hypervisor.
A marketing term for a hardware virtual machine operating environment that runs on a consumer operating system.
User-defined networks are those created by the user so that they can connect or isolate application components using network topology defined by the cloud user. Some clouds call these "VPC" networks. On Triton, they're called fabric networks.
A virtual machine is a full simulation of a computer system, providing the functionality of a physical computer. VMs may involve hardware, software, or a combination of the two.
A term used by some infrastructure providers to define a private network environment running in a public cloud. See also Triton User Defined Network.
"Virtual extensible LAN," a loosely standardized overlay network definition.
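The VXLAN encapsulation described in RFC 7348 prepends an 8-byte header whose key field is a 24-bit VXLAN Network Identifier (VNI), allowing roughly 16 million segments rather than VLAN's 4,096. A minimal sketch of how that header is laid out:

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header from RFC 7348: an 8-bit flags field
    (0x08 = "valid VNI"), 24 reserved bits, the 24-bit VXLAN Network
    Identifier (VNI), and 8 more reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("the VNI is a 24-bit field")
    return struct.pack("!II", 0x08 << 24, vni << 8)

header = pack_vxlan_header(5001)
assert len(header) == 8  # fixed 8-byte header before the inner frame
```

The full encapsulated packet wraps this header, plus the original layer 2 frame, inside an outer UDP/IP packet, which is what lets the overlay ride on any routed underlay network.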
ZFS is a file system and logical volume manager, integrated within Triton compute nodes. ZFS supports data compression, high storage capacity, snapshots, continuous integrity checking and automatic repair.
See SmartOS zone.