Linux LXC Containers
Overview (LXC, Linux Triton Compute Node)
Linux Triton Compute Node adds Linux as a base Compute Node operating system so that Linux LXC/LXD containers can be managed within Triton DataCenter. Linux Compute Nodes expand Triton's hardware compatibility list, make Triton more inclusive of Linux ecosystem capabilities, and address developer and product-owner concerns about compatibility with SmartOS systems. Linux Compute Nodes run side by side with SmartOS Compute Nodes, allowing Triton operators to choose the appropriate Compute Node kernel based on application needs.

Linux LXC containers in Triton maintain a high-performance, virtualization-free container architecture while offering increased compatibility with Linux kernel features such as iptables and sysctl variables. Triton LXC instances use standard images provided by the Linux community or Canonical. Triton Linux Compute Nodes may be an ideal platform image (PI) for full Linux bare-metal hosts powering large Hadoop and big-data workloads, or other application stacks that depend on specialized hardware or the Linux OS.
Status
Linux Triton Compute Node is a technology preview in early, active development and is not yet appropriate for production or mission-critical workloads. We invite community participation in identifying ideal use cases, prioritizing the roadmap of Triton integrations, and reporting any bugs or compatibility issues found in testing. Triton Linux Compute Node does not currently include HVM capabilities; it relies solely on LXC instances backed by ZFS on Linux. Triton Linux Compute Nodes are fully integrated into Triton CloudAPI, but they are not compatible with all Triton features, and current gaps in functionality are noted below. In particular, note that Linux Compute Node LXC instances do not currently support Triton's fabric/VXLAN networking, so only traditional VLANs are supported for LXC instances.
Prerequisites
Your administrator must set up Linux Compute Nodes in your Triton deployment. If LXC/LXD instances are enabled, you will find images of type lxd available via the triton image ls command. Once a Triton Linux Compute Node is set up and enabled, users may provision LXC instances on traditional VLAN networks, much like Virtual Machines, LX, and SmartOS instances. Administrators may find installation details here (link to other page).
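For example, with the node-triton CLI configured against your datacenter, you can check whether any LXD images are available. The type=lxd filter shown here is passed through to CloudAPI's image listing; check triton image ls --help on your installation if the filter is not accepted.

    # List images of type "lxd"; an empty result usually means your operator
    # has not yet imported LXD images or enabled Linux Compute Nodes.
    triton image ls type=lxd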
Provisioning Linux Containers (Image)
LXC instances can be provisioned using images of type lxd. Check with your Triton administrator to see if there are preferred dedicated LXC packages that should be used. If not, you can use any non-HVM instance package type without issue. Users may provision LXC instances on traditional VLAN networks similarly to Virtual Machines, LX, and SmartOS instances.
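As a rough sketch, provisioning looks the same as for any other instance type. The image, package, and network names below are placeholders for values from your own datacenter, and the --network option assumes a reasonably recent node-triton CLI:

    # my-lxd-image  - an image of type lxd (see `triton image ls type=lxd`)
    # general-1g    - any non-HVM package (see `triton package ls`)
    # my-vlan-net   - a traditional (non-fabric) VLAN network
    triton instance create \
        --name=lxc-test-0 \
        --network=my-vlan-net \
        my-lxd-image general-1g

    # Wait for the provision to complete
    triton instance wait lxc-test-0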
Managing Linux Containers (resize, storage, networking, CNS, SSH, etc.)
SSH
SSH keys will be added to the root and admin (or ubuntu) user's ~/.ssh/authorized_keys file. If LXC instances are on a network that you can route to, you will be able to SSH directly to them.
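For example, assuming the instance name used above and that your workstation can route to the instance's network:

    # Print the instance's primary IP and SSH in as root
    triton instance ip lxc-test-0
    ssh root@$(triton instance ip lxc-test-0)

Depending on the image's default user, triton instance ssh lxc-test-0 may also work as a shortcut.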
Networking
LXC containers do not yet support fabric networking. Networks assigned to instances must be traditional VLANs. If a fabric network is added to an LXC instance, the Add Network or Provision operation will fail. External and internal traditional VLANs, as well as multi-homed instances, are supported.
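To see which networks are traditional VLANs rather than fabrics, you can list networks and check the fabric column (column names may vary slightly by CLI version):

    # Networks showing "true" in the FABRIC column cannot be attached to LXC
    # instances; pick one where FABRIC is false (a traditional VLAN).
    triton network ls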
Container Naming System
LXC instances will automatically be assigned CNS names, including CNS service names. However, because mdata-put/delete operations are not yet supported, it is not yet possible to set triton.cns.status from inside the container. Instead, you can set it from CloudAPI.
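One way to do that from outside the container is the raw CloudAPI passthrough in the triton CLI, which calls UpdateMachineMetadata. The option names below (-X, -d) are assumptions; check triton cloudapi --help for your CLI version.

    # Mark the instance as down so CNS removes it from DNS...
    triton cloudapi -X POST /my/machines/<instance-uuid>/metadata \
        -d '{"triton.cns.status": "down"}'

    # ...and mark it up again when it should return to DNS.
    triton cloudapi -X POST /my/machines/<instance-uuid>/metadata \
        -d '{"triton.cns.status": "up"}'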
Known Triton Gaps / Divergences
Linux LXC containers do not support the following Triton features.
- CMON metrics
- Delegated ZFS dataset
- Fabric Networks (may be added at a future date)
- Instance resize
- Metadata GET/PUT from inside the container (in progress)
- User-scripts (in progress)
- Volumes (may be added at a future date)
- /native tools (adding soon)
- Triton Custom Image (may be added at a future date)
- Triton Cloud Firewall (may be added at a future date, iptables can be used inside the instance)
- LXD public image coverage (not all images are supported)
- CPU Fair Share Scheduler / CPU package caps behave differently than on the rest of Triton (see below)
CPU capacity management uses different mechanisms on Linux. When a compute node is not under load, instances can burst higher and for longer, because CPU scheduling is handled by native Linux scheduling features rather than by Triton. This may give the false impression that performance is "degraded" once the CN comes under high load; that is not the case. When a CN is under high load, all instances are still guaranteed a minimum level of performance relative to the chosen package and the number and speed of the physical processors. The fact that an instance can consume more than its fair share when load is low does not mean that instances are being deprived of CPU time when load is high.
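If you want to see the Linux-native controls in effect for your instance, you can inspect the cgroup CPU settings from inside the container. Exactly how Triton maps package values onto these controls is an implementation detail and may change; the paths below assume a cgroup v2 host where the container sees its own cgroup root.

    # Relative CPU weight used when the CN is under contention
    cat /sys/fs/cgroup/cpu.weight
    # Hard CPU quota, if any ("max" means no ceiling is enforced here)
    cat /sys/fs/cgroup/cpu.max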
For questions, bugs, or other comments, find us on IRC or the mailing list, or contact your account manager if you have enterprise support. We will update this page as gaps are closed and improvements are merged and released.