Sizing your application

Modified: 26 Jan 2023 22:12 UTC

To properly size an application or process, you need to collect or calculate the following information:

  * Average and peak CPU utilization
  * Average and peak RAM usage
  * Average and peak disk usage
  * Network I/O rates

Use the average and max figures for CPU, RAM, and disk, as these can be dynamically re-configured on Triton to cater for peak workloads.

Network I/O rates are governed by the hardware and typically cannot be significantly adjusted, other than through migration to newer hardware. In most systems, you will run out of CPU long before the network interfaces are saturated.
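
As a sketch of how these figures might be gathered on an illumos/SmartOS system (the 60-second intervals and counts here are illustrative, not prescriptive):

To sample CPU utilization once a minute for an hour:

#sar -u 60 60

To watch memory, swap, and paging activity over the same period:

#vmstat 60 60

To check current disk usage:

#df -h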

Sizing a new application

The ideal way to size a new application is to test it against realistic generated workloads. This can be challenging, but it is the only reliable way to gather accurate data and avoid oversizing instances.

Applications rarely scale their resource usage linearly with the number of users, connections, or transactions. Generally, resource usage per user will go down as the number of users goes up. If it is the other way around and per-user resource usage goes up with the number of users, you have an application problem that needs to be fixed: you will not be able to cope with increasing workload.

It is essential to know exactly how resource usage will scale before you go live. Services will often need to be deployed as multiple instances/containers, and you need to know that adding one container to an existing N will add roughly 1/N of the current capacity; for example, going from four containers to five should add about 25%.

If full-scale testing is not possible, test to the largest scale you can and assume resource usage scales up linearly beyond that. This should be the worst-case scenario, and it should also ensure you don't have to do emergency scale-ups when you go live.

Apply sizing data to Triton

Follow these steps to apply sizing data to Triton.

Round your answer up to the next integer. If you are migrating from relatively old hardware, you may want to factor in increased CPU clock speed, which could reduce the overall requirement.

Sizing for infrastructure containers

vCPU is a cap on the overall amount of CPU you will get. Your instance has access to all of the CPU cores on the hardware, so multi-threading is truly possible within the constraints of the cap. Multi-threaded systems will typically see a benefit from being exposed to 32, 40, or even 48 cores.
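
From inside an infrastructure container, you can check both the cap and the visible cores (a sketch; prctl reports the zone.cpu-cap resource control, where a value of 100 corresponds to one full core):

To display the CPU cap applied to your container:

#prctl -n zone.cpu-cap $$

To count the cores the container can see:

#psrinfo | wc -l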

Sizing for hardware instances

These are fully virtualized hardware instances, so vCPU is the actual number of CPUs available to the instance's operating system. Multi-threading capabilities are therefore more limited. If you relied on a certain number of cores previously, use at least the same value for your new instance size.

If the instance size selected at the start of this process does not have sufficient vCPU, then step up the instance sizes until you reach the required vCPU value.
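
To confirm the vCPU count from inside a hardware instance, a minimal check (assuming a Linux guest; psrinfo serves the same purpose on a SmartOS guest):

#nproc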

Disk I/O

If your application is very I/O intensive, you may need the benefit of SSDs. These are obtained via the fastdisk instance sizes. However, the combination of ZFS and modern disk systems means that normal disks are usually suitable for most systems. If you were using SSDs before, choose fastdisk; otherwise, stick with standard disk instance sizes. For guidance, you can sample your current disk I/O rates directly, as in the sketch below.
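
A sketch for sampling disk I/O on an illumos/SmartOS system (the 60-second interval and the count of 10 are illustrative); iostat's extended device statistics show per-device throughput and service times:

#iostat -xn 60 10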

Network I/O

Theoretical bandwidth is 10Gb/s for infrastructure containers and 1Gb/s for hardware instances. It is almost impossible for anything other than the very largest instance sizes to saturate the network interface. It is more likely that you will exhaust CPU first, so your CPU measurement will already have taken this into account.
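
To watch live per-link network throughput on a SmartOS compute node, a minimal sketch (dladm prints link statistics every interval seconds):

#dladm show-link -s -i 60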

Swap

In all Triton packages, the max_swap must be at least two times the size of max_physical_memory. For example, if max_physical_memory is set to 16GB, the max_swap must be at least 32GB. This allows for overhead memory to be allocated during startup.
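
To confirm these values for an existing instance, a quick check (UUID stands in for your instance's UUID; the json tool ships with SmartOS):

#vmadm get UUID | json max_physical_memory max_swap

Both properties are expressed in MiB, so a 16GB instance should report 16384 and at least 32768.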

When to scale up (or down)

All scaling up, both horizontally across servers and vertically within a container, should be a pre-planned event based on measurement and prediction of growth. This ensures that you can plan for there to be enough resources available on the host server.

Adding additional disks

In bhyve, the package's disk space is provisioned as two disks. As an admin, you can now add up to six additional disks, for a total limit of eight. The vmadm commands below allow you to do this.

To begin adding additional disks, first check the instance's flexible disk configuration:

#vmadm get UUID | grep flex

To display the properties for the ZFS file system:

#zfs list -o type,volsize,quota,refquota,reservation,refreservation,name -r -t all zones/UUID

To increase the flexible disk size to 20 GiB (the value is given in MiB), which raises the ZFS quota to 21.6G:

#vmadm update UUID flexible_disk_size=$((20 * 1024))
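
To verify that the new quota has taken effect (reusing the dataset path from above):

#zfs list -o quota zones/UUID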

To add a disk:

  1. First, stop the VM.
  2. Make sure that you have an add-disk payload JSON file (see the example after these steps). If you don't specify a PCI slot, the next available one will be chosen.
  3. Run the command:
#vmadm update UUID -f add_disks.json
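
A minimal add_disks.json sketch, assuming a 10 GiB virtio data disk (size is given in MiB; pci_slot is omitted here so that the next available slot is chosen):

#cat > add_disks.json <<'EOF'
{
  "add_disks": [
    {
      "size": 10240,
      "model": "virtio"
    }
  ]
}
EOF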
