Sizing your application
To properly size an application or process, you need to collect or calculate the following information (one way to gather these figures is shown after the list):
- CPU usage - average
- CPU usage - peak (times and durations)
- RAM usage - average
- RAM usage - peak (times and durations)
- Disk space required - if volatile, then collect the average and max, otherwise just the max
- Disk I/O rates - average and peak (KB/sec)
- Network I/O rates (KB/sec)
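As a hypothetical illustration, on a Linux host with the sysstat package installed you could collect most of these figures with standard tools (the 60-second interval and one-hour window below are arbitrary choices):
#sar -u 60 60          # CPU usage, sampled every 60s for an hour
#sar -r 60 60          # RAM usage
#df -h                 # disk space currently consumed
#iostat -dxk 60 60     # disk I/O rates (KB/sec)
#sar -n DEV 60 60      # network I/O rates per interface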
Use the average and max figures for CPU, RAM, and disk, as these can be dynamically re-configured on Triton to cater for peak workloads.
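For example, on SmartOS a compute node operator can adjust a running instance's CPU cap with vmadm (the value below is illustrative; cpu_cap is expressed as a percentage of a single core, so 400 equates to 4 cores):
#vmadm update UUID cpu_cap=400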
Network I/O rates are governed by the hardware and typically cannot be significantly adjusted, other than through migration to newer hardware. In most systems, you will run out of CPU long before the network interfaces are saturated.
Sizing a new application
The ideal way to size a new application is to test it against generated, realistic workloads. This can be challenging, but it is the only real way to ensure accurate data and prevent oversizing instances.
Applications rarely scale their resource usage linearly in relation to the number of users/connections/transactions. Generally, resource usage per user will go down as the number of users goes up. If it's the other way around and per-user resource usage goes up with the number of users, you have an application problem that needs to be fixed, as you will not be able to cope with increasing workload.
It is essential to know exactly how resource usage will scale before you go live. Services will often need to be deployed as multiple instances/containers and you need to know that adding an extra container will add 1/N capacity.
If full-scale testing is not possible, then you should test to the maximum scale possible and assume a linear scale-up of resources, as in the worked example below. This should represent the worst-case scenario, but it should also ensure you don't have to do panicked scale-ups when you go live.
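As a hypothetical example: if a test at 2,500 users (one quarter of an expected production load of 10,000) consumed 2 vCPUs and 8GB of RAM, the linear assumption sizes production at 4 times those figures, i.e. 8 vCPUs and 32GB of RAM. Sub-linear per-user growth in practice would then leave you headroom rather than a shortfall.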
Apply sizing data to Triton
Follow these steps to apply sizing data to Triton.
- Identify the instance size that has the required RAM and Disk. This is your baseline and you cannot use a smaller size regardless of your CPU and I/O requirements.
- Look at the CPU and determine the vCPUs you need. Note that other virtualization platforms allocate CPU in hardware terms, so whatever your VM had (or your actual hardware, if you come from a bare metal environment) is a real number which you can apply to the following calculation. Your CPU measurements may be either:
  - A percentage of all CPU available (Max=100). In this case, vCPUs = (Measured CPU × Cores) / 100.
  - A cumulative CPU value (Max=CPU Cores × 100). In this case, vCPUs = Measured CPU / 100.
Round your answer up to the next integer. If you are migrating from relatively old hardware you may want to factor in increased CPU clock speed, which could reduce the overall requirement.
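As a hypothetical worked example, suppose an 8-core server averages 60% total CPU utilization (the percentage-of-all-CPU style of measurement):
vCPUs = (60 × 8) / 100 = 4.8, which rounds up to 5 vCPUs.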
Sizing for infrastructure containers
vCPU is a CAP on the overall amount of CPU you will get. Your instance has access to all the CPU Cores on the hardware so multi-threading is truly possible within the constraints of the CAP. Multi-threaded systems will typically see a benefit from being exposed to 32, 40 or even 48 cores.
Sizing for hardware instances
These instances are hardware virtualized, so vCPU is the actual number of CPUs available to the instance operating system. Multi-threading capabilities are more limited. If you relied on a certain number of cores previously, use at least the same value for your new instance size.
If the instance size selected at the start of this process does not have sufficient vCPU, then step up the instance sizes until you reach the required vCPU value.
Disk I/O
If your application is very I/O intensive, you may need the benefit of SSDs. These are obtained via the fastdisk instance sizes. However, the combination of ZFS and modern disk systems means that normal disks are usually suitable for most systems. If you were using SSDs, choose fastdisk; otherwise, stick with standard disk instance sizes. For guidance only, here are some recent stats on disk performance:
- SSD: the range is generally above 200 MB/s, up to 550 MB/s for cutting-edge drives
- HDD: the range can be anywhere from 50–120 MB/s for writes/copies. Reads on an SSD are roughly 30% faster.
Network IO
Theoretical bandwidth is 10 Gb/s for infrastructure containers and 1 Gb/s for hardware instances. It is almost impossible for anything other than the very largest instance sizes to saturate the network interface. It is more likely that you will exhaust CPU first, so your CPU measurement will already have taken this into account.
Swap
In all Triton packages, the max_swap must be at least two times the size of max_physical_memory. For example, if max_physical_memory is set to 16GB, the max_swap must be at least 32GB. This allows for overhead memory to be allocated during startup.
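To check these values for an existing instance, you could run the following from the SmartOS global zone (assuming the json tool that ships with SmartOS; UUID is your instance's UUID):
#vmadm get UUID | json max_physical_memory max_swap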
When to scale up (or down)
All up-scaling, both horizontally across servers and vertically within a container, should be a pre-planned event based on measurement and prediction of growth. This ensures you can plan for enough resources to be available on the host server.
Adding additional disks
In bhyve, the package disk space is provisioned as two disks. As an admin, you can now add up to 6 additional disks, for a total limit of 8. You can use vmadm commands to do this.
To begin adding additional disks, check the instance's flexible disk settings:
#vmadm get UUID | grep flex
To display the properties for the ZFS file system:
#zfs list -o type,volsize,quota,refquota,reservation,refreservation,name -r -t all zones/UUID
To increase the quota to 21.6G (flexible_disk_size is expressed in MiB):
#vmadm update UUID flexible_disk_size=$((20 * 1024))
To add a disk:
- First, stop the VM.
- Make sure that you have an add disk payload JSON file (a sketch of such a payload follows these steps). If you don't specify a PCI slot, the next available one will be chosen.
- Run the command:
#vmadm update UUID -f add_disks.json
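As a minimal sketch of what an add disk payload might look like (the size and model values here are illustrative; size is in MiB, so 10240 adds a 10 GiB virtio disk):
#cat > add_disks.json <<'EOF'
{
  "add_disks": [
    {
      "size": 10240,
      "model": "virtio"
    }
  ]
}
EOF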
What do you want to do next?
- To learn more about when to resize instances, including adding additional disks, see When to resize an instance.
- See Managing packages for considerations on managing packages.
- For information on how to configure flexible disk space with bhyve, SmartOS, or Docker instances, see Configuring packages.