Sizing your application
To properly size an application or process, you need to collect or calculate the following information (a sample collection script follows the list):
- CPU usage - average
- CPU usage - peak (times and durations)
- RAM usage - average
- RAM usage - peak (times and durations)
- Disk space required - if volatile, then collect the average and max, otherwise just the max
- Disk I/O rates - average and peak (KB/sec)
- Network I/O rates - average and peak (KB/sec)
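As a minimal sketch of gathering these figures, the following Python script samples CPU, RAM, disk, and network counters using the third-party psutil library (an assumption; any monitoring or APM tool that records averages and peaks works just as well):

```python
# Sketch: sample system metrics once per second and report average/peak.
# Assumes the third-party psutil library; any monitoring tool works too.
import psutil

samples = []
disk0 = psutil.disk_io_counters()
net0 = psutil.net_io_counters()

for _ in range(60):                       # sample for one minute
    cpu = psutil.cpu_percent(interval=1)  # % of total CPU capacity
    ram = psutil.virtual_memory().percent
    samples.append((cpu, ram))

disk1 = psutil.disk_io_counters()
net1 = psutil.net_io_counters()

cpus, rams = zip(*samples)
disk_kbs = (disk1.read_bytes + disk1.write_bytes
            - disk0.read_bytes - disk0.write_bytes) / 60 / 1024
net_kbs = (net1.bytes_sent + net1.bytes_recv
           - net0.bytes_sent - net0.bytes_recv) / 60 / 1024
print(f"CPU avg {sum(cpus)/len(cpus):.1f}%, peak {max(cpus):.1f}%")
print(f"RAM avg {sum(rams)/len(rams):.1f}%, peak {max(rams):.1f}%")
print(f"Disk I/O {disk_kbs:.0f} KB/s, network I/O {net_kbs:.0f} KB/s")
```

In practice you would run such sampling over days or weeks rather than a minute, so that the peaks capture real workload cycles.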
Use the average and max figures for CPU, RAM, and disk, as these can be dynamically reconfigured on Triton to cater for peak workloads.
Sizing a new application
The ideal way to size a new application is to test it against realistic generated workloads. This can be challenging, but it is the only real way to ensure accurate data and prevent oversizing instances.
Applications rarely scale their resource usage linearly with the number of users/connections/transactions. Generally, resource usage per user goes down as the number of users goes up. If it is the other way around, and per-user usage rises with the number of users, you have an application problem that needs to be fixed: you will not be able to cope with increasing workload.
It is essential to know exactly how resource usage will scale before you go live. Services will often need to be deployed as multiple instances/containers, and you need to know that adding an extra container will actually add 1/N of capacity.
If full-scale testing is not possible, then test to the maximum scale you can and assume resource usage scales linearly beyond that. Because per-user usage usually falls as user numbers grow, this is the worst-case scenario, and it should save you from panicked scale-ups when you go live. A worked example follows.
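For instance, under the linear assumption, if a test at 5,000 users measured 280% cumulative CPU and 6 GB of RAM, a 20,000-user target would be provisioned at 4x those figures. A toy sketch of that extrapolation, with all numbers hypothetical:

```python
# Sketch: linear extrapolation from the largest tested scale.
# All figures are hypothetical examples, not measured data.
tested_users = 5_000
target_users = 20_000
tested = {"cpu_percent": 280, "ram_gb": 6, "disk_io_kbs": 900}

factor = target_users / tested_users
projected = {k: v * factor for k, v in tested.items()}
print(projected)  # {'cpu_percent': 1120.0, 'ram_gb': 24.0, 'disk_io_kbs': 3600.0}
```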
When to scale up (or down)
All scaling, whether horizontal across servers or vertical within a container, should be a pre-planned event based on measurement and prediction of growth. This ensures you can plan for enough resources to be available on the host server.
Apply sizing data to the cloud
Follow these steps to apply sizing data to Triton public cloud.
Identify the instance size that has the required RAM and disk. This is your baseline; you cannot use a smaller size regardless of your CPU and I/O requirements.
Infrastructure containers use packages whose names start with g4-*, and hardware instances use packages whose names start with k4-*.
Look at the CPU CAP and determine the CPU you need. Note that other virtualization platforms are hardware based, so the CPU count of your previous VM, or of the actual hardware if you come from a bare-metal environment, is a real number that you can apply to the following calculation.
Your CPU measurements are a cumulative CPU value (max = CPU cores x 100), so CPUs = measured CPU / 100.
Round your answer up to the next integer. If you are migrating from relatively old hardware, you may want to factor in increased CPU clock speeds, which could reduce the overall requirement.
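A one-line version of that calculation, assuming the cumulative measurement convention described above (the figure of 340 is a hypothetical peak):

```python
# Convert a cumulative CPU measurement (max = cores x 100) to whole CPUs.
import math

measured_cpu = 340                 # hypothetical peak: 3.4 cores of work
cpus_needed = math.ceil(measured_cpu / 100)
print(cpus_needed)                 # 4 -> look for a CPU CAP of at least 400
```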
Sizing for infrastructure containers
CPU CAP is the maximum overall amount of CPU you will get. Your instance has access to all the CPU cores on the hardware, so multi-threading is truly possible within the constraints of the CAP. Multi-threaded systems will typically benefit from being exposed to 32, 40, or even 48 cores.
Sizing for hardware instances
These are hardware virtualized instances, so vCPU is the actual number of CPUs available to the instance operating system. Multi-threading capabilities are more limited. If you relied on a certain number of cores previously, then use at least the same value for your new instance size.
If the instance size selected at the start of this process does not have sufficient vCPUs, then step up through the instance sizes until you reach the required vCPU value.
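As a sketch of that stepping-up selection (the package names and figures are illustrative only, not an actual Triton package list):

```python
# Sketch: step up through hardware instance sizes until RAM, disk,
# and vCPU requirements are all met. Values are illustrative only.
packages = [  # (name, ram_gb, disk_gb, vcpus), ordered smallest to largest
    ("k4-general-4g", 4, 100, 2),
    ("k4-general-8g", 8, 200, 4),
    ("k4-general-16g", 16, 400, 8),
]

need_ram, need_disk, need_vcpu = 6, 150, 6
for name, ram, disk, vcpu in packages:
    if ram >= need_ram and disk >= need_disk and vcpu >= need_vcpu:
        print("use", name)  # -> use k4-general-16g
        break
```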
Disk I/O
If your application is very I/O intensive, you may need the benefit of solid-state drives (SSDs). These are obtained via the fastdisk instance sizes.
However, the combination of ZFS and modern disk systems means that normal disks are usually suitable for most systems. If you were using SSDs previously, then choose fastdisk; otherwise stick with the standard disk instance sizes. For guidance only, here are some recent figures on disk performance:
- SSD: the range is generally above 200 MB/s, up to 550 MB/s for cutting-edge drives
- HDD: the range can be anywhere from 50-120 MB/s for writes/copies
Reads on an SSD are roughly 30% faster.
Network I/O
Theoretical bandwidth is 10 Gb/s for infrastructure containers and 1 Gb/s for hardware instances. It is almost impossible for anything other than the very largest instance sizes to saturate the network interface. It is more likely that you will exhaust CPU first, so your CPU measurement will already have taken this into account.