Common configurations

Modified: 08 Sep 2022 04:28 UTC

This section describes how to achieve several common configurations and works through examples of how to create those networks. For this, we'll examine a service that consists of five instances: a load balancer, three API servers, and a database.

The load balancer accepts all of the traffic from users of the service and passes it to one of three API servers. Those API servers in turn make requests to the database.

Non-fabric deployment

Traditionally, this might be deployed such that each instance has two network interfaces: one on the public network and one on the data-center wide private network.

While this works, the downsides are that you need more machines with public IP addresses than is desirable, and you have to ensure that the firewall rules on the backend database instance are always kept up to date.

Single private network

This form employs a single private network on its own VLAN. To start with, you would use CloudAPI or the Triton Compute Service portal to first create a VLAN and then create a network on top of it.

In this case, let's say we create VLAN 300, and on top of it we create a network named 'app01', keeping the default setting of having it act as an Internet gateway. This network has a capacity of 32 addresses; however, 5 of them are reserved, leaving us with 27 addresses. The size of the network should be chosen based on a combination of current need and plans for future expansion.
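The capacity arithmetic above can be sketched with Python's ipaddress module. The subnet address here is a hypothetical example, and the count of 5 reserved addresses is taken from the text:

```python
import ipaddress

# Hypothetical /27 subnet; the prefix length alone determines capacity.
net = ipaddress.ip_network("192.168.128.0/27")

total = net.num_addresses   # 32 addresses in a /27
reserved = 5                # per the text, 5 addresses are reserved
usable = total - reserved   # leaving 27 for instances

print(total, usable)        # 32 27
```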

With both of these created, we create our load balancer with two network interfaces: one on the public network and one on 'app01'.

We create the three API servers and the database with a single network interface each, on 'app01'.

This configuration has several advantages. Only the load balancer is on the public Internet, as none of the other instances need to be reached from the public Internet by users of the application.

Another benefit is that the private network 'app01' is private to the customer, so one doesn't need to worry about locking it down out of the box. The only machines that will be able to communicate on that network are those that are on it. However, firewalls may still provide useful protections and can be leveraged for defense in depth.

One difference between this setup and the previous one is that, because not every instance has a public IP address, one won't be able to SSH directly into all of the instances. A common solution to this problem is to create another instance with an address on both the public network and the private network 'app01', whose primary purpose is to serve as a means of accessing the other instances on the 'app01' network.

Using multiple networks

The previous example used only a single network. However, another common way to approach this problem is to use three separate networks: one for the load balancers, one for the web servers, and one for the databases.

Our load balancer instance will have two interfaces: one on the public network and one on the load balancer network.

Our web servers will have two interfaces: one on the load balancer network and one on the web server network.

Our database servers will have two interfaces: one on the web server network and one on the database network.

While we have broken things up in this way, we could divide things in many other ways, depending on how we want to isolate components. With this layout, if something happens to the load balancers, they won't be able to reach the databases. This is different from the single network example, where every instance was able to reach every other one.

Again, like the single network example, the only instance on the public Internet is the load balancer; all the rest are not accessible from it. To access them, one may create an instance similar to the one described in the single network example.
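To make the isolation property concrete, here is a small Python sketch of reachability. The network names ('lb0', 'web0', 'db0') and the exact interface assignments are illustrative assumptions, not names Triton assigns; the point is only that two instances can talk directly when they share a network:

```python
# Map each role to the set of networks it has an interface on.
# Hypothetical names: 'lb0', 'web0', and 'db0' stand in for the
# three private networks; 'public' is the public network.
networks = {
    "loadbalancer": {"public", "lb0"},
    "webserver": {"lb0", "web0"},
    "database": {"web0", "db0"},
}

def can_communicate(a: str, b: str) -> bool:
    """Two instances can talk directly only if they share a network."""
    return bool(networks[a] & networks[b])

print(can_communicate("loadbalancer", "webserver"))  # True
print(can_communicate("webserver", "database"))      # True
print(can_communicate("loadbalancer", "database"))   # False
```

Note that the load balancer and the database share no network, so a compromise of the load balancer does not give direct access to the database.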

Single giant private network

There are many different ways to design networks and provide isolation. It may be that instead of having multiple logical networks, one wants a data-center wide private network, as was traditionally used, but private to that customer rather than accessible by other customers.

In this world, there's no need to create any additional networks; instead, one can use the default network that we create, which provides over 1,000 addresses. Starting with this model and evolving into smaller, discrete networks may also be a useful approach, depending on how your network needs scale and grow.
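As a rough sizing sketch, assuming (as in the single network example) that 5 addresses per network are reserved, you can compute the smallest subnet that yields a given number of usable addresses; over 1,000 usable addresses works out to a /22:

```python
def smallest_prefix(needed: int, reserved: int = 5) -> int:
    """Return the longest IPv4 prefix length (i.e. the smallest subnet)
    that leaves at least `needed` usable addresses after `reserved`
    are held back. Illustrative arithmetic only, not a Triton API."""
    for prefix in range(30, 0, -1):  # try the smallest subnets first
        if 2 ** (32 - prefix) - reserved >= needed:
            return prefix
    raise ValueError("no IPv4 prefix is large enough")

print(smallest_prefix(27))    # 27: a /27 has 32 addresses, 27 usable
print(smallest_prefix(1000))  # 22: a /22 has 1024 addresses, 1019 usable
```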

One network per client

Another aspect worth touching on is the case where one provides a service and has one's own notion of clients. For example, say you provide a platform where you manage multiple clients. A common thing to do would be to create a given VLAN or series of networks for each of the clients that you manage. This can be beneficial as it guarantees that clients' networks won't overlap with one another.

When following this pattern, starting with smaller networks and using some kind of numbering pattern can be beneficial. For example, if there are three different networks that you require for each client, then you might say that VLANs 1000, 2000, and 3000 are the starting points for each of those networks and that each client will always be assigned some id. For example, Acme, Inc. may have id 50, so its VLANs will be 1050, 2050, and 3050.
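The numbering scheme above is simple to express in code. The bases (1000, 2000, 3000) and the client id (50) come straight from the example; the range check is an added assumption to keep the results within the 12-bit VLAN id space:

```python
# Base VLANs for the three per-client networks, per the example.
VLAN_BASES = (1000, 2000, 3000)

def client_vlans(client_id: int) -> list[int]:
    """Offset each base VLAN by the client's id. With these bases,
    ids must stay under 1000 so every result fits in the valid
    VLAN id range (VLAN ids are 12 bits)."""
    if not 0 <= client_id < 1000:
        raise ValueError("client id out of range for these bases")
    return [base + client_id for base in VLAN_BASES]

print(client_vlans(50))  # Acme, Inc. with id 50 -> [1050, 2050, 3050]
```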

Default network

In every data center, a VLAN and network are created automatically for each user. This allows a user to get going and start creating instances right away. Triton also maintains a notion of a user's default private network. This is the network that will be used for a private address when one is required but none is specified.

The default can be updated, and the change will take effect for all instances created after that point; changing the default will not affect anything else.

When it comes to Docker containers, by default they are given their own interface on a single private network: the user's default network. If they end up exposing ports, then they'll also have a public interface made available.

Handling existing pre-fabric instances

Many instances, and broader services built around them, were created in the pre-fabric world. One of the challenges one faces is how to move from one of those worlds to the other.

We'll classify the networks that instances exist on today into four different categories: the public network, the data-center wide private network, private non-fabric VLANs, and fabric networks.

While the second and third network types are different, they can be thought of in a similar fashion.

Traditional instances have network interfaces on some subset of the first three. Standard instances on a fabric may have interfaces on the first and fourth types of network.

Because networks on a fabric are fundamentally different from the other kinds of networks, instances on one kind cannot reach instances on the other.

The solution is to make sure that instances that need to communicate have a pair of interfaces on the same kind of network. That means both instances need an interface on one of: the public network, the data-center wide private network, a private non-fabric VLAN, or a fabric network.

While fabrics are being rolled out to a data center, not all instances will be on machines capable of using fabrics. If you need new instances to be able to talk to existing ones, the simplest path is to add an extra interface, with an address on the data-center wide private network, to the instances that are on a fabric network.

Longer term, you'll instead be able to add a fabric network interface to existing instances.