Installing the Triton head node
When you install Triton DataCenter, the configuration program prompts for values to configure the system. This page explains the guided configuration process and references the values defined on the Installing Triton DataCenter page.
You can review the installation steps at Installing Triton DataCenter.
Initial boot
MNX strongly recommends that you use an internal port for the USB key. Using an internal port reduces the risk of the USB key being unseated from the port.
- Insert the USB key containing the installation media into a USB port.
- Boot the head node from the USB key.
Changing the console device
By default, Triton uses the second serial port (ttyb) for the SOL console.
- To change the default console device, press 5 at the Loader menu to display the Boot Options menu.
(SmartOS boot banner)
Welcome to SmartOS
1. Compute Node (PXE)
2. Boot 20190204T130507Z
3. [Esc]ape to loader prompt
4. Reboot
Options:
5. Configure Boot [O]ptions...
- Press 2 to cycle through the OS Console values.
(SmartOS boot banner)
Welcome to SmartOS
1. Back to Main Menu [Backspace]
Boot Options:
2. OS [C]onsole.......... text
3. [V]erbose............. Off
4. k[m]db................ Off
5. [D]ebug............... Off
6. [R]escue Mode......... Off
- Press 1 to return to the main menu.
- Press 2 to boot from the USB key.
For more information on the Boot Options menu, see Advanced head node boot options.
Note: If the system sits at a flashing cursor or does not show any activity for an extended period of time (greater than 20 minutes), you most likely need to redirect the console.
Triton Installer
Once the system has booted, the Triton installation program starts automatically. You are presented with a series of questions. The answers that you supply are used to generate the configuration file used for installation. If a default value is available, it will appear in [brackets].
The configuration of general variables uses information collected in Triton deployment planning.
Smart Data Center (SDC) Setup
Data Center Information http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
The following questions will be used to configure your headnode identity.
This identity information is used to uniquely identify your headnode as well
as help with management of distributed systems.
The data center *region* and *name* will be used in DNS names. Typically
the region will be a part of data center name, e.g. region_name=us-west,
datacenter_name=us-west-1, but this isn't required.
Entry | Description | Examples |
---|---|---|
Company name | The name of your company. | Acme, Inc. |
Data center region | The region in which the data center is located. | us-west |
Data center name | The name of the data center, in lowercase and without spaces. This field describes the collection of systems handled by the head node. | myregion-2 |
City and State | Identifies location of your data center. | San Francisco, CA |
The configuration of Admin networking variables uses information collected in Triton initial network configuration.
Smart Data Center (SDC) Setup
Networking http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Several applications will be made available on these networks using IP
addresses which are automatically incremented based on the headnode IP.
In order to determine what IP addresses have been assigned to SDC, you can
either review the configuration prior to its application, or you can run
`sdc-netinfo` after the install.
Press [enter] to continue
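As the screen above notes, you can review the IP addresses assigned to SDC after the install completes. A minimal usage sketch, run from the head node global zone (the output format varies by release):

```
headnode# sdc-netinfo
```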
Smart Data Center (SDC) Setup
Networking - Admin http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
The admin network is used for management traffic and other information that
flows between the Compute Nodes and the headnode in an SDC cluster. This
network will be used to automatically provision new compute nodes and there are
several application zones which are assigned sequential IP addresses on this
network. It is important that this network be used exclusively for SDC
management. Note that DHCP traffic will be present on this network following
the installation and that this network is connected in VLAN ACCESS mode only.
Number Link MAC Address State Network
1 bnx0 78:2b:cb:0a:77:e1 up -
2 bnx2 78:2b:cb:0a:77:e5 down -
3 bnx1 78:2b:cb:0a:77:e3 down -
4 bnx3 78:2b:cb:0a:77:e7 down -
5 igb0 00:1b:21:91:a5:e0 unknown -
6 igb2 00:1b:21:91:95:20 unknown -
7 igb1 00:1b:21:91:a5:e1 unknown -
8 igb3 00:1b:21:91:95:21 unknown -
Enter the number of the NIC for the *admin* interface: 1
(admin) headnode IP address: 10.0.3.7
(admin) headnode netmask [255.255.255.0]:
(admin) Zone's starting IP address [10.0.3.8]:
Note: If there is only one interface, the installer skips listing the NICs and prompting for a selection.
Entry | Description | Examples |
---|---|---|
number of NIC for the admin interface | The number of the NIC in the list of NICs. Each NIC is identified by its MAC address and its interface number. | Choose the number of the NIC that is connected to the private network. |
(admin) headnode IP address | The IP address of the global zone of the head node. Since the configuration program allocates the next 11 addresses, be sure to choose an initial address that allows for this. | 10.0.0.2 |
(admin) headnode netmask | The netmask that describes the address space of the admin network. | 255.255.255.0 |
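For example, with the sample values shown above, the head node's global zone takes 10.0.3.7 and the configuration program reserves the next 11 addresses, 10.0.3.8 through 10.0.3.18, for the core service zones.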
Configuration of External networking variables uses information collected in Triton initial network configuration.
Smart Data Center (SDC) Setup
Networking - External http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
The external network is used by the headnode and its applications to connect to
external networks. That is, it can be used to communicate with either the
Internet, an intranet, or any other WAN. This is optional when your system does
not need access to an external network, or where you want to connect to an
external network later.
Add external network now? [Y/n]:
Number Link MAC Address State Network
1 bnx0 78:2b:cb:0a:77:e1 up *admin*
2 bnx2 78:2b:cb:0a:77:e5 down -
3 bnx1 78:2b:cb:0a:77:e3 down -
4 bnx3 78:2b:cb:0a:77:e7 down -
5 igb0 00:1b:21:91:a5:e0 unknown -
6 igb2 00:1b:21:91:95:20 unknown -
7 igb1 00:1b:21:91:a5:e1 unknown -
8 igb3 00:1b:21:91:95:21 unknown -
Enter the number of the NIC for the *external* interface: 5
(external) headnode IP address: 151.1.224.130
(external) headnode netmask [255.255.255.0]: 255.255.255.192
(external) gateway IP address: 151.1.224.129
(external) VLAN ID [press enter for none]: 102
Starting provisionable IP address [151.1.224.10]:
Ending provisionable IP address [151.1.224.250]:
Note: If there is only one interface, the installer skips listing the NICs and prompting for a selection.
Entry | Description | Examples |
---|---|---|
number of NIC for the external interface | The number of the NIC in the list of NICs. Each NIC is identified by its MAC address and its interface number. | Choose the number of the NIC that is connected to the external network. |
(external) headnode IP address | The IP address of the head node on the external network. | 151.1.224.7 |
(external) headnode netmask | The netmask that describes the address space of the external network. | 255.255.255.0 |
(external) VLAN ID [press enter for none]: | If your external network uses VLANs, provide its number here. | Press [enter] for no VLAN; otherwise an integer from 1 to 4095 |
Starting provisionable IP address []: | The first available IP address to assign to a newly provisioned instance. This address must be in the network defined by the external network IP address and the external netmask. | 151.1.224.10 |
Ending provisionable IP address []: | The last available IP address to assign to a newly provisioned instance. This address must be in the network defined by the external network IP address and the external netmask. | 151.1.224.250 |
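As a worked example of this constraint: a head node external address of 151.1.224.130 with netmask 255.255.255.192 implies the network 151.1.224.128/26, so a provisionable range on that network would need to fall between 151.1.224.129 and 151.1.224.190.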
The default gateway determines the network that the head node zones use to connect to outside networks.
Smart Data Center (SDC) Setup
Networking - Continued http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
The default gateway will determine which router will be used to connect to
other networks. This will almost certainly be the router connected to your
'External' network. Use 'none' if you have no gateway.
Enter the default gateway IP [151.1.224.129]:
The DNS servers set here will be used to provide name resolution abilities to
the SDC cluster itself. These will also be default DNS servers for zones
provisioned on the 'external' network.
Enter the Primary DNS server IP [8.8.8.8]:
Checking connectivity...OK
Enter the Secondary DNS server IP [8.8.4.4]:
Checking connectivity...OK
Enter the headnode domain name: example.com
Default DNS search domain: example.com
By default the headnode acts as an NTP server for the admin network. You can
set the headnode to be an NTP client to synchronize to another NTP server.
Enter an NTP server IP address or hostname [0.smartos.pool.ntp.org]:
Checking 0.smartos.pool.ntp.org connectivity...OK
Note: Connectivity tests are performed during this stage. The DNS and NTP servers MUST be accessible.
Entry | Description | Examples |
---|---|---|
the default gateway IP | The IP address of the gateway. In most cases, this is the address of a router on the external network. | 10.88.88.2 |
Primary and Secondary DNS server IP | The IP addresses of the DNS servers. The defaults give the addresses of Google Public DNS. You can use these or your own DNS servers. | |
headnode domain name | The domain name of the head node. This value is used for the default support email address and for the CAPI email address. Do not use spaces or uppercase letters. | example.com |
default DNS search domain | The default domain name to add to DNS searches. Do not use spaces or uppercase letters. | example.com |
NTP server address or host name | The address of an NTP server. You can accept the default. Do not use spaces or uppercase letters. | See the NTP Pool Project for information on selecting an NTP server close to your data center. |
Configuration of general variables uses information collected in Triton deployment planning.
Smart Data Center (SDC) Setup
Account Information http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
There are two primary accounts for managing a Smart Data Center. These are
'admin', and 'root'. Each account can have a unique password. Most of the
interaction you will have with SDC will be using the 'admin' user, unless
otherwise specified. In addition, SDC has the ability to send notification
emails to a specific address. Each of these values will be configured below.
Enter root password:
Confirm password:
Enter admin password:
Confirm password:
Administrator email goes to [root@localhost]:
Support email should appear from [help@mnxsolutions.com]:
Entry | Description | Examples |
---|---|---|
root password | The password used to access the root account on the head node and compute nodes. | swordfish |
admin password | The password used to access the admin account. | your_dogs_name |
Administrator email goes to | The address that receives administrator mail. | admin@example.com |
Support email should appear from | The "From" address for mail generated by SDC. | support@example.com |
Smart Data Center (SDC) Setup
Verify Configuration http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Share usage, health, and hardware data about your data center with
Joyent to help us make SmartDataCenter better.
Enable telemetry [false]:
After the configuration program runs, it gives a summary of all the entries made and asks for confirmation that they are correct.
Smart Data Center (SDC) Setup
Verify Configuration http://docs.tritondatacenter.com/sdc7
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Company name: Joyent Inc.
Datacenter Region: eu-central, Name: eu-central-1, Location: Milano, IT
Email Admin Address: root@localhost, From: support@example.com
Domain name: example.com, Gateway IP address: 151.1.224.129
Net MAC IP addr. Netmask Gateway VLAN
Admin 78:2b:cb:0a:77:e1 10.0.3.7 255.255.255.0 none none
External 00:1b:21:91:a5:e0 151.1.224.130 255.255.255.192 151.1.224.129 102
Admin net zone IP addresses start at: 10.0.3.8
Provisionable IP range: 151.1.224.10 - 151.1.224.250
DNS Servers: (8.8.8.8, 8.8.4.4), Search Domain: example.com
NTP servers: 151.1.135.200
Is this correct? [y]:
Your configuration is about to be applied.
Would you like to edit the final configuration file? [n]:
Entry | Description |
---|---|
Is this correct? | Answering no restarts the installer configuration program, using the previous entries as defaults. Answering yes confirms the configuration and continues to the next prompt. |
Would you like to edit the final configuration file? | Answering no ends the configuration program, and head node setup begins. Answering yes loads the configuration file into the vi editor; when you quit the editor, setup begins. You must select this option if you are configuring link aggregation for the head node. |
Configuration of head node link aggregation
This section explains how to configure the Triton head node to use link aggregation. If link aggregation is not being used for the head node, skip this step.
Adding link aggregation to the head node currently requires editing the configuration file. It assumes that you answered Y to the Would you like to edit the final configuration file? question in the installer process. In a future update, the installer will render this step unnecessary.
Adding aggregation to the admin interface
- Search for the line beginning with `admin_nic=XX:XX:XX:XX:XX:XX`, where XX:XX:XX:XX:XX:XX is the MAC address of one half of the aggregate.
- Replace it with `admin_nic=aggr1`.
- Add the line `aggr1_aggr=XX:XX:XX:XX:XX:XX,YY:YY:YY:YY:YY:YY`, where the MAC addresses provided are the two interfaces participating in the aggregation.
- Add the line `aggr1_lacp_mode=active` or `aggr1_lacp_mode=passive`, depending on the mode you are using. The mode must agree with the switch configuration.
Adding aggregation to the external interface
- Search for the line beginning with `external_nic=QQ:QQ:QQ:QQ:QQ:QQ`, where QQ:QQ:QQ:QQ:QQ:QQ is the MAC address of one half of the aggregate.
- Replace it with `external_nic=aggr2`.
- Add the line `aggr2_aggr=QQ:QQ:QQ:QQ:QQ:QQ,RR:RR:RR:RR:RR:RR`, where the MAC addresses provided are the two interfaces participating in the aggregation.
- Add the line `aggr2_lacp_mode=active` or `aggr2_lacp_mode=passive`, depending on the mode you are using. The mode must agree with the switch configuration.
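For reference, here is a minimal sketch of what the combined aggregation entries might look like in the configuration file. The MAC addresses are placeholders; substitute the MACs of your physical NICs, and choose the LACP mode that matches your switch configuration:

```
# Sketch of link-aggregation entries in the head node config file.
# MAC addresses below are placeholders.
admin_nic=aggr1
aggr1_aggr=00:0c:29:aa:bb:01,00:0c:29:aa:bb:02
aggr1_lacp_mode=active

external_nic=aggr2
aggr2_aggr=00:0c:29:aa:bb:03,00:0c:29:aa:bb:04
aggr2_lacp_mode=active
```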
Once you have made the changes, save the file and quit the editor.
Adding link aggregation to the head node post-setup
You can add link aggregation after setup. To do so, you will need to mount the USB key and edit the configuration file directly:
- Mount the USB key using the command:
headnode# sdc-usbkey mount
- Bring up the `config` file in the editor using the command:
headnode# vi /mnt/usbkey/config
Adding aggregation to the admin interface
- Search for the line beginning with `admin_nic=XX:XX:XX:XX:XX:XX`, where XX:XX:XX:XX:XX:XX is the MAC address of one half of the aggregate.
- Replace it with `admin_nic=aggr1`.
- Add the line `aggr1_aggr=XX:XX:XX:XX:XX:XX,YY:YY:YY:YY:YY:YY`, where the MAC addresses provided are the two interfaces participating in the aggregation.
- Add the line `aggr1_lacp_mode=active` or `aggr1_lacp_mode=passive`, depending on the mode you are using. The mode must agree with the switch configuration.
Adding aggregation to the external interface
- Search for the line beginning with `external_nic=QQ:QQ:QQ:QQ:QQ:QQ`, where QQ:QQ:QQ:QQ:QQ:QQ is the MAC address of one half of the aggregate.
- Replace it with `external_nic=aggr2`.
- Add the line `aggr2_aggr=QQ:QQ:QQ:QQ:QQ:QQ,RR:RR:RR:RR:RR:RR`, where the MAC addresses provided are the two interfaces participating in the aggregation.
- Add the line `aggr2_lacp_mode=active` or `aggr2_lacp_mode=passive`, depending on the mode you are using. The mode must agree with the switch configuration.
- Once you have made the changes, save the file and exit the editor.
- Unmount the USB key:
headnode# sdc-usbkey unmount
- Reboot the head node. Important: Rebooting impacts access to the core services on the head node, including orchestration.
headnode# reboot
Once the head node reboots, the aggregate bundles will be active.
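To confirm that the aggregations came up after the reboot, you can inspect the data links from the global zone. A quick check, assuming the standard SmartOS `dladm` tooling:

```
headnode# dladm show-aggr
```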
Setup and reboots
Once you have finished the installation steps, the head node will reboot in order to apply your configuration.
The headnode will now finish configuration and reboot. Please wait...
2014-02-17T15:26:28.853517+00:00 headnode rsyslogd3: No files configured to be monitored [try http://www.rsyslog.com/e/-3 ]
2014-02-17T15:26:28.855293+00:00 headnode genunix: [ID 540533 kern.notice] #015SunOS Release 5.11 Version joyent_20150820-20150829T195911Z 64-bit
2014-02-17T15:26:28.855307+00:00 headnode genunix: [ID 588371 kern.notice] Copyright (c) 2010-2014, Joyent Inc. All rights reserved.
2014-02-17T15:26:28.855517+00:00 headnode acpica: [ID 361365 kern.notice] ACPI: RSDP f11a0 00024 (v2 DELL )
2014-02-17T15:26:28.855522+00:00 headnode acpica: [ID 135650 kern.notice] ACPI: XSDT f12a4 0009C (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855528+00:00 headnode acpica: [ID 473354 kern.notice] ACPI: FACP 7f3b3f9c 000F4 (v3 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855534+00:00 headnode acpica: [ID 836996 kern.notice] ACPI: DSDT 7f38f000 03D72 (v1 DELL PE_SC3 00000001 INTL 20050624)
2014-02-17T15:26:28.855540+00:00 headnode acpica: [ID 871577 kern.notice] ACPI: FACS 7f3b6000 00040
2014-02-17T15:26:28.855546+00:00 headnode acpica: [ID 233916 kern.notice] ACPI: APIC 7f3b3478 0015E (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855552+00:00 headnode acpica: [ID 218462 kern.notice] ACPI: SPCR 7f3b35d8 00050 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855558+00:00 headnode acpica: [ID 358574 kern.notice] ACPI: HPET 7f3b362c 00038 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855564+00:00 headnode acpica: [ID 558911 kern.notice] ACPI: DMAR 7f3b3668 001C0 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855570+00:00 headnode acpica: [ID 848976 kern.notice] ACPI: MCFG 7f3b38c4 0003C (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855580+00:00 headnode acpica: [ID 423410 kern.notice] ACPI: WD__ 7f3b3904 00134 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855587+00:00 headnode acpica: [ID 819069 kern.notice] ACPI: SLIC 7f3b3a3c 00024 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855594+00:00 headnode acpica: [ID 340909 kern.notice] ACPI: ERST 7f392ef4 00270 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855600+00:00 headnode acpica: [ID 652589 kern.notice] ACPI: HEST 7f393164 003A8 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855612+00:00 headnode acpica: [ID 301466 kern.notice] ACPI: BERT 7f392d74 00030 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855624+00:00 headnode acpica: [ID 500178 kern.notice] ACPI: EINJ 7f392da4 00150 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855636+00:00 headnode acpica: [ID 404043 kern.notice] ACPI: SRAT 7f3b3bc0 00370 (v1 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855644+00:00 headnode acpica: [ID 777293 kern.notice] ACPI: TCPA 7f3b3f34 00064 (v2 DELL PE_SC3 00000001 DELL 00000001)
2014-02-17T15:26:28.855660+00:00 headnode acpica: [ID 797947 kern.notice] ACPI: SSDT 7f3b7000 03E04 (v1 INTEL PPM RCM 80000001 INTL 20061109)
2014-02-17T15:26:28+00:00 headnode savecore: [ID 467324 auth.error] open(""): No such file or directory
2014-02-17T15:26:28+00:00 headnode savecore: [ID 467324 auth.error] open(""): No such file or directory
(Joyent ASCII banner)
Joyent Live Image v0.147+
build: 20150820T062843Z
headnode ttyb login:
creating pool: zones done
adding volume: dump done
adding volume: config done
adding volume: usbkey done
adding volume: cores done
adding volume: opt done
adding volume: var done
adding volume: swap done
The `...headnode acpica...` messages are hardware dependent and may not appear on your systems.
There will be a pause of several minutes at this point as data is read off the USB key.
The head node then reboots and commences setup of the Triton software and services. This process takes between 10 and 20 minutes. During this time, you will see messages as the core service images are imported and the zones are created.
SunOS Release 5.11 Version joyent_20150820-20150829T195911Z 64-bit
Copyright (c) 2010-2014, Joyent Inc. All rights reserved.
(Joyent ASCII banner)
Joyent Live Image v0.147+
build: 20150820T062843Z
--> Welcome to SDC7! <--
preparing for setup... done (0s)
installing tools to /opt/smartdc/bin... done (0s)
installing sdcadm done (3s)
installing agents-master-20150820-20150829t073457z-g77ec6fd.sh... done (92s)
importing: assets-zfs-master-20150820-20150829t194437z-g1dc8d46 done (58s)
creating zone assets... done (11s)
importing: sapi-zfs-master-20150820-20150829t200202z-g56a7eec done (4s)
creating zone sapi... done (11s)
importing: binder-zfs-master-20150820-20150829t230454z-gd954fc2 done (7s)
creating zone binder... done (14s)
importing: manatee-zfs-master-20150820-20150829t222942z-gdd0afa6 done (8s)
creating zone manatee... done (18s)
...snip...
creating zone fwapi... done (14s)
importing: vmapi-zfs-master-20150820-20150829t001019z-gedd83f7 done (4s)
creating zone vmapi... done (14s)
importing: adminui-zfs-master-20150820-20150829t171315z-g5fd1444 done (4s)
creating zone adminui... done (21s)
completing setup... done (84s)
==> Setup complete (in 723 seconds). Press [enter] to get login prompt.
At this point you can press Enter and log into the head node.
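After logging in, you may want to confirm that the core services started cleanly. One way to do this, assuming a current `sdcadm`, is its health-check subcommand:

```
headnode# sdcadm check-health
```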
Simultaneous multi-threading setting
By default, all newly installed Triton systems have simultaneous multi-threading enabled. You can modify this default by setting the appropriate server parameter, if necessary.
Adding external access to adminui and imgapi
The final step in the initial head node installation and configuration process is to add external access to the adminui and imgapi core service zones. Adding an external NIC to the adminui zone allows access to the Operations Portal from the external network. Adding an external NIC to the imgapi zone allows images to be downloaded from the Joyent update servers.
To add these interfaces, run `sdcadm post-setup` as shown in the following example:
headnode# sdcadm post-setup common-external-nics
Once this command completes, you will need to identify the IP address assigned to the adminui service. You'll use this IP address to connect to the Operations Portal.
To find the IP address assigned to the adminui service, run the command:
headnode# sdc-vmadm ips -p $(sdc-vmname adminui)
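For convenience, you can capture that address and build the portal URL from it. A hypothetical snippet (the Operations Portal is served over HTTPS; if the zone has multiple NICs, use the external address):

```
headnode# ADMINUI_IP=$(sdc-vmadm ips -p $(sdc-vmname adminui))
headnode# echo "Operations Portal: https://${ADMINUI_IP}"
```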
Adding the cloudapi core service zone
In order to use either the `smartdc` or `triton` command-line tools, you will need to install the cloudapi zone. The cloudapi zone is also required if you will be running the user portal.
You can install the cloudapi zone using the `sdcadm post-setup` command:
headnode# sdcadm post-setup cloudapi
Once the `sdcadm post-setup` command completes, you will need to identify the IP address assigned to the cloudapi service in order to connect to the correct API endpoint.
To find the IP address assigned to the cloudapi service, run the command:
headnode# sdc-vmadm ips -p $(sdc-vmname cloudapi)
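As with adminui, a hypothetical snippet to derive the API endpoint URL (CloudAPI is likewise served over HTTPS; pick the external address if several are listed):

```
headnode# CLOUDAPI_IP=$(sdc-vmadm ips -p $(sdc-vmname cloudapi))
headnode# echo "CloudAPI endpoint: https://${CLOUDAPI_IP}"
```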
Adding proxy support to Triton
Triton supports the use of both authenticated and unauthenticated proxies. Visit the Configuring Triton to use a proxy document for setup details.
Linking Triton data centers
Triton supports UFDS linking, which allows two data centers to share key user data such as SSH keys, passwords, and RBAC information. UFDS or data center linking must be done after installing the head node in the second and subsequent Triton installations.
To link two data centers, follow the instructions in Linking Triton data centers.
Important notes:
- The Triton installer will set the hostname of the server functioning as the head node to `headnode`. This should not be changed, because doing so will break key scripts and processes.
- By default, the Triton installer will set your prompt to include the name of the data center. This enables clients with multiple data centers to quickly determine which head node they are logged into.
- It is perfectly acceptable to add external DNS aliases to your head node as required.