Multi-node installation

Overview

This page explains how to install SaltStack Config on multiple nodes (servers) using the SaltStack Config installer. In this installation scenario, the end goal is to have four nodes, each with a different host function. Each node is also a minion to the master:

  • A Salt master node
  • A PostgreSQL database node
  • A Redis database node
  • A RaaS node, also known as the SaltStack Config node

Note

As part of VMware’s initiative to remove problematic terminology, the term Salt master will be replaced with Salt controller in SaltStack Config and related products and documentation. This terminology update may take a few release cycles before it is fully complete.

In the multi-node installation scenario, you run an orchestration highstate designed by SaltStack. The highstate runs on your master and sets up the multi-node environment. It installs the core SaltStack Config architecture on the three other nodes that will host PostgreSQL, Redis, and RaaS. For more information about how orchestration works, see Salt system architecture.

Note

It is possible to set up multiple masters or multiple RaaS nodes. It is also possible to run the Salt master service on one node, and combine two or more of the other services on a separate node. The steps to configure this kind of system architecture are not fully explained on this page. High availability or custom architecture requirements may require consultation services.

However, before setting up multiple nodes of the same type, you typically complete the standard multi-node installation and then configure the additional architecture later.

Tip

SaltStack Config is powered by Salt. Consult these guides to ensure your environment is following best practices when implementing Salt in your infrastructure:

Prerequisites

Before you begin the installation process, ensure you have read and completed the steps in all pre-installation pages:

Danger

For a multi-node installation, it is especially important to follow all the steps listed in the Install or upgrade Salt page. In particular, you must install the dependencies needed for the SaltStack Config installer on all four nodes in the installation. Otherwise, the multi-node installation will fail. Remediating a failed multi-node installation may require in-depth troubleshooting. If you need assistance, Contact Support.

Record key data about the four nodes

Before beginning the multi-node installation, record the following key data about each of the four nodes involved in the installation:

  • The IP addresses or DNS names
  • The minion IDs

Make sure that you clearly indicate which IP address and minion ID belongs to which host (the master node, the RaaS node, the PostgreSQL database node, the Redis database node).

As a best practice, verify that your IP addresses or DNS names are correct. Incorrect IP addresses or DNS names can cause the multi-node installation to fail.

Keep this data in an easily accessible record for your own reference. As you configure the orchestration, you need to input this data into several settings and variables in the configuration files. For that reason, it’s helpful to keep this record on hand throughout the multi-node installation.

Note

If you are in a virtualized environment, take care to specify the internal address, as opposed to the public address.
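For example, your record might look like the following. This is a hypothetical sketch; all minion IDs and addresses are placeholders that you replace with your own values.

```yaml
# Hypothetical record of the four nodes; replace every value with your own.
nodes:
  salt_master:
    minion_id: saltmaster-1
    address: 192.0.2.10
  postgresql:
    minion_id: postgres-database-1
    address: 192.0.2.11
  redis:
    minion_id: redis-database-1
    address: 192.0.2.12
  raas:
    minion_id: saltstack-enterprise-api-server-1
    address: 192.0.2.13
```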

Static vs. dynamic IP addresses

The Redis and PostgreSQL hosts need static IP addresses or DNS names, and the configuration files need to reference them. Depending on how the RaaS node is deployed, it might need a static IP address or DNS name as well. Dynamic IP addresses can change over time, breaking the references in your configuration files and, with them, your environment.
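If you use DNS names, you can confirm that each one resolves before referencing it in the configuration files. This is a sketch with placeholder hostnames; substitute your own:

```shell
# Check that each node's DNS name resolves before you reference it in the
# configuration files. The hostnames below are placeholders.
for host in postgres-database-1.example.com redis-database-1.example.com \
            saltstack-enterprise-api-server-1.example.com; do
    if getent hosts "$host" > /dev/null; then
        echo "$host resolves"
    else
        echo "WARNING: $host does not resolve"
    fi
done
```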

Setting a custom minion ID (optional)

A minion ID is a unique name given to each minion that is managed by a master. By default, the minion identifies itself to the master by the system’s hostname. However, you can assign custom IDs that are descriptive of their function or location within your network.

If you decide to customize your minion IDs, keep each ID brief but descriptive of its role. For example, you could use apache-server-1 for one of your web servers, or datacenter-3-rack-2 based on a machine's location in a datacenter. The goal is to make the names descriptive and helpful for future reference.

To declare a minion ID:

  1. In the minion’s terminal, navigate to the directory that contains the minion’s id.conf file. By default, this file is located at /etc/salt/minion.d/id.conf.

  2. Open the id.conf file in an editor. Change the id setting to your preferred minion ID. For example:

    id: postgres-database-1
    
  3. After changing a minion ID, the minion’s keys need to be accepted (or re-accepted) by the master. For specific instructions on setting up the keys, see Accept the minion keys on the master(s).
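The steps above can be sketched as follows. In this hedged example, CONF_DIR defaults to a scratch directory so the sketch is safe to run anywhere; on a real minion you would set CONF_DIR=/etc/salt/minion.d and run as root.

```shell
# Sketch of declaring a custom minion ID. CONF_DIR defaults to a scratch
# directory; on a real minion, use /etc/salt/minion.d instead.
CONF_DIR=${CONF_DIR:-$(mktemp -d)}
printf 'id: postgres-database-1\n' > "$CONF_DIR/id.conf"
cat "$CONF_DIR/id.conf"
# On a real minion, restart the service so it reconnects under the new ID,
# then re-accept its key on the master:
#   systemctl restart salt-minion
#   salt-key -a postgres-database-1
```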

Copy and edit the top state files

In this step, you copy the orchestration files provided with the SaltStack Config installer to the master node. Then, you edit the files to reference the three nodes for RaaS, the Redis database, and the PostgreSQL database.

Note

If the SaltStack Config files are not installed on your master, follow the instructions in Transfer and import files.

To copy and edit the orchestration configuration files:

  1. On the master, navigate to the sse-installer directory.

  2. Copy the pillar and state files from the sse-installer directory into the minion’s pillar_roots and file_roots using the following commands:

    sudo mkdir -p /srv/salt
    sudo cp -r salt/sse /srv/salt/
    sudo mkdir -p /srv/pillar
    sudo cp -r pillar/sse /srv/pillar/
    sudo cp pillar/top.sls /srv/pillar/
    sudo cp salt/top.sls /srv/salt/
    

    Warning

    These instructions make some assumptions that might not be true of your directory structure, especially if you have an existing Salt installation. The instructions assume:

    • That your master is using the default directory structure. If your directory structure has been modified, you may need to modify these instructions for your custom directory structure.
    • That you do not already have a folder named sse under either your pillar or configuration state root. If this folder exists, you may need to merge them manually.
    • That you do not already have a file named top.sls inside your pillar or salt directory. If this file exists, you may need to merge it with your existing file manually.
  3. In the /srv/pillar/ directory, you now have a file named top.sls that you copied over from the installation files in the previous step. Open this file in an editor.

  4. Edit this file to define the list of minion IDs (not the IP addresses or DNS names) for your PostgreSQL, Redis, RaaS, and master. Use the IDs that you recorded earlier as you worked through the Record key data about the four nodes step.

    For example:

    {# Pillar Top File #}
    
    {# Define SSE Servers #}
    
    {% load_yaml as sse_servers %}
      - postgres-database-1
      - redis-database-1
      - saltstack-enterprise-api-server-1
      - saltmaster-1
    {% endload %}
    
    base:
    
    {# Assign Pillar Data to SSE Servers #}
    {% for server in sse_servers %}
      '{{ server }}':
        - sse
    {% endfor %}
    
  5. In the /srv/salt/ directory, you now have a file named top.sls that you copied over in step 2. Open this file in an editor and verify that it matches the following:

    base:
    
      {# Target SSE Servers, according to Pillar data #}
      # SSE PostgreSQL Server
      'I@sse_pg_server:{{ grains.id }}':
        - sse.eapi_database
    
      # SSE Redis Server
      'I@sse_redis_server:{{ grains.id }}':
        - sse.eapi_cache
    
      # SSE eAPI Servers
      'I@sse_eapi_servers:{{ grains.id }}':
        - sse.eapi_service
    
      # SSE Salt Masters
      'I@sse_salt_masters:{{ grains.id }}':
        - sse.eapi_plugin
    

After editing the top state files, proceed to the next step.

Edit the SaltStack Config settings pillar file

In this step, you edit five different sections in the SaltStack Config settings pillar mapping file to provide the values that are appropriate for your environment. These settings will be used by the configuration state files to deploy and manage your SaltStack Config deployment.

To edit the SaltStack Config settings pillar file:

  1. On the master, navigate to the /srv/pillar/sse/ directory.

  2. Open the sse_settings.yaml file in an editor. Section 1 of this file contains four variables that correspond to the four nodes. Change the values of the four variables to the minion IDs (not the IP addresses or DNS names) for the corresponding nodes. Use the minion IDs that you recorded earlier as you worked through the Record key data about the four nodes step.

    For example:

    # PostgreSQL Server (Single value)
    pg_server: postgres-database-1
    
    # Redis Server (Single value)
    redis_server: redis-database-1
    
    # SaltStack Enterprise Servers (List one or more)
    eapi_servers:
      - saltstack-enterprise-api-server-1
    
    # Salt Masters (List one or more)
    salt_masters:
      - saltmaster-1
    

    Note

    The pg_server and redis_server variables take single values because most network configurations have only one PostgreSQL database and one Redis database. By contrast, the eapi_servers and salt_masters variables are formatted as lists because it is possible to have more than one RaaS node and more than one master.

  3. In Section 2 of this file, edit the variables to specify the endpoint and port of your PostgreSQL node:

    • pg_endpoint - Change the value to the IP address or DNS name (not the minion ID) of your PostgreSQL server. If you are in a virtualized environment, take care to specify the internal address, as opposed to the public address.
    • pg_port - The standard PostgreSQL port is provided, but may be overridden, if needed.
    • pg_username and pg_password - Enter the credentials for the user that the API (RaaS) will use to authenticate to PostgreSQL. This user is created when you run the configuration orchestration highstate.

    Note

    The variable is named pg_endpoint because some installations use a separate PostgreSQL server (or cluster) that is not managed by this installation process. If that is the case, do not apply the highstate to the PostgreSQL server during the Apply the highstates to the nodes step later in the process.

  4. Repeat the previous step to edit Section 3 of this file, but instead edit the corresponding variables to specify the endpoint and port of your Redis node.

  5. In Section 4 of this file, edit the variables related to the RaaS node:

    • If this is a fresh installation, do not change the default values for the eapi_username and eapi_password variables. During the configuration orchestration, the installation process establishes the database with these default credentials. It needs these credentials to connect through the eAPI service to establish your default Targets and Jobs in SaltStack Config. You will change the default password in a later post-installation step.

    • For the eapi_endpoint variable, change the value to the IP address or DNS (not the minion ID) of your RaaS node.

      Note

      The variable is named eapi_endpoint because some installations host multiple eAPI servers behind a load balancer.

    • The eapi_ssl_enabled variable is set to True by default. When set to True, SSL is enabled, and it is strongly recommended that you leave it enabled. SSL validation is not required by the installer, but is likely a security requirement in environments that host their own certificate authority.

    • The eapi_standalone variable is set to False by default. This variable provides direction to the configuration states if Pillar data is being used in a single-node installation scenario. In that scenario, all IP communication is directed to the loopback address. In the multi-node installation scenario, leave this set to False.

    • The eapi_failover_master variable is set to False by default. This variable supports deployments where masters (and minions) are operating in failover mode.

    • The eapi_key variable defines the encryption key that SaltStack Config uses to manage encrypted data in the PostgreSQL database. This key should be unique for each installation. A default is provided, but a custom key can be generated by running the following command in a separate terminal outside of the editor:

      openssl rand -hex 32
      
  6. In Section 5 of this file, edit the variables to add your unique customer identifiers:

    • The customer_id variable uniquely identifies a SaltStack deployment. It becomes the suffix of the raas_* schema name in the API (RaaS) database in PostgreSQL. A default is provided, but a custom value can be generated by running the following command in a separate terminal outside of the editor:

      cat /proc/sys/kernel/random/uuid
      
    • The cluster_id variable defines the ID for a set of masters when they are configured in either Active or Failover Multi-Master mode. This ID prevents minions that report to multiple masters from being counted more than once within SaltStack Config.

Save your changes to this file and proceed to the next section.
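The generation commands from steps 5 and 6 can be combined into a quick sketch that also sanity-checks the output formats before you paste the values into sse_settings.yaml:

```shell
# Generate a fresh eapi_key (32 random bytes as 64 hex characters) and a
# customer_id (UUID), then verify their formats before saving them.
EAPI_KEY=$(openssl rand -hex 32)
CUSTOMER_ID=$(cat /proc/sys/kernel/random/uuid)

echo "eapi_key:    $EAPI_KEY"
echo "customer_id: $CUSTOMER_ID"

echo "$EAPI_KEY"    | grep -Eq '^[0-9a-f]{64}$' && echo "eapi_key format OK"
echo "$CUSTOMER_ID" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' \
    && echo "customer_id format OK"
```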

Apply the highstates to the nodes

In this step, you refresh your system data and run the orchestration that configures all the components of SaltStack Config.

Danger

Before running the highstate, it is especially important to follow all the steps listed on the Install or upgrade Salt page. In particular, you must install the dependencies needed for the SaltStack Config installer on all four nodes in the installation. Otherwise, the multi-node installation will fail. Remediating a failed multi-node installation may require you to Contact Support.

The necessary dependencies are:

  • OpenSSL
  • Extra Packages for Enterprise Linux (EPEL)
  • Python cryptography
  • Python OpenSSL library

To apply the highstates:

  1. On the master, sync your grains to confirm that the master has the grain data needed for each minion. This step ensures that the pillar data is properly generated for SaltStack Config functionality.

    In the command that syncs the grains, you can target all minions, or you can pass a comma-separated list of the specific minion IDs for your nodes (including the master itself). For example:

    sudo salt \* saltutil.refresh_grains
    
    sudo salt -L 'salt-master-1,postgres-database-1,redis-database-1,saltstack-enterprise-api-server-1' saltutil.refresh_grains
    
  2. Refresh and confirm that each of the minions has received the pillar data defined in the sse_settings.yaml file and that it appears as expected.

    In the command that refreshes the pillar data, you can target all minions, or you can pass a comma-separated list of the specific minion IDs for your nodes (including the master itself). For example:

    sudo salt \* saltutil.refresh_pillar
    
    sudo salt -L 'salt-master-1,postgres-database-1,redis-database-1,saltstack-enterprise-api-server-1' saltutil.refresh_pillar
    
  3. Confirm that the return data for your pillar is correct:

    sudo salt \* pillar.items
    

    Verify that you see pillar data related to SaltStack Config.

    Note

    You could also target a specific minion’s pillar data to verify the pillar data has been refreshed.

  4. Run the command that applies the orchestration highstate to the PostgreSQL server. Use the minion ID that you recorded for the PostgreSQL server earlier as you worked through the Record key data about the four nodes step.

    For example:

    sudo salt postgres-database-1 state.highstate
    
  5. Repeat the previous step for each of the following servers, replacing the minion ID for each server:

    • The Redis node
    • The RaaS node
    • The master node

    Note

    During the initial application of the highstate to the master, you may see the following error message: Authentication error occurred.

    This error displays because the master has not yet authenticated to the RaaS node, but the Master Plugin installation state will restart the Salt master service and the issue will be resolved automatically.

If you encounter any other errors while running the highstates, refer to the Troubleshooting page or Contact Support.

Firewall permissions

For multi-node installations, ensure firewall access is allowed on the following ports from the following nodes:

Node            Default port    Is accessible by
PostgreSQL      5432            eAPI servers
Redis           6379            eAPI servers
eAPI endpoint   443             Masters; web-based interface users; remote systems calling the Enterprise API
Masters         4505/4506       All minions configured to use the related master
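After opening the firewall, you can spot-check connectivity from an eAPI node with a small sketch. It requires bash (for /dev/tcp) and the coreutils timeout command; the hostnames below are placeholders:

```shell
# Check that a host accepts TCP connections on a given port.
# Assumes bash (/dev/tcp) and coreutils timeout are available.
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port reachable"
    else
        echo "$host:$port NOT reachable"
    fi
}
# Placeholder hostnames; substitute your PostgreSQL and Redis nodes.
check_port postgres-database-1.example.com 5432
check_port redis-database-1.example.com 6379
```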

Next steps

Once the multi-node installation process is complete, you must complete several post-installation steps.

The first post-installation step is to install the license key. To begin the next post-installation step, see Install the license key.