Set up external file shares using GlusterFS

Splunk SOAR (On-premises) uses several volumes for storage and implements GlusterFS to provide scalable, secure file shares. You can put these volumes on their own server or on any server that has adequate storage and bandwidth.

Note: You can use other file systems to provide shared storage. Any file system that meets your organization's security and performance requirements is sufficient. You need to configure the required mounts and permissions. See Supported file systems and required directories.

You can run GlusterFS as an expandable cluster of servers that provides a single mount point for access. While you can run GlusterFS on a single server, three or more servers provide more options for redundancy and high availability.

Note: These instructions cover only configuring a single server and the required shares. To achieve high availability, data redundancy, and other features of GlusterFS, see the GlusterFS Documentation.
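
For reference, adding more servers later follows the standard GlusterFS pattern of probing peers and creating replicated volumes. The following is a minimal sketch only, using the hypothetical hostnames gluster1, gluster2, and gluster3 for three servers that have each completed the preparation steps in this topic; see the GlusterFS Documentation for the full procedure.
CODE
# On gluster1, add the other prepared servers to the trusted pool
gluster peer probe gluster2
gluster peer probe gluster3
# Create a volume that keeps one replica of the data on each server
gluster volume create apps replica 3 transport tcp gluster1:/data/gluster/apps gluster2:/data/gluster/apps gluster3:/data/gluster/apps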

Prepare the GlusterFS server

The steps to prepare the GlusterFS server differ slightly depending on what operating system you are using.

Prepare the GlusterFS server with CentOS 7

If you are using CentOS 7, complete the following steps to prepare the GlusterFS server.

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    CODE
    yum install -y wget curl chrony
  3. Configure chronyd to synchronize the system clock. Search for "chronyd" on access.redhat.com. For other Linux distributions, check the documentation for your specific distribution.
  4. Configure your firewall to allow access for Splunk SOAR (On-premises) nodes and other members of your GlusterFS cluster. For a complete list of ports, see Splunk SOAR (On-premises) ports and endpoints. For an example of enabling chronyd and opening the GlusterFS ports, see the sketch after these steps.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    CODE
    mkfs.xfs /dev/<device_name>
    mkdir -p /data/gluster
    echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
  6. Install the GlusterFS server.
    CODE
    yum update
    yum install centos-release-gluster
    yum install glusterfs-server
  7. Start the GlusterFS daemon and set it to start at boot.
    CODE
    systemctl start glusterd
    systemctl enable glusterd
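
The exact chronyd and firewall configuration in steps 3 and 4 depends on your environment. As one example only, the following sketch starts chronyd and opens the default GlusterFS management and brick port ranges with firewalld; confirm the ports against Splunk SOAR (On-premises) ports and endpoints before relying on it.
CODE
# Start time synchronization and enable it at boot
systemctl start chronyd
systemctl enable chronyd
# Open the default GlusterFS management and brick port ranges (verify against your port list)
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload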

Prepare the GlusterFS server with RHEL 7

If you are using RHEL 7, complete the following steps to prepare the GlusterFS server.

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    CODE
    yum install -y wget curl chrony
  3. Configure chronyd to synchronize the system clock. Search for "chronyd" on access.redhat.com. For other Linux distributions, check the documentation for your specific distribution.
  4. Configure your firewall to allow access for Splunk SOAR (On-premises) nodes and other members of your GlusterFS cluster. For a complete list of ports, see Splunk SOAR (On-premises) ports and endpoints.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    CODE
    mkfs.xfs /dev/<device_name>
    mkdir -p /data/gluster
    echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
  6. Create a new repository file, for example, /etc/yum.repos.d/CentOS-Gluster-9.repo, with the following content.
    CODE
    [gluster9]
    name=Gluster 9
    baseurl=https://vault.centos.org/centos/7/storage/$basearch/gluster-9/
    gpgcheck=1
    gpgkey=https://centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
    enabled=1
  7. Install the GlusterFS server.
    CODE
    yum update
    yum install glusterfs-server
  8. Start the GlusterFS daemon and set it to start at boot.
    CODE
    systemctl start glusterd
    systemctl enable glusterd
Note: It is possible to replace GlusterFS with Red Hat Gluster Storage. Search for "Gluster Storage" on the Red Hat website for instructions on how to configure it.

Prepare the GlusterFS server with RHEL 8

If you are using RHEL 8, complete the following steps to prepare the GlusterFS server.

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    CODE
    yum install -y wget curl chrony
  3. Configure chronyd to synchronize the system clock. Search for "chronyd" on access.redhat.com. For other Linux distributions, check the documentation for your specific distribution.
  4. Configure your firewall to allow access for Splunk SOAR (On-premises) nodes and other members of your GlusterFS cluster. For a complete list of ports, see Splunk SOAR (On-premises) ports and endpoints.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    CODE
    mkfs.xfs /dev/<device_name>
    mkdir -p /data/gluster
    echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
  6. Create a new repository file, for example, /etc/yum.repos.d/CentOS-Gluster-9.repo, with the following content.
    CODE
    [gluster9]
    name=Gluster 9
    baseurl=https://vault.centos.org/centos/8-stream/storage/$basearch/gluster-9/
    gpgcheck=1
    gpgkey=https://centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
    enabled=1
  7. Install the GlusterFS server.
    CODE
    yum update
    yum install https://vault.centos.org/centos/8-stream/PowerTools/x86_64/os/Packages/python3-pyxattr-0.5.3-18.el8.x86_64.rpm
    yum install glusterfs-server
  8. Start the GlusterFS daemon and set it to start at boot.
    CODE
    systemctl start glusterd
    systemctl enable glusterd
Note: It is possible to replace GlusterFS with Red Hat Gluster Storage. Search for "Gluster Storage" on the Red Hat website for instructions on how to configure it.

Prepare the GlusterFS server with RHEL 9

If you are using RHEL 9, complete the following steps to prepare the GlusterFS server.

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    CODE
    yum install -y wget curl chrony
  3. Configure chronyd to synchronize the system clock. Search for "chronyd" on access.redhat.com. For other Linux distributions, check the documentation for your specific distribution.
  4. Configure your firewall to allow access for Splunk SOAR (On-premises) nodes and other members of your GlusterFS cluster. For a complete list of ports, see Splunk SOAR (On-premises) ports and endpoints.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    CODE
    mkfs.xfs /dev/<device_name>
    mkdir -p /data/gluster
    echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
  6. Create a new repository file, for example, /etc/yum.repos.d/CentOS-Gluster-11.repo, with the following content.
    CODE
    [gluster11]
    name=Gluster 11
    baseurl=https://mirror.stream.centos.org/SIGs/9-stream/storage/x86_64/gluster-11
    gpgcheck=1
    gpgkey=https://centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
    enabled=1
  7. Install the GlusterFS server.
    CODE
    yum update
    yum install https://mirror.stream.centos.org/9-stream/CRB/x86_64/os/Packages/python3-pyxattr-0.7.2-4.el9.x86_64.rpm
    yum install glusterfs-server
  8. Start the GlusterFS daemon and set it to start at boot.
    CODE
    systemctl start glusterd
    systemctl enable glusterd
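
On any of these operating systems, you can confirm that the GlusterFS daemon is installed and running before you continue to the certificate and volume steps.
CODE
gluster --version
systemctl status glusterd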

Prepare TLS certificates

  1. Generate the TLS private key for GlusterFS.
    Note: On RHEL 9, store the keys and certificates in /etc/pki/tls instead of /etc/ssl.
    CODE
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
  2. Generate a self-signed .pem certificate for GlusterFS. You can use a certificate from a certificate authority (CA) instead of generating a self-signed certificate.
    CODE
    openssl req -new -x509 -days 3650 -key /etc/ssl/glusterfs.key -subj '/CN=gluster' -out /etc/ssl/glusterfs.pem
  3. Copy the glusterfs.pem file to a .ca file.
    CODE
    cp /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca
  4. Set the ownership of the glusterfs.key file and remove read, write, and execute permissions for other users.
    CODE
    chown <user>:<group> /etc/ssl/glusterfs.key
    chmod o-rwx /etc/ssl/glusterfs.key
  5. Create the directory and control file to make GlusterFS use TLS.
    CODE
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
  6. Archive the TLS configuration files and store the archive in a safe place.
    Note: You will need these files to connect client machines to the file share. For an example of copying the archive to a Splunk SOAR (On-premises) node, see the sketch after these steps.
    CODE
    tar -C /etc/ssl -cvzf glusterkeys.tgz glusterfs.ca glusterfs.key glusterfs.pem
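
How you distribute glusterkeys.tgz is up to you. As one example, assuming SSH access and placeholder user and hostname values, you can copy the archive directly to the /etc/ssl/ directory of a Splunk SOAR (On-premises) node.
CODE
scp glusterkeys.tgz <user>@<soar node hostname>:/etc/ssl/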

Configure the shared volumes

  1. Create the shared directories used by Splunk SOAR (On-premises).
    CODE
    cd /data/gluster/
    mkdir -p apps app_states scm tmp/shared vault
  2. Create the volumes in GlusterFS from the directories. Repeat for each volume: apps, app_states, scm, tmp, and vault. For a loop that runs steps 2 through 4 for every volume, see the sketch after this list.
    CODE
    gluster volume create <volume name> transport tcp <GlusterFS hostname>:/data/gluster/<volume name> force
  3. Activate SSL/TLS for each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    CODE
    gluster volume set <volume name> client.ssl on
    gluster volume set <volume name> server.ssl on
    gluster volume set <volume name> auth.ssl-allow '*'
  4. Start each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    CODE
    gluster volume start <volume name>
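
Because steps 2 through 4 repeat the same commands for every volume, you can optionally run them in a loop. This is a minimal sketch only; replace <GlusterFS hostname> with the hostname of your GlusterFS server before running it.
CODE
# Create, enable SSL/TLS on, and start each Splunk SOAR (On-premises) volume
for vol in apps app_states scm tmp vault; do
  gluster volume create $vol transport tcp <GlusterFS hostname>:/data/gluster/$vol force
  gluster volume set $vol client.ssl on
  gluster volume set $vol server.ssl on
  gluster volume set $vol auth.ssl-allow '*'
  gluster volume start $vol
done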

Configure Splunk SOAR (On-premises) to connect to the GlusterFS file shares

Follow these steps to connect your Splunk SOAR (On-premises) deployment to your GlusterFS file shares. If you are connecting a clustered Splunk SOAR (On-premises) deployment, repeat these steps for each SOAR cluster node.

Each Splunk SOAR (On-premises) node in your cluster must have the same TLS keys stored in /etc/ssl/. Make sure to use the keys generated during GlusterFS installation.

If you are using the Splunk SOAR (On-premises) web interface to add new cluster nodes, you will need to supply the TLS keys in Administration > Product Settings > Clustering.

  1. Create the directory and control file to make GlusterFS use TLS.
    CODE
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
  2. Copy your glusterkeys.tgz file to /etc/ssl/ on the Splunk SOAR (On-premises) instance.
  3. Extract the tar file.
    CODE
    tar xvzf glusterkeys.tgz
  4. Delete the glusterkeys.tgz file from /etc/ssl/.
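
Mounting the GlusterFS volumes in the next section requires the GlusterFS client on each Splunk SOAR (On-premises) node. If it is not already present, a typical installation on a CentOS or RHEL node, assuming the same Gluster package repository configured earlier, looks like the following.
CODE
yum install glusterfs glusterfs-fuse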

Sync Splunk SOAR (On-premises) cluster nodes to the shared volumes

Splunk SOAR (On-premises) nodes must sync their local files to your newly shared volumes. The local directories for apps, app_states, scm, tmp/shared, and vault contain files that need to be preserved for use by your Splunk SOAR (On-premises) instance or cluster.

CAUTION: In a clustered environment, data only needs to be synced from the first node. Syncing data from additional nodes will overwrite data from the first node.
  1. Stop Splunk SOAR (On-premises) services on each node of the cluster.
    CODE
    stop_phantom.sh
  2. Mount the GlusterFS volumes to a temporary directory.
    CODE
    mkdir -p /tmp/phantom/<volume>
    mount -t glusterfs <hostname of external file share>:<glusterfs volume name> /tmp/phantom/<volume>
    CODE
    mkdir -p /tmp/phantom/shared
    mount -t glusterfs <hostname of external file share>:tmp /tmp/phantom/shared
    Note: If you get the error message mount: unknown filesystem type 'glusterfs', the GlusterFS client is not installed. See Prepare the GlusterFS server.
  3. Sync local data to the temporary location.
    CODE
    rsync -ah --progress <path/to/local/volume>/ /tmp/phantom/<volume>/
    Sync the shared directory using this command.
    CODE
    rsync -ah --progress <path/to/local/volume>/tmp/shared/ /tmp/phantom/shared/
    Repeat for each volume: apps, app_states, scm, and shared.
  4. Sync the vault.
    CODE
    rsync -ah --exclude tmp --exclude chunks --progress <path/to/local/vault>/ /tmp/phantom/vault/
    Sync the vault separately because it often contains very large amounts of data.
  5. Unmount the temporary volumes. Repeat for each mount point: apps, app_states, scm, shared, and vault.
    CODE
    umount /tmp/phantom/<volume>
  6. Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Your fstab entries must not have line breaks.
    CODE
    <glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
  7. Mount all the volumes to make them available.
    CODE
    mount /<phantom_install_dir>/apps
    mount /<phantom_install_dir>/local_data/app_states
    mount /<phantom_install_dir>/scm
    mount /<phantom_install_dir>/tmp/shared
    mount /<phantom_install_dir>/vault
  8. Update the ownership of the mounted volumes.
    CODE
    chown <phantom user>:<phantom group> /<phantom_install_dir>/apps
    chown <phantom user>:<phantom group> /<phantom_install_dir>/local_data/app_states
    chown <phantom user>:<phantom group> /<phantom_install_dir>/scm
    chown <phantom user>:<phantom group> /<phantom_install_dir>/tmp/shared
    chown <phantom user>:<phantom group> /<phantom_install_dir>/vault
  9. Start Splunk SOAR (On-premises) services on all cluster nodes.
    CODE
    start_phantom.sh
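
After the services start, you can optionally confirm on each node that the shared volumes are mounted from the GlusterFS server.
CODE
mount | grep glusterfs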