Unified approach to complex environments

StepByStep (using CentOS-7.6)

Preparation of the master node

Install VirtualBox on the host machine, and install the VirtualBox Extension Pack. Set up a "Host-only" network; henceforth the physical machine is called hostsystem and is given the address 192.168.56.254. Also, ensure that the dhcp-server supplied by VirtualBox/Host Network Manager is disabled.
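If one prefers the command line to the Host Network Manager GUI, the network can also be created with VBoxManage. A minimal sketch, assuming the new interface ends up being named vboxnet0 (run on the physical machine):

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.254 --netmask 255.255.255.0
# make sure the VirtualBox dhcp-server on this network is disabled
VBoxManage dhcpserver modify --ifname vboxnet0 --disable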

Create a CD/DVD with a CentOS-7-minimal ISO, and use this to create the virtual instance of what will become the "virtual master" (to be called gandalf) in the cluster environment. During the setup of the virtual master:

During the actual installation of the OS on the virtual master gandalf:

If you are using VirtualBox, save an image of the virtual machine at this stage, CentOS-7.6_0. Alternatively, the image can be found here.
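For example, a snapshot can be taken from the command line on the physical machine (the VM name gandalf below is an assumption; use whatever name the VM was given in VirtualBox):

VBoxManage snapshot gandalf take CentOS-7.6_0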

The supplied tar-archive consists of the "Salt-tree" (under saltstack).

It is recommended that the files under the directory saltstack are placed under git, and also moved to some convenient place in the file system on the physical machine. Here they are moved to

/home/tegner/MyFiles/saltVirtDemo/saltstack
This path obviously needs to be changed, but by using a specific path it is possible to give specific commands (which also need to be changed) as examples. For the same reason it will also be assumed that these files belong to the specific user tegner on the physical machine (named hostsystem when communicating with the virtual machines).

Setting up repositories. Here this will be done on the physical machine, on which the virtual master is being prepared. Three repositories will be created, CentOS, EPEL, and Salt, and they can be created by commands along the lines of those below. Note that the commands below use a specific file structure, and it is important to modify the paths if one wants to use a different structure. It might also be required to use other mirrors, if the ones listed below do not work. Finally, it should again be noted that these commands should be executed on the physical machine:
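As a minimal sketch only (the mirror URLs and directory names below are assumptions and need to be adapted), CentOS and EPEL can for example be mirrored with rsync from a public mirror into the local structure, and the Salt repository can be synced with reposync in the same way as shown for zfs further down:

# EPEL 7 (keeps the Packages/<letter>/ layout referenced later in this guide)
mkdir -p /home/virt_disk/pub/Linux/distributions/epel/7/x86_64
rsync -av rsync://mirrors.kernel.org/fedora-epel/7/x86_64/ /home/virt_disk/pub/Linux/distributions/epel/7/x86_64/
# CentOS 7 base and updates
mkdir -p /home/virt_disk/pub/Linux/distributions/CentOS/7/{os,updates}/x86_64
rsync -av rsync://mirrors.kernel.org/centos/7/os/x86_64/ /home/virt_disk/pub/Linux/distributions/CentOS/7/os/x86_64/
rsync -av rsync://mirrors.kernel.org/centos/7/updates/x86_64/ /home/virt_disk/pub/Linux/distributions/CentOS/7/updates/x86_64/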

This setup is also used to test BeeGFS with ZFS, and for this to work the repositories for these two also need to be set up. If you do not want to do this, you have to remove all occurrences of the two from the top file. In order to set up the repository for BeeGFS one can, for example, follow steps along the lines sketched below:
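A minimal sketch, mirroring the zfs procedure below (the repo-file URL, BeeGFS version, and repo id are assumptions; check the BeeGFS download page for the correct ones):

mkdir beegfs-local-mirror
cd beegfs-local-mirror/
# download the BeeGFS yum repo file (URL and version are assumptions)
wget https://www.beegfs.io/release/beegfs_7_1/dists/beegfs-rhel7.repo
# sync the repository locally; the repo id may differ, check the downloaded .repo file
reposync -c beegfs-rhel7.repo --repo beegfs
# then move the synced directory under /home/virt_disk/pub/Linux, put the key there,
# and run "createrepo ." in it, analogous to the zfs steps below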

To set up the zfs repository, do for example:

mkdir my-local-mirror
cd my-local-mirror/
wget http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm
rpm2cpio zfs-release.el7_6.noarch.rpm | cpio -iudv
reposync -c etc/yum.repos.d/zfs.repo --repo zfs-kmod
Then move zfs-kmod under the directory:
/home/virt_disk/pub/Linux/zfs
Also put the GPG key there. Finally run createrepo . under
/home/virt_disk/pub/Linux/zfs/zfs-kmod
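Spelled out, and assuming the structure used in this guide, the steps above could look like this (the location of the extracted GPG key is an assumption based on the zfs-release package layout):

mkdir -p /home/virt_disk/pub/Linux/zfs
mv zfs-kmod /home/virt_disk/pub/Linux/zfs/
# the key extracted from zfs-release ends up under etc/pki/rpm-gpg/ in my-local-mirror
cp etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux /home/virt_disk/pub/Linux/zfs/
cd /home/virt_disk/pub/Linux/zfs/zfs-kmod
createrepo .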

Bring up the virtual machine, and log in to it. The commands that follow should be run on this virtual machine (gandalf) unless otherwise stated.

Return to the virtual machine and run:
cat > /etc/hosts << EOF
192.168.56.254 hostsystem
192.168.56.253 gandalf salt
EOF
That is, the physical machine is named "hostsystem" when communicating over the "Host-only" network.
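One can verify that the name resolution and the "Host-only" network work as expected with, for example:

ping -c 1 hostsystem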

Now it is time to manually install a few necessary packages on the virtual machine, and in order to do so one needs to update the files under /etc/yum.repos.d. This can be achieved by using the directory in the "Salt-tree":

/home/tegner/MyFiles/saltVirtDemo/saltstack/salt/states/tftp/files/yum.repos.d
The files in this directory are used during the kickstart-process for the stateful nodes, and the same files can be used in the manual work on the virtual master. Note that only the files in the supplied directory should exist there, and note also that the paths under yum.repos.d need to correspond to the chosen structure used for the local repositories synced above (on the physical machine). For example one can do:
cd /etc
mv yum.repos.d yum.repos.d_org
scp -r hostsystem:/home/tegner/MyFiles/saltVirtDemo/saltstack/salt/states/tftp/files/yum.repos.d .
rpm --import http://hostsystem/pub/Linux/salt/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub

Furthermore, once the system is "up" the repositories will be taken care of by Salt, and this manual step is only required during the setup phase.

It is also necessary to set up the physical machine so that it can be used as an http-server (systemctl start httpd) for the repositories. If the structure above is used this can be achieved by:

ln -s /home/virt_disk/pub /var/www/html/.
For the linking above to work, it might be necessary to modify the file system permissions so that access is not denied to any of the directories in the linked path above. If the physical machine is running a firewall one might have to run (on the physical machine):
sudo firewall-cmd --zone=FedoraWorkstation --add-service=http
or something similar (one can find the relevant zone by firewall-cmd --get-default-zone).
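A minimal sketch of the web-server side on the physical machine (assuming the /home/virt_disk structure above; if SELinux is enforcing on the physical machine its file contexts may need adjusting as well):

# install and start the web server
sudo yum -y install httpd
sudo systemctl enable --now httpd
# make sure httpd is allowed to traverse and read the linked tree
sudo chmod o+x /home /home/virt_disk
sudo chmod -R o+rX /home/virt_disk/pub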

Continue with:

yum -y update
reboot
The update-process can possibly modify yum.repos.d and it needs to be reverted to the correct state, i.e., only referring to the local repositories (as indicated above).
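Reverting can be done in the same way as the initial copy, for example:

cd /etc
rm -rf yum.repos.d
scp -r hostsystem:/home/tegner/MyFiles/saltVirtDemo/saltstack/salt/states/tftp/files/yum.repos.d .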

Once the system is up again:

yum -y install screen
yum erase kernel-3.10.0-957.el7.x86_64
systemctl disable firewalld
yum install http://hostsystem/pub/Linux/distributions/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
Note that at this stage SELinux is also disabled (modify the file /etc/selinux/config); this, as well as turning off the firewall, should only be done if you are in a "secure" environment. The kernel is erased in order to minimize the size (since this installation is used to build a template for the diskless machines). Note also that the kernel version will be different in other releases of CentOS (this one is 7.6).
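The SELinux change amounts to setting SELINUX=disabled in /etc/selinux/config, for example:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config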

The installation of the epel-release above will again "destroy" yum.repos.d; therefore apply the same fix as previously.

Reboot the virtual machine, and then continue (on gandalf, and now you can use screen):

yum -y install salt-minion
systemctl enable salt-minion
yum -y install rsync
yum clean all
mkdir -p /netNodes/templ
rsync -av --progress --exclude="/proc/*" --exclude="/sys/*" --exclude=/netNodes / /netNodes/templ/.
The last step above creates a template for the root file system for the diskless nodes. Also, if you are running virtual it might be a good idea to save the system as an image at this point, for example named CentOS-7.6_1.

At this point, the manual steps involved in the configuration are almost finished. The one thing that remains is to set up the salt-master and bring in the "Salt-tree" on gandalf, the virtual machine:

cd /srv
rsync -vau hostsystem:/home/tegner/MyFiles/saltVirtDemo/saltstack .
yum -y install salt-master
cp saltstack/roots.conf /etc/salt/master.d
systemctl enable salt-master
systemctl start salt-master
systemctl restart salt-minion
salt-key -A
In what follows Salt is used for everything required to bring up the chosen environment, and the specific details of how this is done are not described here, but can instead be found in the respective states, under the "Salt-tree". The specific "test environment" used in this case is configured in the file /srv/saltstack/salt/_grains/my_grains.py and described in Structure and salt implementation.

The steps required to bring up this environment are:

salt-call saltutil.sync_all
salt-call state.highstate
salt-call state.sls states.create_keys
salt-call state.sls states.diskless_setup
salt-call state.sls states.copy_file_tree
salt-call state.sls states.boot_files
Under the state create_keys ssh-keys are created on the master, as well as the file authorized_keys, in order to allow passwordless login from the master to the other virtual machines. To distribute this file to the other machines, it is placed in the "Salt-tree", under /srv/saltstack. If one has chosen to use git to handle the files, it might be a good idea to update the git archive at this point. If one instead is using rsync one might want to update the "Salt-tree" on the physical machine (hostsystem) by executing the command below on the virtual master, gandalf (note, an "n" flag is added to the command below; it should be removed once it is ascertained that the command works as intended):
rsync -vaun /srv/saltstack/ tegner@hostsystem:/home/tegner/MyFiles/saltVirtDemo/saltstack/

The state copy_file_tree is responsible for initializing the root file trees for the diskless machines; this means that if this state is executed, all changes introduced to the root file systems will be removed. If it is desired to keep the state for machines belonging to a specific role (e.g., faramir), or for a specific node (e.g., f001), this can be achieved by modifying the lists exclude_roles or exclude_nodes in that state.

The state boot_files generates the files necessary for booting the machines over tftp/PXE/dhcp. Those machines that are not diskless use kickstart to install the OS on the local hard drive, and as part of that process the corresponding "boot file" generated by the state boot_files is removed (if it were not, the machine would never get the chance to boot from the hard drive and would be constantly reinstalled). In the same way as for the state copy_file_tree, either roles or nodes can be excluded by modifying lists in the actual state. It is a rather "crude" solution; it would possibly be better to define the lists in grains?

In order to bring up the nodes the correct mac-addresses need to be inserted in the file:

/srv/saltstack/salt/_grains/my_grains.py
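If the nodes are VirtualBox machines, their MAC addresses can for example be read out on the physical machine (the VM name f001 below is only an illustration):

VBoxManage showvminfo f001 --machinereadable | grep macaddress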
And after that the changes need to be "executed":
salt-call saltutil.sync_all
salt-call state.highstate
After this the virtual nodes in the environment can all be started, but some steps remain, some of which are also applicable when applying the method to real hardware. Once the nodes are up, they are configured, according to the Salt-tree, by:
salt-key -A
salt '*' saltutil.sync_all
salt '*' state.highstate