As mentioned earlier, we have a 1TB laptop drive in each of our XU-4 Cloudshell nodes. We pre-format each drive with an ext4 filesystem, give it the label datavol, and want its contents to survive a reinstallation/redeployment of the XU-4 after it's been assembled. As a result, whenever we run our Ansible playbook, we want to ensure this filesystem is configured to mount at every boot.
We're also toying with the idea of adding an NFS or CIFS mount to each host to give access to centralised sets of tools and other resources. To achieve this, we could either duplicate the following mount task for an NFS volume, or use file templates to configure autofs on the nodes. At that point, we can re-run our playbook to update all of our nodes in one shot.
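As a sketch of the first option, duplicating the mount task for an NFS volume might look like the following. The server name, export path, and mount options here are placeholders for illustration, not values from our environment:

```yaml
# Hypothetical NFS mount for shared tools; adjust src and opts
# to match your own file server and export.
- name: Create mount point for shared tools volume
  file:
    path: /tools
    state: directory
    mode: 0755

- name: Configure NFS mount for shared tools and mount it
  mount:
    src: fileserver.example.com:/export/tools
    name: /tools
    fstype: nfs
    opts: ro,noatime
    dump: 0
    passno: 0
    state: mounted
```

The autofs alternative trades this always-mounted approach for on-demand mounts, at the cost of templating out the autofs map files.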
Add another include statement to the top-level Ansible playbook (ansible-playbook.yml) file to pull in the filesystem-mounts.yml file, which we will create with the following content:
- name: Create mount point for data volume
  file:
    path: /datavol
    state: directory
    mode: 0755

- name: Configure data volume mount and mount it
  mount:
    src: LABEL=datavol
    name: /datavol
    fstype: ext4
    opts: noatime,nodiratime
    dump: 0
    passno: 0
    state: mounted
There's not a lot here that's new or specific to the XU-4: we create a mountpoint directory in the first task, then add an fstab entry (and mount the filesystem) in the second. We use the filesystem label because we always label these filesystems the same way and won't see any conflicts in our environment. Mounting by label also guards against the device reordering that can happen on reboot, or when other hardware is occasionally plugged in.
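The top-level playbook that pulls in this task file might then look something like the following. The hosts pattern and the other include files shown are illustrative assumptions, not taken from our actual playbook:

```yaml
# ansible-playbook.yml (sketch): the earlier includes are placeholders
# for whatever task files the playbook already pulls in.
- hosts: all
  become: yes
  tasks:
    - include: base-config.yml       # hypothetical earlier include
    - include: filesystem-mounts.yml # the file we just created
```

Because the mount task uses state: mounted, re-running the playbook is idempotent: nodes with the entry already in fstab and the volume mounted are left untouched.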