Managing Linux servers with Ansible

Ansible is an open source configuration management and automation tool sponsored by Red Hat. Ansible lets you define the state your servers should be in using YAML, then creates that state over SSH. For example, the state might be that the Apache web server should be present and enabled.

The great thing about Ansible is that if the server is already in the state you’ve defined, nothing happens. Ansible won’t try to install or change anything. This property is called idempotence.

In this post I’m going to show how to set up Ansible to manage Linux servers.

I have an Enterprise Linux 8 installation for the control node, and I’ll be configuring cloned machines in VirtualBox to run Ansible against. Make sure you have at least two Linux machines set up that can reach each other over the network. If you don’t already have Linux installed, you can follow my article to install Enterprise Linux.

I also recommend generating SSH keys for passwordless remote access. This isn’t strictly necessary, but I get bored of typing my password all the time. On your control node, generate an SSH key with ssh-keygen.

$ ssh-keygen

You can hit enter for each of the prompts when generating the key.

Once the key is ready, copy it to the remote host. In my case my remote host is called elhost2, which I’ve configured in /etc/hosts.

$ ssh-copy-id dave@elhost2

Substitute your own username on the remote server, unless your name is also Dave, in which case you can leave it.
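To confirm that key-based login works, try connecting; assuming the same dave@elhost2 example, you should land in a shell without being asked for a password.

$ ssh dave@elhost2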

The user account on the remote servers needs sudo privileges so Ansible can perform privileged tasks. On my remote host, the user account I’m using has been added to the wheel group, and I’ve allowed the wheel group to run sudo commands without a password by editing the sudoers file with visudo.
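For reference, the relevant sudoers rule (set via visudo, assuming your user is in the wheel group) looks like this. It ships commented out in the default Enterprise Linux sudoers file.

%wheel ALL=(ALL) NOPASSWD: ALL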

The AlmaLinux 8 installation that I’m using for the control node doesn’t have Ansible installed by default, so we’ll set that up first. Note that we don’t need to install anything on the remote hosts, which is one of the benefits of Ansible.

On your control node, which in my case is the first virtual machine with the hostname elhost1, install the following packages.

$ sudo dnf install epel-release
$ sudo dnf install ansible

EPEL (Extra Packages for Enterprise Linux) is a repository that makes additional software packages available that aren’t bundled with Enterprise Linux by default, including, for our purposes, Ansible.

Once Ansible is installed, we can test that it’s working correctly by trying to communicate with the remote host. Ansible has two main ways of interacting with hosts: ad-hoc commands run from the command line, and playbooks. Ad-hoc commands are fine for quick tasks or for testing something, but playbooks are the recommended approach in most cases.

For the sake of confirming that Ansible is working and we can reach the remote host, let’s run an ad-hoc command using the setup module.

First, create a file to store your host inventory. An inventory file is basically just a list of hosts that Ansible can work with. Hosts can be listed using either IP address or hostnames and can also be grouped together for different purposes. Ignoring that for a second, we’ll start with a simple hosts file with one IP address for the remote host we’re configuring.

In your user’s home directory, create an Ansible project directory and an empty file called hosts.

$ mkdir ansible
$ cd ansible
$ touch hosts

Open the hosts file and insert the one lonely IP address we’re going to talk to. In my case it’s 10.0.2.25. Again, you can use hostnames instead if your /etc/hosts file is set up for name resolution.
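At this point my hosts file contains just this single line (substitute your own host’s IP or name):

10.0.2.25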

Now we can run our first ad-hoc command.

$ ansible -i hosts -m setup all

Note the arguments being passed to ansible. The ‘-i hosts‘ argument loads the inventory file we created; the word ‘hosts’ here is the name of the inventory file. If you called your inventory file ‘rockinlinuxservers.txt’ you would type ‘-i rockinlinuxservers.txt’.

The -m setup argument loads the setup module, which gathers facts about the host and displays them in JSON format.

If that worked and Ansible is able to run you should see a whole bunch of JSON spill down the terminal. This is the output of the setup module running against the remote host in the inventory file. Feel free to look over it to see what information Ansible was able to get.
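If the full fact dump is too noisy, the setup module accepts a filter argument to narrow the output; you can also use the ping module for a simple connectivity check. For example:

$ ansible -i hosts -m setup -a 'filter=ansible_distribution*' all
$ ansible -i hosts -m ping all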

Now that I can communicate with the server using Ansible, I’m going to deploy the Apache web server.

As mentioned, the previous setup command was an ad-hoc Ansible command; ad-hoc commands are essentially for running one-off tasks across your fleet. Normally you’d define your tasks in a playbook and have Ansible run those, which ensures the state is always how you expect it to be and also lets you store your tasks in version control to track changes.

A playbook is a text file with the .yml or .yaml extension, structured in a specific way. I won’t go into what YAML is here because it’s pretty easy to understand just by looking at the code.

To install the Apache web server, create a new file in your Ansible project directory called web-server.yml and add the following:

---
- name: Install Apache
  yum:
    name: httpd
    state: present

This task instructs Ansible to use the yum module, the package manager module for Red Hat-based Linux systems, to install the httpd package. Setting state to present ensures that if the server doesn’t have Apache installed, yum will install it.
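As an aside, the yum module’s name parameter also accepts a list, so related packages can be installed in one task. A hypothetical example (mod_ssl is just an illustration, it isn’t needed for this post):

- name: Install Apache and mod_ssl
  yum:
    name:
      - httpd
      - mod_ssl
    state: present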

Once it’s installed we want to make sure it’s running, so we define an additional task beneath the previous one in the same YAML file.

- name: Start apache
  service:
    name: httpd
    state: started
    enabled: yes

This task tells systemd to ensure the httpd service is started and enabled at boot.

How you structure your Ansible project depends on the scope of the tasks you need completed. For small jobs like this a simple playbook is enough, but for larger installations with many tasks you might consider using Ansible roles. Roles are a bit beyond what I want to discuss here, so I’ll stick with a simple playbook. The complete playbook might look like this:

---
- hosts: www
  become: yes

  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present

    - name: Start Apache
      service:
        name: httpd
        state: started
        enabled: yes

YAML files start with three dashes. On the next line we define which hosts the following tasks should run on; in this example the hosts group is www, which we haven’t created in the inventory file yet. The become: yes line tells Ansible to elevate privileges, just like typing sudo before a command.

Open the hosts inventory file and update it to look like this:

[www]
10.0.2.25

The [www] defines the host group. In large environments you’d likely have your hosts split into multiple groups depending on the services that should be running.
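For example, a larger inventory might look something like this (the db group and its addresses are made up for illustration):

[www]
10.0.2.25

[db]
10.0.2.30
10.0.2.31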

Then run the playbook with the following command:

$ ansible-playbook -i hosts web-server.yml 

The ansible-playbook command is used instead of the ansible command we ran previously, and instead of passing -m to load a module we pass the name of the playbook, in this case web-server.yml.

Ansible will read the playbook and reach out to each host in the inventory then proceed to do whatever tasks are necessary to reach the desired state, in our case that the Apache web server is installed and running.

The yellow “changed” text in the output tells you that before running the playbook the state didn’t match what you defined, and that Ansible “changed” the state of your host. This is good; it means it worked. If you run the same playbook again you should see slightly different output.

See how the tasks that previously said “changed” in yellow now say “ok” in green. This is Ansible telling you that the host is already in the desired state and nothing needed to change.
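You can also preview what a playbook would change without actually changing anything, using Ansible’s check mode:

$ ansible-playbook -i hosts web-server.yml --check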

To make httpd accessible remotely, we need to tell the firewall on the remote host to allow traffic to port 80. Add an extra task below the previous two.

---
- hosts: www
  become: yes

  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present

    - name: Start Apache
      service:
        name: httpd
        state: started
        enabled: yes

    - name: Allow traffic to port 80
      firewalld:
        service: http
        permanent: yes
        immediate: yes
        state: enabled

Re-run the playbook and see what changed.

We should now be able to open Firefox and navigate to the server’s IP address to see the default Apache test page.
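You can also check from the control node’s terminal with curl, substituting your own server’s IP:

$ curl http://10.0.2.25/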