Automate Initial Server Setup on Ubuntu 24.04 with Ansible Playbooks

By Anurag Singh

Updated on Oct 05, 2025

Learn step-by-step how to automate initial server setup and hardening on Ubuntu 24.04 using Ansible Playbooks, from inventory creation to a secure, repeatable baseline.

Modern infrastructure doesn’t tolerate inconsistency. The moment one developer’s environment differs from another, things start breaking. That’s where configuration management steps in—and Ansible leads the charge. In this tutorial, we’ll walk through automating our initial server setup on Ubuntu 24.04 using Ansible Playbooks, explaining not just how to do it, but why each step matters.

Understanding Configuration Management and Why Ansible Matters

Before diving into commands, we need to understand what configuration management actually solves. It’s about defining our infrastructure as code—ensuring every server, whether in production or testing, is set up the same way. No manual SSH logins. No undocumented tweaks.

Ansible achieves this through:

Agentless operation: No extra software needed on managed servers; it uses SSH.
Idempotency: Running the same playbook multiple times won’t break or duplicate configurations.
Declarative syntax: We describe what we want, not how to do it.

The real beauty lies in repeatability. Once our Ansible setup works for one server, it works for a thousand.

Prerequisites

Before we begin, ensure we have the following:

  • An Ubuntu 24.04 machine to act as the control node.
  • One or more Ubuntu 24.04 servers to manage.
  • A user with sudo privileges on each machine.
  • SSH access from the control node to the managed servers.

Step 1: Setting Up the Control Node on Ubuntu 24.04

Our control node is where we’ll run Ansible commands from. This can be our local machine or a dedicated management server.

First, update the system packages:

sudo apt update && sudo apt upgrade -y

Install Ansible and dependencies

Ubuntu 24.04 includes a stable version of Ansible via the default repository:

sudo apt install ansible -y

Verify installation:

ansible --version

We should see something like:

ansible [core 2.16.x]
  python version = 3.12.x

If you prefer the latest version, use:

sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible -y

Step 2: Preparing Managed Servers for Automation

Ansible communicates with target (managed) servers over SSH. So, we’ll set up secure, passwordless access.

On the control node:

ssh-keygen -t rsa -b 4096

Press Enter through prompts to accept defaults.

Copy the SSH key to each managed server

ssh-copy-id user@target_server_ip

Test SSH access:

ssh user@target_server_ip

If no password is required, you’re good to go.

Step 3: Building Our Ansible Inventory File

The inventory file tells Ansible which servers to manage and how to group them logically.

By default, it lives in /etc/ansible/hosts, but for modular projects, we’ll create a dedicated one:

mkdir ~/ansible-automation && cd ~/ansible-automation
nano inventory.ini

Add the following:

[webservers]
192.168.56.10 ansible_user=ubuntu

[dbservers]
192.168.56.11 ansible_user=ubuntu

Replace ubuntu with your username on the managed servers.

Each group (like [webservers]) helps us target specific servers when running playbooks.
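Groups can also be nested and share variables. As a sketch (the IPs are the same placeholders as above, and the [linux] parent group name is our own choice), a parent group lets us target web and database servers in one command:

```ini
[webservers]
192.168.56.10 ansible_user=ubuntu

[dbservers]
192.168.56.11 ansible_user=ubuntu

; A parent group containing both child groups,
; so "ansible linux -i inventory.ini -m ping" targets every host at once.
[linux:children]
webservers
dbservers

; Variables shared by every host in the parent group.
[linux:vars]
ansible_python_interpreter=/usr/bin/python3
```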

We can test connectivity with:

ansible all -i inventory.ini -m ping

If all’s well, each host should respond with:

pong

That’s Ansible verifying connectivity using its ping module.

Step 4: Writing Our First Ansible Playbook

The playbook is the heart of Ansible automation—a YAML file describing the desired system state.

Create a new playbook:

nano initial_server_setup.yml

Add the following (Note: Replace /home/ubuntu/ with your user’s home directory path.):

---
- name: Initial Server Setup and Hardening on Ubuntu 24.04
  hosts: all
  become: yes
  vars:
    admin_user: devops
    ssh_port: 2222
    public_key_path: "/home/ubuntu/.ssh/id_rsa.pub"

  tasks:

    - name: Update and upgrade all packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: yes
        autoclean: yes

    - name: Create new admin user
      user:
        name: "{{ admin_user }}"
        state: present
        groups: sudo
        append: yes
        create_home: yes
        shell: /bin/bash

    - name: Add authorized SSH key for admin user
      authorized_key:
        user: "{{ admin_user }}"
        key: "{{ lookup('file', public_key_path) }}"

    - name: Disable root SSH login
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin no'

    - name: Change SSH port
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?Port'
        line: "Port {{ ssh_port }}"

    - name: Disable password authentication for SSH
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'

    - name: Restart SSH service
      service:
        name: ssh
        state: restarted

    - name: Install common packages
      apt:
        name:
          - curl
          - vim
          - ufw
          - fail2ban
          - git
          - unzip
          - htop
        state: present

    - name: Configure UFW (firewall)
      ufw:
        rule: allow
        port: "{{ ssh_port }}"
        proto: tcp

    - name: Allow HTTP and HTTPS traffic
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop:
        - 80
        - 443

    - name: Enable UFW firewall
      ufw:
        state: enabled
        logging: 'on'

    - name: Install and start Nginx
      apt:
        name: nginx
        state: present

    - name: Enable and start Nginx service
      service:
        name: nginx
        state: started
        enabled: yes

    - name: Configure fail2ban basic jail
      copy:
        dest: /etc/fail2ban/jail.local
        content: |
          [sshd]
          enabled = true
          port = {{ ssh_port }}
          filter = sshd
          logpath = /var/log/auth.log
          maxretry = 3

    - name: Restart fail2ban service
      service:
        name: fail2ban
        state: restarted
        enabled: yes

    - name: Remove unnecessary packages
      apt:
        name: "{{ item }}"
        state: absent
      loop:
        - telnet
        - ftp
        - rsh-client
        - rlogin
        - talk
        - talkd

    - name: Set timezone to UTC
      timezone:
        name: UTC

    - name: Display completion message
      debug:
        msg: "Server hardening complete. SSH now runs on port {{ ssh_port }}."

Let’s break it down:

  • hosts: defines target systems (in this case, all servers in the inventory).
  • become: allows privilege escalation (like sudo).
  • tasks: list of actions to perform in order.
  • modules: (such as apt, user, and service) carry out each action.

Each task is idempotent, meaning if we rerun the playbook, Ansible won’t repeat tasks unnecessarily.
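One caution about the play above: it restarts SSH as soon as the port changes, before the UFW tasks have allowed port 2222, which can lock us out mid-run. A common refinement (a sketch, not part of the playbook above) is a handler: tasks notify it, and Ansible runs it once at the end of the play, after the firewall rules are in place:

```yaml
---
- name: Initial server setup with a deferred SSH restart
  hosts: all
  become: yes
  vars:
    ssh_port: 2222

  tasks:
    - name: Change SSH port
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?Port'
        line: "Port {{ ssh_port }}"
      notify: Restart SSH   # queues the handler only if this task changed something

    - name: Allow the new SSH port through UFW
      ufw:
        rule: allow
        port: "{{ ssh_port }}"
        proto: tcp

  handlers:
    - name: Restart SSH     # runs once, at the end of the play
      service:
        name: ssh
        state: restarted
```

Because handlers fire after all tasks have completed, the firewall already permits the new port by the time SSH restarts.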

Step 5: Running the Ansible Playbook

Execute the playbook with:

ansible-playbook -i inventory.ini initial_server_setup.yml

Ansible will connect via SSH, execute each task, and print a clean summary:

PLAY RECAP *********************************************************************
192.168.56.10              : ok=19   changed=15    unreachable=0    failed=0
192.168.56.11              : ok=19   changed=15    unreachable=0    failed=0
  • ok counts tasks that completed successfully (including ones that made changes).
  • changed counts tasks that actually modified the system.

Re-running the same playbook should ideally report:

changed=0

That’s idempotency in action—guaranteeing consistency every time.
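One caveat before re-running: the playbook moved SSH to port 2222, so a second run over the default port 22 will fail to connect. Update the inventory (same sample IPs as earlier) to point Ansible at the new port:

```ini
[webservers]
192.168.56.10 ansible_user=ubuntu ansible_port=2222

[dbservers]
192.168.56.11 ansible_user=ubuntu ansible_port=2222
```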

Step 6: Testing and Expanding Automation

Once our base setup works, we can modularize the process:

  • Add handlers for conditional restarts.
  • Separate tasks into roles (e.g., common, security, nginx).
  • Include environment variables via vars/ or .env files.

Example directory structure:

ansible-automation/
├── inventory.ini
├── playbooks/
│   └── initial_server_setup.yml
├── roles/
│   ├── common/
│   │   └── tasks/main.yml
│   └── security/
│       └── tasks/main.yml
└── vars/
    └── main.yml

This organization scales beautifully as environments grow.
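As a minimal sketch of that layout (the task shown is lifted from our playbook; the site.yml name is a common convention, not required), a role’s tasks/main.yml contains a bare task list, and a small top-level playbook wires roles to hosts:

```yaml
# roles/security/tasks/main.yml -- tasks only, no "hosts" or "tasks" keys
- name: Disable password authentication for SSH
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: 'PasswordAuthentication no'
```

```yaml
# playbooks/site.yml -- applies the roles to inventory hosts
- name: Apply baseline roles
  hosts: all
  become: yes
  roles:
    - common
    - security
```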

Step 7: Verifying System State After Automation

To validate our changes, confirm the new admin user exists (Ansible’s default command module doesn’t support shell pipes, so we call grep directly):

ansible all -i inventory.ini -a "grep devops /etc/passwd"

To confirm root login is disabled:

ansible all -i inventory.ini -a "grep PermitRootLogin /etc/ssh/sshd_config"

This verification step ensures our configurations match intent—no surprises in production.
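Checks like these can also be codified in a small verification playbook (a sketch, reusing the ssh_port variable from earlier), so configuration drift is caught with the same tooling:

```yaml
---
- name: Verify hardening state
  hosts: all
  become: yes
  vars:
    ssh_port: 2222

  tasks:
    - name: Read the relevant sshd_config lines
      command: grep -E '^(PermitRootLogin|Port)' /etc/ssh/sshd_config
      register: sshd_settings
      changed_when: false   # a read-only check should never report "changed"

    - name: Fail if root login or the SSH port drifted
      assert:
        that:
          - "'PermitRootLogin no' in sshd_settings.stdout"
          - "('Port ' ~ ssh_port) in sshd_settings.stdout"
        fail_msg: "sshd_config does not match the hardened baseline"
```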

Step 8: Why This Matters for Real-World Deployments

Manual configuration is fragile. Automation ensures:

  • Reproducibility: Every environment matches.
  • Auditability: Every change is codified.
  • Scalability: Adding 100 servers is one command, not 100 SSH logins.
  • Recovery: Server crashed? Rebuild it in minutes with the same setup.

By adopting Ansible early, we avoid the “snowflake server” problem—unique, undocumented setups that can’t be replicated.

Conclusion: From Manual Chaos to Automated Confidence

We’ve gone from a blank Ubuntu 24.04 system to a fully automated, secure baseline using Ansible Playbooks. Through understanding its idempotent nature, inventory management, and YAML-based structure, we’ve seen how configuration management isn’t just about convenience—it’s about consistency and control.

In a world where servers multiply faster than we can manually configure them, Ansible becomes our quiet, reliable assistant—turning setup chaos into predictable, repeatable automation.