In this tutorial, we'll learn how to install and configure an IPFS node on AlmaLinux 10.
What is IPFS?
InterPlanetary File System (IPFS) gives us a distributed way to store and access data using content addressing instead of locations. Running an IPFS node on a VPS lets us host content reliably, pin files, participate in the IPFS network, and build decentralized applications with full control. The steps below cover installation, configuration, system service setup, firewall rules, and basic IPFS workflows on AlmaLinux 10.
Prerequisites
Before we begin, ensure we have the following:
- An AlmaLinux 10 dedicated server or KVM VPS.
- Basic Linux command-line knowledge.
- A domain name with an A record pointing to the server IP (only needed if you later put the gateway behind a reverse proxy).
How to Install and Configure IPFS Node on AlmaLinux 10
Step 1: Prepare the Host System
Start by updating the server so the base system and dependencies are current before installing Docker.
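A quick refresh of the installed packages is enough here (this may take a minute on a fresh server):
sudo dnf -y update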
Next, add Docker's official repository (the GPG key is imported automatically on first install):
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install the Docker packages.
sudo dnf install docker-ce
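Installing the package does not start the daemon, so enable and start it before launching any containers, and optionally confirm the version:
sudo systemctl enable --now docker
docker --version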
Step 2: Create Host Directories for Persistence
We need directories on the host that the container will map so data isn’t lost when the container restarts.
sudo mkdir -p /var/lib/ipfs/data
sudo mkdir -p /var/lib/ipfs/staging
sudo chown -R $USER:$USER /var/lib/ipfs
Here:
- /var/lib/ipfs/data → holds the IPFS repo (peer identity, blocks, datastore).
- /var/lib/ipfs/staging → staging area where we place host files before adding them to IPFS.
Step 3: Run the IPFS Docker Container
Following the official docs, we pull the ipfs/kubo image, mount the two volumes, and publish the required ports:
docker run -d \
--name ipfs_node \
-v /var/lib/ipfs/staging:/export \
-v /var/lib/ipfs/data:/data/ipfs \
-p 4001:4001 \
-p 4001:4001/udp \
-p 127.0.0.1:5001:5001 \
-p 0.0.0.0:8080:8080 \
ipfs/kubo:latest
Note: Replace 0.0.0.0 with 127.0.0.1 on the gateway port if you do not want public access.
Notes:
- Port 4001 is for P2P swarm (TCP + UDP).
- Port 5001 is the RPC API (we bind to localhost only: 127.0.0.1:5001, so it’s not publicly exposed).
- Port 8080 is the HTTP Gateway (bound to all interfaces here, so it becomes publicly reachable once the firewall allows it).
- If you prefer to keep the gateway private, bind it to 127.0.0.1:8080 instead and reach it through a reverse proxy or SSH tunnel (see Step 7).
Step 4: Use the Server Profile (Production Optimisation)
If this is for a production-style deployment, we should use the server profile when initializing the repo. According to the docs:
docker run -d \
--name ipfs_node \
-e IPFS_PROFILE=server \
-v /var/lib/ipfs/staging:/export \
-v /var/lib/ipfs/data:/data/ipfs \
-p 4001:4001 \
-p 4001:4001/udp \
-p 127.0.0.1:5001:5001 \
-p 0.0.0.0:8080:8080 \
ipfs/kubo:latest
Note: Replace 0.0.0.0 with 127.0.0.1 on the gateway port if you do not want public access.
The server profile tunes the configuration for hosting on a public server; in particular it disables local network discovery, so the node does not probe private address ranges (behaviour that data-centre providers often flag).
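Note that IPFS_PROFILE only takes effect when the repository is first initialised. If the repo was already created in Step 3, one option is to apply the profile to the existing configuration and restart, rather than recreating the data directory:
docker exec ipfs_node ipfs config profile apply server
docker restart ipfs_node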
Step 5: Verify the Container is Running & Logs
Check if the container is up:
docker ps --filter name=ipfs_node
Follow the logs to see when the daemon is ready:
docker logs -f ipfs_node
You should see output like:
- RPC API server listening on /ip4/0.0.0.0/tcp/5001
- Gateway server listening on /ip4/0.0.0.0/tcp/8080
- Daemon is ready
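Once the daemon reports ready, it is worth confirming the node is actually reaching other peers; a non-empty list here means the swarm port is working:
docker exec ipfs_node ipfs swarm peers | head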
Step 6: Add Files and Pin Content
We can add files by dropping them into /var/lib/ipfs/staging on the host, then running ipfs add inside the container.
On the host:
echo "my file" > myfile.txt
cp myfile.txt /var/lib/ipfs/staging/
In the container:
docker exec ipfs_node ipfs add -r /export/myfile.txt
Output:
added QmRDfwqbJocLAEdxeVjLbUeXPKpNcj3n9GrPUvukbvfaF5 myfile.txt
Pin the file (ipfs add already pins content it adds, so an explicit pin is mainly useful for CIDs obtained elsewhere):
docker exec ipfs_node ipfs pin add QmRDfwqbJocLAEdxeVjLbUeXPKpNcj3n9GrPUvukbvfaF5
To list pinned content:
docker exec ipfs_node ipfs pin ls --type=recursive
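To confirm the content is retrievable from the node itself, read it back by CID (using the hash returned by ipfs add above):
docker exec ipfs_node ipfs cat QmRDfwqbJocLAEdxeVjLbUeXPKpNcj3n9GrPUvukbvfaF5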
Step 7: Accessing Content via Gateway
Since the gateway is bound publicly in our run command, open port 8080 in the firewall:
sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload
You can then fetch content directly over HTTP from any machine:
http://<SERVER_IP>:8080/ipfs/QmRDfwqbJocLAEdxeVjLbUeXPKpNcj3n9GrPUvukbvfaF5
You will see:
my file
If you bound the gateway to 127.0.0.1 instead, use an SSH tunnel or a reverse proxy (e.g., Nginx) to reach it securely.
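As a quick sketch of the SSH tunnel approach (user and SERVER_IP are placeholders for your own login and server address), forward local port 8080 to the gateway and then browse http://localhost:8080/ipfs/<CID> from your workstation:
ssh -L 8080:127.0.0.1:8080 user@SERVER_IP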
Step 8: Firewall and Security Considerations
Even though we’re using Docker, the host still needs firewall rules (unless the cloud provider blocks traffic by default). AlmaLinux uses firewalld rather than UFW, so we stay consistent with Step 7:
sudo firewall-cmd --permanent --add-port=4001/tcp
sudo firewall-cmd --permanent --add-port=4001/udp
sudo firewall-cmd --reload
firewalld rejects traffic on ports that have not been opened, so nothing further is needed for 5001 as long as it stays bound to 127.0.0.1. Also keep in mind that Docker manages its own iptables rules for published ports, so binding sensitive ports to 127.0.0.1 in the docker run command is a more reliable safeguard than host firewall rules alone.
Important security note:
Never expose the API port (5001) publicly unless you fully understand the risks. The official docs issue the same warning.
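A simple way to double-check what is actually exposed is to list the TCP listeners on the host and verify that 5001 only appears bound to 127.0.0.1:
sudo ss -tlnp | grep -E ':(4001|5001|8080)'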
Step 9: Resource Limits & Container Tuning
For containerised production use, align the container's CPU and memory limits with the Go runtime settings so Kubo does not over-commit resources. From the docs:
docker run -d \
--name ipfs_node \
-e GOMAXPROCS=4 \
-e GOMEMLIMIT=7500MiB \
--cpus="4.0" \
--memory="8g" \
-v /var/lib/ipfs/staging:/export \
-v /var/lib/ipfs/data:/data/ipfs \
-p 4001:4001 \
-p 4001:4001/udp \
-p 127.0.0.1:5001:5001 \
-p 127.0.0.1:8080:8080 \
ipfs/kubo:latest
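Note: since a container named ipfs_node already exists from the earlier steps, remove it before re-running with these flags; the repo in /var/lib/ipfs/data lives on the host, so no data is lost:
docker rm -f ipfs_node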
Step 10: Stop, Restart, and Data Persistence
Stopping the container:
docker stop ipfs_node
Starting again:
docker start ipfs_node
Because /var/lib/ipfs/data is mounted, the state (peer identity, pinned blocks, repo) survives container restarts. Initialization only happens once.
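To confirm this, note the peer ID before stopping the container and check it again after starting; it should be identical:
docker exec ipfs_node ipfs id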
Final Thoughts
Running an IPFS node inside Docker on AlmaLinux 10 gives us a controlled, reproducible environment, with persistence cleanly separated from the container lifecycle. We focused on mounting host volumes, exposing the right ports, securing the API, using the server profile, and tuning resources. From here, we can add reverse proxies, integrate with web services, cluster multiple nodes, or pin large datasets.
If we want to scale up later, we can move to a Docker Compose setup, use orchestration (Kubernetes), or run a cluster of nodes. But this setup gives us a solid, clean baseline.

