Managing disk space across a fleet of remote servers is a common challenge for DevOps engineers, system administrators, and IT teams. Manual inspection doesn't scale, and a missed low-disk-space warning can result in critical failures. In this tech concept, we walk through setting up an automated disk usage audit system across remote Linux servers using ssh, df, Bash/Zsh loops, and shell scripting. We'll generate disk usage summaries in CSV/JSON format, configure alerts via email or Slack, and schedule it all with cron.
From writing millions of lines of code to driving tech that scaled businesses—I’ve lived the evolution of technology for over 2 decades. Now, I share this journey so every new tech enthusiast sees what’s possible and feels empowered to create the next big thing.
Why Audit Remote Server Disk Usage?
When managing distributed infrastructure, automated disk usage audits help to:
- Prevent sudden outages due to full disks
- Identify abnormal storage growth early
- Monitor log-heavy apps, backup servers, or shared storage
- Ensure compliance with storage policies
Prerequisites
To follow this guide, you should have:
- SSH access to all target servers (preferably using SSH keys)
- df, bash, ssh, jq, and mail installed on your machine
- Optionally, access to a Slack webhook, a Telegram bot, or an SMTP server for notifications
Step 1: Create a Server List
Start by defining your server list in a text file (servers.txt
):
server1.nextstruggle.com
server2.nextstruggle.com
192.168.1.50
Make sure SSH key-based login is set up to avoid manual password entry during automation.
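As the list grows, it can help to allow comments and blank lines in servers.txt. That convention is an assumption layered on top of the plain one-host-per-line format above; a small filter sketch:

```shell
# Sample servers.txt with a comment and a blank line (comment support is
# an assumption, not part of the original one-host-per-line format).
printf '# production hosts\nserver1.nextstruggle.com\n\n192.168.1.50\n' > servers.txt
# Strip comment and blank lines before feeding the list to the audit loop.
grep -Ev '^[[:space:]]*(#|$)' servers.txt > servers.clean
cat servers.clean
```

Point the audit loop at the filtered file (or pipe the grep straight into it) instead of reading servers.txt directly.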
Step 2: Bash Script to Fetch Disk Usage via SSH
Here’s a sample script that loops over each server, runs df -h / to read root-filesystem usage, and appends the results to a CSV report, logging an alert whenever usage crosses the threshold.
#!/bin/bash
THRESHOLD=80  # alert when usage exceeds this percent
REPORT="disk_usage_report.csv"
DATE=$(date '+%Y-%m-%d %H:%M:%S')
echo "Server,Usage,DateTime" > "$REPORT"  # start a fresh report with a header row
> disk_alert.log  # clear stale alerts from earlier runs
while read -r SERVER; do
  echo "Checking $SERVER..."
  USAGE=$(ssh -o ConnectTimeout=5 "$SERVER" "df -h / | awk 'NR==2 {print \$5}'" 2>/dev/null | tr -d '%')
  if [[ -z "$USAGE" ]]; then
    echo "$SERVER,N/A,$DATE" >> "$REPORT"
    continue
  fi
  echo "$SERVER,${USAGE}%,${DATE}" >> "$REPORT"
  # Alert if threshold exceeded
  if (( USAGE > THRESHOLD )); then
    echo "ALERT: $SERVER is using $USAGE% disk!" >> disk_alert.log
  fi
done < servers.txt
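Before pointing the script at real servers, you can sanity-check the df/awk/tr extraction locally against a fabricated df line (the sizes below are made up):

```shell
# Simulated `df -h /` output; the real script receives this over ssh.
SAMPLE='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   38G   10G  80% /'
# Same extraction as the audit script: take row 2, column 5, strip the %.
USAGE=$(printf '%s\n' "$SAMPLE" | awk 'NR==2 {print $5}' | tr -d '%')
echo "$USAGE"  # prints 80
```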
Step 3: Generate JSON Output Using jq
To convert the CSV report into JSON for logging or integration with other tools:
awk -F, 'NR>1 {print "{\"server\": \""$1"\", \"usage\": \""$2"\", \"datetime\": \""$3"\"}"}' disk_usage_report.csv | jq -s '.' > disk_usage_report.json
Or, add JSON output directly in your script:
echo "[" > disk_usage_report.json
while read -r SERVER; do
  USAGE=$(ssh "$SERVER" "df -h / | awk 'NR==2 {print \$5}'" | tr -d '%')
  echo "{\"server\":\"$SERVER\",\"usage\":\"${USAGE}%\",\"datetime\":\"$DATE\"}," >> disk_usage_report.json
done < servers.txt
sed -i '$ s/,$//' disk_usage_report.json  # Remove trailing comma
echo "]" >> disk_usage_report.json
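The sed cleanup works, but you can sidestep the trailing comma entirely by printing the separator before every entry except the first. A pure-shell sketch that runs anywhere because the ssh call is stubbed with a fixed value (the sample hostnames and the 80% figure are assumptions):

```shell
#!/bin/bash
# Sample input so the sketch is self-contained; hostnames are examples.
printf 'server1.nextstruggle.com\n192.168.1.50\n' > servers.txt
DATE=$(date '+%Y-%m-%d %H:%M:%S')
first=1
printf '[' > disk_usage_report.json
while read -r SERVER; do
  USAGE=80  # stub value; the real script fetches this via ssh + df
  # Print a comma before every entry except the first -- no cleanup pass needed.
  if [ "$first" -eq 1 ]; then first=0; else printf ',' >> disk_usage_report.json; fi
  printf '\n  {"server":"%s","usage":"%s%%","datetime":"%s"}' \
    "$SERVER" "$USAGE" "$DATE" >> disk_usage_report.json
done < servers.txt
printf '\n]\n' >> disk_usage_report.json
```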
Step 4: Send Notifications via Email
Use the mail command to send an alert if usage crosses the threshold:
if grep -q "ALERT" disk_alert.log; then
  mail -s "Disk Usage Alert" [email protected] < disk_alert.log
fi
Install mailutils if mail is unavailable:
sudo apt install mailutils # Debian/Ubuntu
Step 5: Integrate Slack Alerts (Optional)
To send alerts to a Slack channel, create an Incoming Webhook URL and use curl:
WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"
MESSAGE=$(cat disk_alert.log)
curl -X POST -H 'Content-type: application/json' \
  --data "{\"text\":\"Disk Usage Alert:\n$MESSAGE\"}" \
  "$WEBHOOK_URL"
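The string interpolation above breaks if the alert log ever contains double quotes or other characters that are special in JSON. A sketch that lets jq (already listed in the prerequisites) build a correctly escaped payload; the sample log line is fabricated:

```shell
# Build the Slack payload with jq so quotes and newlines in the alert log
# are escaped correctly; naive string interpolation breaks on them.
printf 'ALERT: server2 is using 89%% disk!\n' > disk_alert.log  # sample entry
PAYLOAD=$(jq -n --arg log "$(cat disk_alert.log)" \
  '{text: ("Disk Usage Alert:\n" + $log)}')
echo "$PAYLOAD"
```

Then POST it with the same curl command as above, passing --data "$PAYLOAD".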
Add this block to the main script if you want alerts sent immediately after detection. A similar approach works for Telegram via its Bot API.
Step 6: Schedule with Cron for Automation
To automate this script, schedule it via cron:
- Open crontab:
crontab -e
- Add an entry to run every day at 6 AM:
0 6 * * * /path/to/disk_audit.sh >> /var/log/disk_audit.log 2>&1
Make sure your script has executable permissions:
chmod +x /path/to/disk_audit.sh
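If a run can take longer than the gap between scheduled runs (for example, when several hosts hit the SSH timeout), wrapping the cron job with flock prevents overlapping executions from clobbering the report. A crontab sketch (the lock-file path is an example):

```
0 6 * * * flock -n /tmp/disk_audit.lock /path/to/disk_audit.sh >> /var/log/disk_audit.log 2>&1
```

With -n, a run that finds the lock already held simply skips that cycle instead of queuing up behind it.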
Example Output
CSV File:
Server,Usage,DateTime
server1.nextstruggle.com,75%,2025-06-18 06:00:01
server2.nextstruggle.com,89%,2025-06-18 06:00:01
JSON File:
[
{
"server": "server1.nextstruggle.com",
"usage": "75%",
"datetime": "2025-06-18 06:00:01"
},
{
"server": "server2.nextstruggle.com",
"usage": "89%",
"datetime": "2025-06-18 06:00:01"
}
]
Slack Message:
Disk Usage Alert:
ALERT: server2.nextstruggle.com is using 89% disk!
My Tech Advice: Proactive disk space monitoring across remote servers is essential to avoid system downtime and ensure smooth operations. By combining ssh, df, shell scripting, and notification services like email or Slack, you can build a robust, automated disk usage auditing system. When scheduled with cron, this lightweight solution can serve as your first line of defense against storage-related incidents in production environments. Ready to build your own server solution? Try the above tech concept, or contact me for tech advice!
#AskDushyant
Note: The names and information mentioned are based on my personal experience; however, they do not represent any formal statement.
#TechConcept #TechAdvice #Server #Linux #ShellScripting #Disk #Slack #Telegram