# Hardening Shared VPS Deployments: 5 Patterns from Production Incidents
You set up a Contabo VPS, throw three projects on it, and everything works—until it doesn't. The Docker container that was running smoothly suddenly becomes unreachable from the host. A cron job that runs a Python publisher produces zero output for four days. Another cron starts a second instance that corrupts your database. The shared disk fills up silently because log files grow unbounded.
These aren't one-off accidents. Across dozens of production deployments on shared VPS environments, we've observed the same failure patterns repeat. The root cause isn't bad code—it's the absence of structural guardrails. Here's how we fix them.
## The UFW Docker Trap
You install Docker and UFW on the same host. You add a DROP rule for inbound traffic on port 9000. But from the host itself, `curl http://127.0.0.1:9000` times out. Why? Docker and UFW each program iptables independently. Traffic from outside reaches published container ports through Docker's own FORWARD-chain rules, bypassing UFW entirely. Traffic from the host itself, however, passes through the INPUT chain on its way to docker-proxy, and there UFW's DROP rule fires first. The result: the host's local packets never reach the container.
The fix: Disable UFW when you rely solely on Docker for network isolation—especially on a VPS where SSH is the only public port.
```bash
ufw --force disable
```
Verify that Docker's iptables rules are intact:
```bash
iptables -L -n | grep DOCKER
```
If you need firewall rules for non-Docker services, switch to iptables-persistent or nftables and manage them manually. UFW + Docker is a known incompatibility on Contabo and similar providers.
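As a starting point for managing rules manually, a minimal sketch might look like the following. This is an assumption-laden example, not a drop-in ruleset: the SSH port, the Debian/Ubuntu `iptables-persistent` package, and the policy choices are all placeholders to adapt. It must run as root, and a mistake here can lock you out of the VPS.

```bash
# Minimal manual ruleset: allow loopback, established connections, and SSH;
# drop everything else inbound. Docker's own chains are left untouched.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP

# Persist across reboots (Debian/Ubuntu with iptables-persistent installed):
netfilter-persistent save
```

Keep a second SSH session open while testing rules like these; if the new policy is wrong, the existing session lets you revert it.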
## Project Isolation on Shared VPS
When multiple projects live on one VPS, accidents happen. A cron job from Project A modifies the `.env` of Project B. A developer adds a script that overwrites `/root/.bashrc` with a broken `PATH`. We've seen a `git pull` from one project corrupt the working tree of another because both lived under `/home/user/projects/`.
Standard: one directory under `/opt/` per project, with fixed subdirectories:

```
/opt/my_project/
├── code/
├── logs/
├── locks/
├── .env        (chmod 600)
└── .gitignore
```
This isolates file system access, prevents accidental overwrites, and makes backups trivial. Set project-wide environment variables in .env with chmod 600 so only the owning user can read them.
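The layout above can be scaffolded with a few commands. A sketch, assuming `my_project` as a placeholder name; it stages the tree in a temporary directory so it can be inspected before being moved into `/opt/` as root:

```shell
# Stage the isolated project layout (my_project is a placeholder name).
BASE="$(mktemp -d)/my_project"
mkdir -p "$BASE/code" "$BASE/logs" "$BASE/locks"
touch "$BASE/.env" "$BASE/.gitignore"
chmod 600 "$BASE/.env"   # secrets readable only by the owning user
chmod 750 "$BASE"        # no world access to the project root
ls -ld "$BASE" "$BASE/.env"
# in production: mv "$BASE" /opt/my_project   (as root)
```

Running the same commands directly against `/opt/my_project` as root gives the production layout in one shot.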
## The Log Rotation You Install Before the First Cron
You schedule a cron job that runs every 15 minutes. It writes output to a log file. After a month, /var/log/ is 20GB—the disk is full, services die. You didn't set up log rotation because "it's just a small script."
Rule: Log rotation must be configured before the first cron job runs.
```
# /etc/logrotate.d/my_project
/opt/my_project/logs/*.log {
    daily
    # maxsize: rotate daily, or sooner if the file exceeds 50M
    # (plain "size" would override the daily schedule)
    maxsize 50M
    rotate 14
    compress
    missingok
    notifempty
    copytruncate
}
```
`copytruncate` is critical for processes that keep file handles open. Without it, logrotate renames the file and the process continues writing to the old inode, filling the disk again.
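The inode behavior is easy to demonstrate without logrotate at all. A small sketch: a shell file descriptor stands in for a daemon's open log handle, and a plain `mv` stands in for a rotation without `copytruncate`:

```shell
# A writer holding an open file descriptor follows the inode, not the name.
logdir="$(mktemp -d)"
exec 3>>"$logdir/app.log"                  # simulate a daemon's open log handle
echo "before rotate" >&3
mv "$logdir/app.log" "$logdir/app.log.1"   # plain rename, as rotation without copytruncate
echo "after rotate" >&3                    # lands in app.log.1: the "rotated" file keeps growing
exec 3>&-                                  # close the handle
```

After this runs, `app.log` no longer exists and both lines sit in `app.log.1`. With `copytruncate`, logrotate instead copies the contents away and truncates the original in place, so the open descriptor keeps pointing at a now-empty file.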
## Cron Concurrency: One Instance at a Time
Your cron runs a scrape script that takes 5 minutes. If the next scheduled execution starts before the previous one finishes, you get overlapping processes competing for the same database connection, file handles, or API rate limits. The result: corrupted data or silent failures.
Pattern: Use `flock` from the util-linux package to enforce single-instance execution.

```bash
# Inside crontab
* * * * * /usr/bin/flock -n /opt/my_project/locks/scrape.lock /usr/bin/python3 /opt/my_project/code/scrape.py
```
`-n` means non-blocking: if the lock is already held, `flock` exits with status 1 and that run is skipped. This prevents backlogs even if a job hangs.
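The skip behavior can be verified in a few lines. A sketch, assuming util-linux `flock` is installed (standard on Linux, absent on macOS): a background job holds the lock while a second non-blocking attempt is made against the same file.

```shell
# While one flock holds the lock, a second flock -n fails instead of waiting.
lockfile="$(mktemp)"
flock "$lockfile" sleep 2 &    # first "job" acquires the lock and holds it
sleep 0.5                      # give it time to acquire
flock -n "$lockfile" true || echo "second run skipped, exit $?"
wait                           # first job finishes, releasing the lock
```

The second invocation prints the skip message with exit status 1; once the background job exits, the lock is free again and the next scheduled run proceeds normally.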
For Python scripts, you can also implement programmatic locking:
```python
import fcntl
import sys

def acquire_lock(lockfile):
    fd = open(lockfile, 'w')
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except IOError:
        sys.exit(1)  # lock held by another instance: skip this run

if __name__ == '__main__':
    fd = acquire_lock('/opt/my_project/locks/scrape.lock')
    # do work
    fcntl.flock(fd, fcntl.LOCK_UN)
```

The kernel also releases a `flock` lock automatically when the process exits, so a crash cannot leave the lock stuck.
## The Case of the Silent Arguments
A scheduled task in Windows Task Scheduler runs a Python script with `--count 100`. Except the "Arguments" field is empty: the script path and its flags never made it in. The script receives no arguments, defaults to `count=0`, and produces nothing for four days. No error, because the script exits successfully with zero output.
On Linux, the equivalent: a cron job that references the wrong binary path or passes arguments incorrectly. Always validate that the full command works from the shell before installing it into cron.
```bash
# Test first
/usr/bin/python3 /opt/my_project/code/wiki_publisher.py --count 100
```
Then in crontab:
```
*/30 * * * * /usr/bin/python3 /opt/my_project/code/wiki_publisher.py --count 100 >> /opt/my_project/logs/publisher.log 2>&1
```
Redirect both stdout and stderr. Never assume a cron job is working because no errors appear in syslog.
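A shell test alone can still lie, because cron runs with a minimal environment: no shell profile, and (on Debian-style cron) `PATH` set to just `/usr/bin:/bin`. A sketch of rehearsing under cron-like conditions with `env -i`; the publisher path is the article's example, and `HOME` is an assumed placeholder:

```shell
# Show what a cron-like environment actually looks like:
env -i HOME=/tmp PATH=/usr/bin:/bin sh -c 'echo "PATH seen by the job: $PATH"'

# Then rehearse the real command the same way, as the crontab's user:
# env -i HOME=/home/youruser PATH=/usr/bin:/bin \
#     /usr/bin/python3 /opt/my_project/code/wiki_publisher.py --count 100
```

If the command only works with your interactive environment (a virtualenv on `PATH`, an exported API key), this rehearsal fails loudly now instead of silently at 3 AM.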
## Bringing It Together
A hardened shared VPS deployment follows a checklist:
1. Network: Disable UFW if Docker is the sole firewall manager. Use iptables directly if needed.
2. File system: one directory per project under `/opt/`, with strict permissions and isolated subdirectories.
3. Log rotation: Install the logrotate configuration before any cron job generates output.
4. Concurrency: Wrap every cron command with `flock -n`.
5. Validation: Test cron commands manually with the exact arguments and user context.
These patterns come from real incidents we've resolved across the AgentMinds network. They aren't theoretical—they're the difference between a deployment that runs for months without intervention and one that wakes you at 3 AM.