How to detect unauthorized Linux configuration changes
~12 min read · SSH hardening, drift detection, Linux security
Unauthorized or untracked configuration changes are one of the most common causes of security regressions and availability incidents on Linux infrastructure. This guide covers the problem, manual detection techniques, and how to build a scalable monitoring workflow.
1. What counts as an unauthorized config change?
An "unauthorized" change is any modification to system configuration that was not part of an approved change process. This includes:
- Direct edits to `/etc/ssh/sshd_config` (or its Include fragments) outside of a configuration management run.
- Changes to kernel parameters via `sysctl` that are not persisted to `/etc/sysctl.d/`.
- New network listeners appearing on ports that were not in the approved service list.
- Package upgrades that revert a hardened configuration file to the package default.
- New user accounts or privilege escalation paths (sudoers changes, new SSH authorized_keys entries).
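As a quick illustration of the last point, recently modified sudoers fragments or `authorized_keys` files can be surfaced with a simple `find`. This is a sketch, not a canonical audit: the helper name, the example paths, and the 7-day window are all assumptions to adjust for your distro and policy.

```shell
# Sketch: list files under a directory modified within the last N days.
# recent_auth_changes is an illustrative helper, not an established tool --
# pair it with a real baseline for anything serious.
recent_auth_changes() {
    dir="$1"; days="$2"
    find "$dir" -type f -mtime "-$days" 2>/dev/null
}

# Typical spot checks on a live host:
recent_auth_changes /etc/sudoers.d 7
recent_auth_changes /root/.ssh 7
```

A file showing up here is not proof of anything unauthorized, only a prompt to check whether the change was documented.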
The word "unauthorized" matters more in regulated environments. In smaller teams, the concern is often simpler: a change happened that nobody documented, and now nobody knows if it is intentional or a mistake.
2. Why it's hard to detect manually
- No single canonical config source. Configuration is spread across dozens of files. `sshd_config` can include files from `/etc/ssh/sshd_config.d/`, and many tools only check the main file.
- Volume. A 50-host fleet means 50 separate files to compare. At 200 hosts it becomes impossible to review manually on any meaningful schedule.
- No baseline to compare against. "The config changed" is only meaningful if you know what it changed from. Without a recorded baseline, you can only describe the current state.
- Transient changes. A `sysctl` value changed at runtime (not persisted to disk) will not appear in a file-based audit.
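The transient case can be caught by comparing a parameter's runtime value with whatever is persisted on disk. A minimal sketch follows; the helper name is hypothetical, and the grep over `/etc/sysctl.conf` and `/etc/sysctl.d/` covers the common persistence locations but not every distro's layout.

```shell
# Sketch: compare a kernel parameter's runtime value with its persisted value.
# An empty "persisted" field means the runtime value is not pinned on disk.
runtime_vs_persisted() {
    key="$1"
    runtime=$(sysctl -n "$key" 2>/dev/null || true)
    persisted=$(grep -rhs "^[[:space:]]*$key[[:space:]]*=" \
        /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null \
        | tail -n 1 | cut -d= -f2 | tr -d '[:space:]')
    printf 'key=%s runtime=%s persisted=%s\n' "$key" "$runtime" "$persisted"
}

runtime_vs_persisted net.ipv4.ip_forward
```

If the two values disagree, someone changed the parameter at runtime without persisting it, which is exactly the kind of change a file-based audit misses.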
3. Manual detection techniques
These approaches work for small fleets or one-off investigations.
Comparing sshd_config against a known baseline
```shell
# On the host — dump effective SSH config (includes all Include fragments)
sshd -T 2>/dev/null | sort > /tmp/sshd_effective_now.txt

# Compare against a previously saved baseline
diff /var/lib/baseline/sshd_effective_baseline.txt /tmp/sshd_effective_now.txt
```
`sshd -T` is key here — it dumps the effective configuration after processing all Include directives, not just the main file. Most ad-hoc audits miss this.
Checking sysctl values
```shell
# Dump all current kernel parameters
sysctl -a 2>/dev/null | sort > /tmp/sysctl_now.txt

# Compare against baseline
diff /var/lib/baseline/sysctl_baseline.txt /tmp/sysctl_now.txt
```
Checking open listeners
```shell
# List all listening TCP/UDP services
ss -tlnpu 2>/dev/null

# Or: on older systems
netstat -tlnpu 2>/dev/null
```
Checking system logs for SSH config reloads
```shell
# Look for sshd reload events in the last 7 days
# (unit is ssh.service on Debian/Ubuntu; RHEL-family systems use sshd.service)
journalctl -u ssh.service --since "7 days ago" | grep -E "reload|reopen|restart|SIGHUP"
```
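The checks above all follow the same capture-sort-diff pattern, so for a small fleet they can be wrapped in one helper. A sketch, assuming baselines live under `/var/lib/baseline` (the directory and the function name are conventions invented here, not an established tool):

```shell
# Sketch: generic capture-and-diff helper. The first run records a baseline;
# later runs diff the current output against it. BASELINE_DIR is overridable.
BASELINE_DIR="${BASELINE_DIR:-/var/lib/baseline}"

check_drift() {
    name="$1"; shift
    current=$(mktemp)
    "$@" 2>/dev/null | sort > "$current"
    if [ ! -f "$BASELINE_DIR/$name" ]; then
        mkdir -p "$BASELINE_DIR"
        cp "$current" "$BASELINE_DIR/$name"
        echo "baseline recorded: $name"
    elif diff -u "$BASELINE_DIR/$name" "$current"; then
        echo "no drift: $name"
    fi
    rm -f "$current"
}

# Typical invocations (run as root on the host):
# check_drift sshd_effective sshd -T
# check_drift sysctl sysctl -a
# check_drift listeners ss -tln
```

Silence means no drift; any unified-diff output is a change to investigate. The weak point, as the next section discusses, is everything around this script: scheduling it, triaging its output, and keeping a record of what was done about each diff.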
4. Tools for config change detection
Several categories of tool are relevant here, with different trade-offs:
| Tool type | Examples | Trade-offs |
|---|---|---|
| FIM (File Integrity Monitoring) | AIDE, Tripwire, auditd | Detects file changes but produces high noise; requires careful tuning |
| Config management | Ansible, Puppet, Chef | Enforces desired state on managed hosts; drift between runs is invisible |
| Benchmark scanners | CIS-CAT, OpenSCAP, Lynis | Point-in-time pass/fail; no change tracking over time |
| CSPM / cloud posture | Wiz, Orca, Prisma | Cloud-focused; limited Linux OS config visibility without agents |
| Dedicated drift tools | Blackglass | Baseline-vs-current comparison, severity classification, evidence workflow |
5. Building a sustainable workflow
Manual techniques stop scaling past a handful of hosts. A sustainable workflow needs:
- A recorded baseline for each host. Not just "what is the desired state" in version control, but "what is the actual current state on this host at this point in time." These diverge more often than teams expect.
- Automated, scheduled collection. Changes happen at any time. Daily or hourly scans reduce the window between a change and detection.
- Severity filtering. Not every config difference is worth waking someone up for. A workflow that does not filter by risk will be ignored quickly.
- Remediation tracking. Detection without a clear handoff to fix-and-document is only half the loop. The workflow must include ownership, due dates, and a way to close the loop with evidence.
- Evidence export. The end goal is often not just "is this fixed?" but "can I prove to an auditor that we detected it, responded, and documented it?" — structured export is essential.
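Severity filtering in particular can start very simply. A crude sketch that tags the changed lines of a unified diff (the `HIGH_RISK` keyword list is an assumption; extend it with the directives your policy actually cares about):

```shell
# Sketch: tag each changed line of a unified diff as HIGH or INFO based on
# a naive keyword match. Real tooling should also weigh the value that
# changed, not just the directive name.
HIGH_RISK='permitrootlogin|passwordauthentication|permitemptypasswords'

classify_drift() {
    while IFS= read -r line; do
        case "$line" in
            ---*|+++*) ;;   # skip diff file headers
            [+-]*)
                if printf '%s\n' "$line" | grep -qiE "$HIGH_RISK"; then
                    echo "HIGH: $line"
                else
                    echo "INFO: $line"
                fi
                ;;
        esac
    done
}
```

Piping the diff from a scheduled scan through a filter like this is the difference between an alert channel people read and one they mute.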
6. How Blackglass approaches this problem
Blackglass is designed specifically around the drift detection and evidence workflow described above. Rather than trying to do everything, it focuses on:
- Agentless SSH collection — Blackglass connects to hosts over SSH to collect configuration metadata (effective SSH directives, sysctl values, open listeners, service states). No persistent agent to maintain.
- Baseline pinning — after a hardening pass or approved change, you capture the state as the new baseline. Future scans compare against that baseline, not an abstract ideal.
- Severity-classified drift events — changes surface with HIGH / MEDIUM / INFO severity based on the field and value affected. You see the before and after values inline.
- Remediation workflow — each event can be assigned, acknowledged, and closed with an operator note. The full lifecycle is recorded.
- Evidence bundles — export a dated, structured bundle (baseline, drift history, remediation records) for audit submissions.
Blackglass does not copy file contents, application secrets, or private keys into its storage — only the metadata needed to detect meaningful drift.