Use case

File integrity monitoring that doesn’t scream at you

Most FIM tools either watch everything (and bury you in alerts every time logrotate fires) or watch nothing useful (a fixed list from 2014 that hasn’t kept up with how Linux actually ships). Blackglass watches the files compliance frameworks actually care about, and the ones attackers actually touch.

What “file integrity” should mean in 2026

PCI-DSS 11.5, SOC 2 CC7.1, ISO 27001 A.12.4, and SOX ITGC 1.4 all require “file integrity monitoring” — but none of them define what that actually means in operational terms. The honest answer is: detect unauthorised changes to the files that, if modified, would either (a) change the security posture of the host, (b) enable persistence, or (c) tamper with evidence. Everything else is theatre.

Blackglass treats FIM as a subset of a broader configuration-integrity story. Same scan, same baseline, same drift events — file hashes are just one signal among several deterministic checks.
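The “one signal among several” idea is easy to make concrete. Below is a minimal Python sketch of a single deterministic check record that bundles a content hash with the metadata that matters; `check_file` is a hypothetical helper for illustration, not Blackglass’s actual code.

```python
import hashlib
import os
import stat

def check_file(path):
    """One deterministic check: content hash plus the metadata that matters.
    Hypothetical helper for illustration, not Blackglass's implementation."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "path": path,
        "sha256": digest,                        # the file-hash signal
        "uid": st.st_uid,                        # ownership signal
        "mode": oct(stat.S_IMODE(st.st_mode)),   # permission bits
        "suid": bool(st.st_mode & stat.S_ISUID), # privilege-escalation signal
        "mtime": int(st.st_mtime),
    }
```

Each field is a check in its own right: a hash match with a changed owner or a newly set SUID bit is still drift.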

What Blackglass actually monitors

  • SSH daemon & client config — /etc/ssh/sshd_config, /etc/ssh/ssh_config, and any Include-merged fragments.
  • Identity & sudo — /etc/passwd, /etc/shadow (hash only, not contents), /etc/group, /etc/sudoers, and every file in /etc/sudoers.d/.
  • Persistence — authorized_keys per user, systemd unit files, cron entries (system + per-user crontabs), and PAM stack files.
  • Boot & kernel — GRUB config, kernel command line, sysctl entries, loaded kernel modules.
  • Hosts file & resolver — /etc/hosts, /etc/resolv.conf, /etc/nsswitch.conf.
  • SUID/SGID binary set — full enumeration with per-binary hash, so a new SUID anywhere on the filesystem is a HIGH-severity event.
  • Web server & reverse proxy config — nginx, apache, caddy, haproxy main + included fragments.
  • Custom paths you add — point Blackglass at any additional file or directory on the host (per-baseline, per-host, or fleet-wide policy).

And, deliberately, what we don’t watch by default: log files, /var/lib/*, package caches, temp directories, runtime state. These churn constantly and are nobody’s real FIM target.
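The SUID/SGID enumeration mentioned above can be sketched in a few lines of Python. This is an illustrative walk, not the scanner Blackglass ships: a production version would also skip network mounts and prune per-policy exclusions like the ones just listed.

```python
import hashlib
import os
import stat

def enumerate_suid(root="/"):
    """Yield (path, sha256) for every SUID/SGID regular file under root.
    Sketch only; assumes local filesystems and best-effort readability."""
    for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
        # Skip pseudo-filesystems where enumeration is meaningless.
        if dirpath.startswith(("/proc", "/sys", "/dev", "/run")):
            dirnames[:] = []  # prune the walk in place
            continue
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
                if stat.S_ISREG(st.st_mode) and st.st_mode & (stat.S_ISUID | stat.S_ISGID):
                    with open(path, "rb") as f:
                        yield path, hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable or vanished mid-scan
```

Comparing this enumeration against the baseline set is what turns “a new SUID anywhere on the filesystem” into a HIGH-severity event.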

How it works

  1. Capture an approved baseline. First scan records the hash of every monitored file plus the metadata that matters (owner, mode, SUID, mtime). You confirm it’s the state you intend to defend.
  2. Run scheduled or push scans. Re-hash on the cadence you choose (hourly default for FIM-sensitive paths). The push agent surfaces changes within ~60 seconds for paths that matter.
  3. Severity from the field, not from ML. Drift on /etc/passwd is HIGH. Drift on a custom monitored path inherits whatever severity you pinned to it. Predictable.
  4. Acknowledge, assign, close. Each event has an owner, a due date, a note. The history exports to PDF + JSON for the auditor.
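The baseline-then-rescan loop in steps 1–3 amounts to comparing stored records against fresh ones, with severity pinned per path rather than inferred. A simplified sketch (the function names and the MEDIUM default for unpinned paths are assumptions for illustration):

```python
import hashlib
import os

def capture_baseline(paths):
    """Step 1: record the approved state, hash + mode + owner per file."""
    baseline = {}
    for p in paths:
        st = os.stat(p)
        with open(p, "rb") as f:
            h = hashlib.sha256(f.read()).hexdigest()
        baseline[p] = {"sha256": h, "mode": oct(st.st_mode & 0o7777), "uid": st.st_uid}
    return baseline

def detect_drift(baseline, severity_map=None):
    """Steps 2-3: re-scan and emit events; severity comes from the pinned
    map, never from inference (assumed default: MEDIUM)."""
    severity_map = severity_map or {}
    events = []
    for p, old in baseline.items():
        try:
            new = capture_baseline([p])[p]
        except FileNotFoundError:
            events.append({"path": p, "change": "deleted",
                           "severity": severity_map.get(p, "MEDIUM")})
            continue
        if new != old:
            events.append({"path": p, "change": "modified",
                           "severity": severity_map.get(p, "MEDIUM")})
    return events
```

Because the comparison is a deterministic record diff, the same change always produces the same event at the same severity.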

Sample FIM event

HIGH — /etc/sudoers.d/90-deploy
hash e3f1a8… (absent at baseline) · appeared 03:47 UTC · owner deploy · mode 0440

HIGH — /usr/local/bin/.svchost
new SUID binary · owner root · mode 4755 · hash 9a02bf…
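An event like the samples above serializes to JSON for export (alongside the PDF mentioned earlier). The field names below are illustrative assumptions, not Blackglass’s documented export schema:

```python
import json

# Illustrative export record; field names are assumptions, not the real schema.
event = {
    "severity": "HIGH",
    "path": "/etc/sudoers.d/90-deploy",
    "change": "appeared",
    "previous_hash": None,        # file was absent at baseline
    "current_hash": "e3f1a8…",    # truncated as in the sample above
    "observed_utc": "03:47",
    "owner": "deploy",
    "mode": "0440",
}
print(json.dumps(event, indent=2, ensure_ascii=False))
```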

Mapping to common compliance frameworks

  • PCI-DSS 11.5.2 — change-detection mechanism deployed; weekly comparison of critical files. Blackglass exceeds this with hourly default and per-event timestamps.
  • SOC 2 CC7.1 — system operations include monitoring of system components for changes. Drift events with operator approval workflow satisfy auditor expectations.
  • ISO 27001 A.12.4 / A.12.6 — logging and vulnerability management. Drift exports double as the “evidence of change-control adherence” auditors ask for.
  • SOX ITGC 1.4 — change management evidence. Per-host evidence bundle ties baseline approval timestamp to subsequent drift events.
  • HIPAA § 164.312(c)(1) — integrity controls for ePHI systems. File-hash drift on configs that govern access (sshd, sudoers, PAM) directly applies.
