Engineering
How we approached SEO for a B2B Linux security tool (without buying any backlinks)
· ~9 min read · Jamie, Founder, Blackglass
We shipped the SEO surface for blackglasssec.com in two passes over a week. Bucket A was the obvious P0 stuff every B2B site needs: canonicals, structured data, OG images, a real sitemap. Bucket B was the engineering scaffolding that keeps it from rotting: unit tests for the schema factories, a smoke test that greps every marketing page for the contract, and a strategy doc so future contributors don't re-learn the same lessons.
The headline was learning that Next.js's metadata system doesn't deeply merge page-level openGraph into the layout's — page wins, layout dies. We had pages with rich titles and descriptions but no og:image for two days because the layout image was being silently dropped. The fix was a one-liner per page (images: defaultOgImages()) but the test that catches it next time is the more interesting artefact.
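In shallow-merge terms, the failure mode looks like this. The sketch below uses our own toy types and helper, not Next.js internals — it just reproduces the "page object wins as a unit" behaviour:

```typescript
// Toy model of shallow metadata resolution: top-level keys from the page
// replace the layout's wholesale; nothing inside openGraph is deep-merged.
type Meta = { title?: string; openGraph?: { title?: string; images?: string[] } };

function resolveMetadata(layout: Meta, page: Meta): Meta {
  return { ...layout, ...page }; // shallow merge: top-level keys only
}

const layoutMeta: Meta = {
  openGraph: { title: "Blackglass", images: ["/og-default.png"] },
};

const pageMeta: Meta = {
  openGraph: { title: "Pricing" }, // no images -> the layout's image vanishes
};

const resolved = resolveMetadata(layoutMeta, pageMeta);
// resolved.openGraph is { title: "Pricing" } with no images key at all
console.log(resolved.openGraph);
```

This is why every page that overrides openGraph has to re-state its images, and why the fix was mechanical once we saw it.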
Why structured data first
For a B2B Linux tool, the audience that finds you through search is overwhelmingly people researching a category — “Linux drift detection”, “file integrity monitoring tools”, “sshd_config audit”. They’re going to read a SERP page and decide whether to click through based on three things: title, description, and any rich snippets Google chooses to render (FAQ accordion, breadcrumb, price band, review stars).
Most of those rich snippets are gated on schema.org structured data. So before optimising copy or chasing keywords, we shipped:
- WebSite + Organization at the layout level so the brand has a Knowledge Graph entity.
- SoftwareApplication on /product so we're eligible for the security-tool carousel.
- Product + Offer per pricing tier on /pricing so prices can render in SERPs without scraping.
- FAQPage on the same /pricing page so the existing 15-question FAQ has a shot at the accordion-style snippet.
- HowTo on the practical guide at /guides/how-to-detect-unauthorized-linux-config-changes, eligible for the steps carousel.
- BreadcrumbList on every page so SERPs show the trail instead of just the URL.
Every emitter is a typed factory in src/lib/seo.ts with unit tests against the required fields. We don't hand-roll JSON-LD blocks anywhere; they go through a single <JsonLd /> wrapper component that handles suppressHydrationWarning and gives each block a stable id for DOM debugging.
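As a sketch of what one such factory looks like — names and field choices here are illustrative, following schema.org's FAQPage shape rather than our exact code:

```typescript
// Hypothetical typed factory: takes plain Q&A pairs, emits the JSON-LD
// object whose required fields (@type, mainEntity, acceptedAnswer) the
// unit tests assert on.
type FaqItem = { question: string; answer: string };

function faqPageJsonLd(items: FaqItem[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: items.map((item) => ({
      "@type": "Question",
      name: item.question,
      acceptedAnswer: { "@type": "Answer", text: item.answer },
    })),
  };
}
```

The factory returns a plain object; serialisation and script-tag emission stay in the wrapper component, so the tests never have to parse HTML.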
Per-route lastmod, not a single build timestamp
The default Next.js sitemap example uses new Date() for every URL. Every URL gets the same <lastmod>, the freshness signal collapses, and a typo fix in a utility makes the entire site look stale-then-fresh.
We resolve lastmod per route via git log -1 --format=%cI -- <file> with a file-mtime fallback for ephemeral build environments. It’s ~30 lines of code in src/app/sitemap.ts and Google sees per-page freshness exactly as we intend. For a marketing site that has a high-touch /changelog and otherwise-stable legal pages, that distinction matters.
Dynamic OG images via a single edge endpoint
Next.js supports both per-route opengraph-image.tsx and a single endpoint approach. We chose the endpoint at /api/og?title=…&subtitle=… for three reasons:
- Brand styling lives in one file. Future rebrand is one PR, not 30.
- Pages opt in by passing two strings — no extra file per route.
- The CDN cache key is the URL, so any title change naturally invalidates the cache. No revalidation rituals.
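The page-side opt-in can be as small as a URL builder — a hypothetical helper here, since the /api/og handler itself (Next's ImageResponse) is out of scope for this post:

```typescript
// Hypothetical helper a page calls from its metadata: the query string is
// the whole contract, and because the CDN caches by URL, editing a title
// automatically busts the cached image.
function ogImageUrl(title: string, subtitle?: string): string {
  const params = new URLSearchParams({ title });
  if (subtitle) params.set("subtitle", subtitle);
  return `/api/og?${params.toString()}`;
}
```

A page then sets images: [ogImageUrl("Pricing", "Flat tiers, no per-host math")] and never thinks about invalidation again.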
The static /og-default.pngremains as a fallback for pages that don’t opt in (legal pages, redirect targets). The four flagship pages — home, /pricing, /product, the how-to guide — all use the dynamic version because their share-card title is the actual selling line.
The test that catches the regression we shipped
Bucket B added tests/unit/marketing-page-seo.test.ts: a smoke test that walks every page.tsx under src/app/(marketing)/ and asserts the contract per page. Canonical declared. OG image included if the page overrides openGraph. <h1> rendered. No raw <script type="application/ld+json"> tags (must use the wrapper component).
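The per-page contract reduces to a handful of predicates over the page's source text. A simplified, testable sketch — the regexes here are deliberately cruder than the real test's:

```typescript
// Sketch of the per-page SEO contract as pure predicates over source text,
// so each check is trivially unit-testable without rendering anything.
function checkPageSource(src: string): string[] {
  const problems: string[] = [];
  if (!/alternates:\s*{[^}]*canonical/.test(src)) problems.push("no canonical");
  if (/openGraph/.test(src) && !/images/.test(src))
    problems.push("openGraph override without images"); // the day-one regression
  if (!/<h1[\s>]/.test(src)) problems.push("no <h1>");
  if (/application\/ld\+json/.test(src))
    problems.push("raw JSON-LD script tag (use the wrapper component)");
  return problems;
}
```

The real test wraps this in a directory walk over src/app/(marketing)/ and reports the failing file path alongside each problem string.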
It catches the regression I shipped on day one, plus three others I hadn't noticed: the four /tools/* pages had <h2> for the page title with no <h1> at all, and the auth surfaces (/sign-in, /sign-up) had no noindex directive — they were eligible to compete with /recover for the "blackglass sign in" query.
The interesting bit is the per-route exceptions: home, demo subpages, /sign-in/[[...sign-in]], /sign-up/[[...sign-up]], /passphrase-recovery, and /pricing/success each opt out of one or more checks with a documented one-line reason. The reason surfaces in test failures so a future contributor either accepts the opt-out or challenges it. Documenting the "why" in code is one of the small habits that compounds.
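One way such an opt-out table can surface its reasons — the routes are from our test, but the shape and the reason strings here are hypothetical:

```typescript
// Hypothetical opt-out table: route -> skipped checks plus the mandatory
// one-line reason. The reason rides along in the result so the assertion
// message explains itself when a future contributor trips it.
const exceptions: Record<string, { skip: string[]; reason: string }> = {
  "/sign-in": { skip: ["canonical"], reason: "auth surface, noindexed; canonical is noise" },
  "/pricing/success": { skip: ["h1"], reason: "post-checkout interstitial, never indexed" },
};

function shouldRun(route: string, check: string): { run: boolean; reason?: string } {
  const entry = exceptions[route];
  if (entry && entry.skip.includes(check)) {
    return { run: false, reason: entry.reason };
  }
  return { run: true };
}
```

Because the reason is a required field, an undocumented opt-out is a type error, not a review comment.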
What we deliberately didn’t do
For an early-stage B2B tool, the temptation is to chase tactics that look like progress but don't actually move the needle:
- No backlink buying or PBNs. The audience is security engineers; if they smell a fake DR boost they will judge the product by it.
- No AI-generated SEO content. Engineers recognise it instantly and bounce. We'd rather ship four good pages than forty mediocre ones.
- No keyword-stuffed comparison pages. The /vs pages are honest about where competitors fit and where Blackglass does. Most prospects end up keeping their CNAPP and adding us, not switching.
- No console / app routes in the sitemap. The authenticated app is noindex at the route group level. Indexing it would just teach Google about UI it can't see.
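In Next.js that's a single metadata export in the route group's layout; a sketch, with the file path assumed rather than quoted from our repo:

```typescript
// src/app/(app)/layout.tsx (path hypothetical): a robots directive set once
// at the route-group layout covers every authenticated route beneath it.
import type { Metadata } from "next";

export const metadata: Metadata = {
  robots: { index: false, follow: false },
};
```

One export, zero per-page bookkeeping, and the smoke test's noindex check on auth surfaces passes for free.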
What’s next
The marketing-side follow-up — Search Console verification, sitemap submission, manual URL inspection requests, social-cache flushes — lives on our private follow-up canvas because it requires our Google account. The engineering side is mostly done; future work is opportunistic — drafting a comparison page when a real prospect asks for one, publishing a use-case page when we hit a category we can speak to with credibility.
If you found this useful and want to see the actual code, drop us a line — we'll happily walk you through the patterns. The whole audit is also documented in docs/seo.md in the repo (semi-public; ask if you'd like a pointer).
Try the product the post is about
Blackglass watches Linux fleets for configuration drift, exports auditor-grade evidence, and includes an optional cloud-waste cleanup add-on. 14-day trial, no card.