Attack surface monitoring gets pitched as a continuous security score — a number that goes up when you're more secure and down when you're not. That's a product category, not a capability.
What attack surface monitoring actually does, when implemented as a technical tool rather than a compliance dashboard, is answer a specific question: what can an attacker see and interact with from the outside, and what can they do with it?
Here's what that looks like in practice.
What "attack surface" actually means
Your attack surface is everything an attacker can observe, probe, or interact with without requiring authentication or insider access. For a modern engineering team, this includes:
- Exposed endpoints — APIs, admin interfaces, management ports reachable from the internet
- Infrastructure signals — TLS configuration, HTTP security headers, CORS policies, server version disclosure
- CI/CD exposure — GitHub Actions workflows, public repositories, dependency chains
- Web3 interfaces — blockchain node RPC endpoints, wallet connect interfaces, on-chain contract ABIs
- AI/LLM endpoints — exposed model APIs, inference endpoints, embedding services
The attacker's goal isn't to find something obviously broken. It's to map what exists, understand how components relate to each other, and find the path from a low-level observation (server discloses a specific software version) to a high-level outcome (code execution on that server using a known CVE for that version).
Passive observation vs active probing
There's an important distinction in how attack surface is discovered:
Passive: Everything any internet user could observe by making normal requests. TLS handshake inspection, HTTP response header analysis, DNS resolution, CORS origin probing with a realistic browser origin. No unusual payloads, no authentication testing, no fuzzing. Safe to run against any target you're authorized to assess.
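The interpretation layer on top of those passive observations can be sketched as a pure function over headers that were already captured by ordinary requests. The function name and the specific checks here are illustrative, not any particular tool's API:

```python
# Illustrative sketch of the interpretation step of a passive scan.
# The headers are assumed to have been captured already by normal
# HTTP requests; this function only reads them, it sends nothing.
def interpret_headers(headers: dict) -> list:
    h = {k.lower(): v for k, v in headers.items()}
    findings = []
    # Version disclosure: a Server banner that includes a version number
    if any(ch.isdigit() for ch in h.get("server", "")):
        findings.append(f"version disclosure: Server: {h['server']}")
    # Wildcard CORS origin combined with credentials is a classic misconfiguration
    if (h.get("access-control-allow-origin") == "*"
            and h.get("access-control-allow-credentials", "").lower() == "true"):
        findings.append("CORS: wildcard origin combined with credentials")
    # A missing HSTS header leaves room for TLS-stripping downgrades
    if "strict-transport-security" not in h:
        findings.append("missing Strict-Transport-Security header")
    return findings

print(len(interpret_headers({
    "Server": "Apache/2.4.49",
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": "true",
})))  # 3
```

Note that nothing in this step touches the network: the passive/active boundary is about what you send, not what you conclude.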
Active: Crafted requests to discover non-obvious behavior. Sending alg:none in a JWT to test algorithm validation. Probing for hidden API endpoint paths. Testing specific authentication bypass patterns. Requires explicit permission from the asset owner.
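To make the alg:none probe concrete: constructing the unsigned token is trivial, which is exactly why a validator that accepts it is a critical bug. A minimal sketch (nothing is sent anywhere; the claims are invented for illustration):

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_none_token(claims: dict) -> str:
    # An alg:none token is just header.payload. with an empty signature
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    return f"{header}.{payload}."

token = forge_none_token({"sub": "admin"})
print(token.endswith("."))  # True: the signature segment is empty
```

Sending a token like this to an endpoint you are authorized to test, and observing whether it is accepted, is the active probe described above.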
Most attack surface monitoring operates in the passive zone. The valuable insight isn't just "here are your open ports" — it's the interpretation layer. What do these observations mean when combined?
What it actually finds
Here are findings from real passive scans with identifying details removed:
CORS misconfiguration + JWT algorithm confusion = credential theft path
Individual findings:
- CORS: `Access-Control-Allow-Origin: *` with `Access-Control-Allow-Credentials: true`
- JWT: `/auth/token` endpoint accepts `alg: none` tokens
In isolation, both are medium severity. Combined: an attacker who can place a page on any origin can trigger a credentialed cross-origin request to the API and read the response. (Browsers reject the literal wildcard with credentials, but servers configured this way typically reflect arbitrary origins, which is the practical equivalent.) The resulting session token can be forged (alg:none) to impersonate any user. Full account takeover, no credentials required.
A finding list doesn't surface this. Attack path reasoning does.
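That reasoning step can be approximated with combination rules: a rule fires only when specific findings co-occur, promoting two medium findings to one critical path. The rule table and finding IDs below are invented for illustration, not any scanner's real schema:

```python
# Illustrative attack-path rules: each fires only when all of its
# required findings co-occur in the same scan.
RULES = [
    ({"cors-wildcard-credentials", "jwt-alg-none"},
     "critical: cross-origin token theft plus token forgery, full account takeover"),
    ({"server-apache-2.4.49", "beta-api-exposed"},
     "critical: CVE-2021-41773 path traversal / RCE against the beta API"),
]

def attack_paths(findings: set) -> list:
    # A rule matches when its required findings are a subset of what was observed
    return [path for required, path in RULES if required <= findings]

# Either finding alone produces no path; together they form a critical one.
print(len(attack_paths({"jwt-alg-none"})))                               # 0
print(len(attack_paths({"jwt-alg-none", "cors-wildcard-credentials"})))  # 1
```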
Deprecated API version + version disclosure = known CVE targeting
Individual findings:
- HTTP response: `Server: Apache/2.4.49`
- URL probing: `/v1/` and `/api/beta/` paths both return 200
Apache 2.4.49 has a publicly known path traversal and RCE vulnerability (CVE-2021-41773). The beta API version is the one still running it. The attack path: send the CVE payload to /api/beta/cgi-bin/... for arbitrary code execution.
This is why version disclosure matters beyond compliance. It's not about the information being public — it's that it tells an attacker exactly which CVEs to try, and on which endpoint.
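The lookup an attacker performs here is mechanical, which is the point. A sketch, with a deliberately tiny table standing in for a real CVE feed:

```python
# Illustrative banner-to-CVE lookup. The table is a stand-in for a
# real vulnerability feed, not an exhaustive list.
KNOWN_CVES = {
    ("apache", "2.4.49"): ["CVE-2021-41773"],  # path traversal / RCE
    ("apache", "2.4.50"): ["CVE-2021-42013"],  # incomplete-fix follow-up
}

def cves_for_banner(banner: str) -> list:
    # "Apache/2.4.49" -> ("apache", "2.4.49")
    name, _, version = banner.partition("/")
    return KNOWN_CVES.get((name.lower(), version), [])

print(cves_for_banner("Apache/2.4.49"))  # ['CVE-2021-41773']
```

Suppressing the version in the banner does not fix the vulnerability, but it removes the free targeting data.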
GitHub Actions secrets exposure + self-hosted runner = lateral movement
Individual findings (passive + light active):
- Public repository with workflow files referencing `secrets.PROD_DATABASE_URL`
- Self-hosted runners registered at organization level
The workflow doesn't directly expose the secret. But a fork PR can trigger a pull_request_target workflow on a self-hosted runner that has persistent state and network access to internal systems. One malicious fork PR, one organization-level compromise.
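The dangerous combination is mechanical to detect. A rough sketch using plain text matching rather than a full YAML parser; the embedded workflow is an invented example of the risky pattern:

```python
# Illustrative check for the risky combination of a pull_request_target
# trigger and a self-hosted runner. Real tooling should parse the YAML;
# substring matching keeps this sketch stdlib-only.
def risky_workflow(workflow_text: str) -> bool:
    has_prt = "pull_request_target" in workflow_text
    has_self_hosted = "self-hosted" in workflow_text
    return has_prt and has_self_hosted

workflow = """
on: pull_request_target
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
"""

print(risky_workflow(workflow))  # True: fork PR code runs on a persistent runner
```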
Continuous monitoring vs point-in-time assessments
A pentest is a point-in-time assessment. It finds what's present on a specific day, through deep manual analysis of a defined scope. Extremely valuable. Also expensive and infrequent.
Attack surface monitoring answers a different question: what changed? An endpoint that wasn't exposed last Tuesday and is exposed this Tuesday is worth immediate attention, regardless of when the next scheduled pentest is.
Specifically, continuous monitoring catches:
- New services deployed to the wrong environment — a staging environment accidentally reachable from the internet, or exposed to production traffic
- Configuration drift — a CORS policy that was correct in dev but broken by a prod config push
- CI/CD surface expansion — a new GitHub Actions workflow that grants overly broad OIDC trust
- Dependency changes — a library upgrade that enables a new endpoint or changes authentication behavior
- Accidental exposure — a `.env` file included in a static site build
The pattern: teams ship fast, configuration drifts, attack surface grows. Monitoring is the feedback loop that catches the drift before an attacker does.
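At its core, that feedback loop is a diff between snapshots. A sketch, with invented hostnames:

```python
# Illustrative drift detection: diff two scan snapshots, each a set
# of externally reachable endpoints, and report what changed.
def surface_drift(previous: set, current: set) -> dict:
    return {
        "new": sorted(current - previous),      # needs immediate attention
        "removed": sorted(previous - current),  # confirm it was intentional
    }

last_tuesday = {"api.example.com:443", "www.example.com:443"}
this_tuesday = {"api.example.com:443", "www.example.com:443",
                "staging.example.com:443"}

print(surface_drift(last_tuesday, this_tuesday)["new"])
# ['staging.example.com:443']
```

The hard part in practice is not the diff but keeping the snapshots honest: discovery has to run on a schedule, against everything, not just the assets someone remembered to register.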
What Beacon does
Beacon is the open source attack surface scanner we maintain. It operates in three modes:
- ScanSurface — fully passive, safe by default. Makes only the requests any browser would make.
- ScanDeep — active probing with explicit permission confirmation.
- ScanAuthorized — exploitation-class checks (Kubernetes RBAC audit, GCP IAM analysis) requiring interactive authorization.
The difference from a checklist scanner: Beacon fingerprints the specific versions, frameworks, and interfaces it observes, then feeds that to an AI reasoning layer that connects findings across attack vectors. The output is attack paths, not finding lists.
```shell
# Install
go install github.com/stormbane-security/beacon@latest

# Passive surface scan
beacon scan --target api.yourdomain.com

# Deep scan (requires permission flag)
beacon scan --target api.yourdomain.com --deep --permission-confirmed
```
It's open source under the Apache 2.0 license, and it's on GitHub.
What tooling can't replace
Automated attack surface monitoring is strong at finding misconfigurations, version disclosure, and common vulnerability patterns at scale. It's not designed for:
- Business logic flaws — authentication bypasses that require understanding how your specific application works
- Chained IAM privilege escalation — GCP or AWS privilege escalation paths that require understanding your cloud architecture
- Authorization issues — accessing data you shouldn't by manipulating object IDs
- Social engineering vectors — phishing targeting your CI/CD pipeline or development workflow
For those, you need a human assessor who understands your stack.
The combination that works: automated monitoring for continuous coverage and drift detection, manual assessment for the deep analysis that tooling can't reach.
If you want to understand your attack surface — starting with passive observation and moving to active probing with your authorization — that's what our starter engagement is designed for.