Non-human identities (NHIs) have exploded in number and importance. As defined by OWASP, “NHIs are used to provide authorization to software entities such as applications, APIs, bots, and automated systems to access secured resources”. NHIs come in many forms, including service accounts, passkeys, access tokens, SSH keys, API keys, and roles, and their credentials often look like a string of garbled text, such as ‘8dbf5d2a37c4178b4b03e6c49ae3f9e7’.
Cloud-native applications include numerous NHIs that enable communication and govern access in distributed microservices applications. In fact, NHIs often vastly outnumber human users (by roughly 17:1 in typical organizations), yet they rarely receive the same level of security scrutiny. This gap has made NHIs one of the most overlooked attack vectors in today’s systems.
All of these NHIs act as independent workloads, operating without direct human intervention, each with its own identity and its own need to access data and resources.
In response, OWASP has published the Non-Human Identities (NHI) Top 10, highlighting the most critical security risks related to NHIs. These include secret leakage, overprivileged NHIs, insecure authentication, and improper offboarding, each of which can lead to serious breaches if left unaddressed. Attackers know that organizations struggle to manage machine identities effectively, making NHIs a prime target for credential theft, privilege escalation, and lateral movement within compromised environments.
In this article, we’ll review each of these top 10 NHI threats, explain their real-world implications, and discuss how you can mitigate them. It’s an urgent challenge: only 15% of organizations in the NHI Management Group survey feel confident in their ability to secure NHIs. We’ll also demonstrate how Cerbos can help address some of these risks by enforcing fine-grained, contextual authorization rules for NHIs.
Improper offboarding refers to failing to deactivate or remove NHIs that are no longer needed. When a service account, API key, or other machine credential outlives its purpose (say, a service is decommissioned, credentials are rotated, or an engineer leaves the team), it should be retired. If the credential isn’t properly revoked, it becomes a lingering security risk. Unused or “orphaned” identities with valid credentials can be discovered and exploited by attackers to gain unauthorized access to systems.
Let’s imagine a cloud automation script has an access key that never got deleted after the project ended. If an attacker finds that key (through code repos, logs, or an old CI/CD pipeline), they now have a valid login to your environment. This is not hypothetical: such orphaned credentials have led to data breaches and unexpected cloud bills (e.g., attackers using forgotten cloud keys to spin up mining servers). The longer an unused NHI lingers, the more time attackers have to stumble upon it.
Asset inventory and lifecycle management is the first step. You should have processes to track all issued service accounts and keys, and routinely audit which are in use. When an NHI is no longer needed, revoke its access immediately.
Here, using a centralized authorization system like Cerbos can help by giving you a single point to manage permissions. While Cerbos itself doesn’t auto-delete credentials, it makes it easier to update or disable access for a given identity across your stack. For example, if a certain service account should be decommissioned, you can update Cerbos policies to deny any requests from that identity. In combination with cloud IAM and secret management, this ensures that even if the credential isn’t fully removed yet, it can’t be used to do anything. The key is to avoid leaving any “ghost” NHIs with open access.
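For instance, here is a minimal sketch of a Cerbos principal policy that locks out a decommissioned identity (the account name is hypothetical):

```yaml
apiVersion: api.cerbos.dev/v1
principalPolicy:
  # Hypothetical ID of the retired automation account
  principal: cloud-automation-script
  version: "default"
  rules:
    # Deny every action on every resource for this identity
    - resource: "*"
      actions:
        - action: "*"
          effect: EFFECT_DENY
```

Because principal policies are evaluated ahead of resource policies, this deny wins even if a broader rule would otherwise allow the request.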
Secret leakage is the accidental exposure of sensitive NHI credentials (API keys, tokens, certificates, etc.) in places they shouldn’t be. This can happen at any stage of software development and deployment. For instance, a developer might accidentally commit an API key to a public GitHub repo, include a secret in client-side code, or log a sensitive token to an internal monitoring system. Once a secret “leaks” outside its intended secure store, it may be harvested by malicious actors. OWASP notes that leaked NHI secrets, whether hard-coded in source, left in config files, or pasted in chat, become susceptible to exposure and misuse.
The implications of secret leakage are severe. Any leaked credential is effectively an open door for attackers, who can use it to impersonate the service or account it belongs to. For example, if a database password with administrative access or a cloud API key gets posted publicly, an attacker could use it to directly access the database or cloud resources. A notorious case involved attackers scanning GitHub for AWS secrets and then spinning up expensive EC2 instances on the victim’s account within minutes. In short, a leaked machine secret can lead to data breaches, system compromise, and financial loss very quickly.
Preventing secret leakage requires strict secrets management practices. This includes using secure secret storage (vaults), avoiding hard-coding secrets, scanning code repos and configs for keys, and rotating credentials regularly.
From an authorization perspective, Cerbos can’t stop a secret from leaking, but it can limit the blast radius if one does. By enforcing the principle of least privilege, even if an attacker obtains a certain service’s token, Cerbos policies ensure that the token can only perform the actions that the service was explicitly authorized to do - nothing more. Additionally, Cerbos’s audit logs of access decisions might help detect abnormal usage patterns if a leaked secret is being abused (e.g., a sudden spike in requests from an identity at odd hours). Still, the primary strategy here is to keep secrets secret: no amount of downstream authorization control can fully save you if an attacker is authentically posing as a valid service. As OWASP recommends, treat secrets like live ammunition: minimize their presence, use short-lived tokens, and automate the detection of any exposures.
Vulnerable third-party NHI refers to the risks introduced by third-party software or services that have access to your organization’s internal systems. These third-party services may use NHIs created internally within your organization, in which case they are easier to manage. But in some cases, third-party services create their own NHIs, which gives your security team less control over their use and management.
Modern development often involves a tapestry of third-party integrations: CI/CD platforms, cloud IDE extensions, SaaS connectors, etc. These tools frequently need access to your systems via API keys or tokens (for example, a CI service deploying to your cloud, or a browser plugin with access to your code repo). If one of those third parties is compromised, say via a supply chain attack on an IDE plugin or a breach of a SaaS provider, the attacker can abuse the third party’s credentials or the permissions granted to it.
In practice, this threat means that your security is only as strong as the security of the external services you trust with your NHIs. Here’s a real example: the Codecov bash uploader incident, where a popular CI tool was compromised to exfiltrate environment variables from CI pipelines. That resulted in countless secrets (tokens, keys) from various organizations being stolen by attackers. Similarly, a malicious VS Code extension could steal cloud tokens that a developer’s IDE uses for integrations. The fallout is basically a supply chain breach leading to unauthorized access into your systems.
Mitigation involves being very selective and careful with third-party software and the scope of access you give it. Conduct due diligence on vendors, monitor announcements for vulnerabilities in tools you use, and apply the principle of least privilege to third-party access. For instance, if you integrate a SaaS deployment tool, create a dedicated service account for it with only the minimal permissions required; don’t just hand it a full admin API key.
This is where Cerbos can assist. You can create specific policies for third-party services that strictly govern what they can do. If an external build service needs to call your internal API, you might give it a token that Cerbos will only allow to call certain endpoints (and nothing else), as in the sketch below. By isolating and constraining third-party NHIs through policy, you reduce the impact if those credentials are misused. Continual monitoring is also important: with Cerbos’s centralized logging, you could potentially spot a third-party identity doing something out of the ordinary and investigate early.
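As a sketch, suppose the build service authenticates with the hypothetical role ci-deployer and your internal API is modeled as a deployment resource; a policy like the following bounds what it can ever do:

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "deployment"
  rules:
    # The third-party CI identity may only trigger deployments
    # and check their status
    - actions: ["trigger", "status"]
      effect: EFFECT_ALLOW
      roles: ["ci-deployer"]
```

Since Cerbos denies anything not explicitly allowed, there is no need to enumerate forbidden actions; the third party’s reach ends at the two actions above.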
In the NHI context, insecure authentication means using outdated or weak authentication mechanisms for non-human identities. While human users have broadly moved towards modern auth (OAuth 2.0, OIDC, SAML, MFA, etc.), machine identities sometimes lag behind. OWASP highlights cases like using deprecated OAuth flows, non-standard homemade auth protocols, long-lived static credentials, or “app-specific” passwords that bypass multi-factor checks.
These approaches have known weaknesses. For example, older OAuth 1.0 flows or poorly implemented token schemes can be intercepted or forged, and static app passwords can’t be easily revoked or tied to a specific context.
If your microservices or APIs authenticate each other insecurely, it opens the door to attackers. For instance, consider a service that accepts a simple API key over HTTPS for authentication: an attacker who intercepts or guesses that key can impersonate the service. Or consider a system using Basic Auth, whose base64-encoded credentials could be recovered from logs or memory. In one scenario, a company might still be using a legacy token scheme on internal APIs that doesn’t enforce expiration or usage restrictions; if an attacker compromises one service, they could generate tokens to access others due to weak auth validation.
The solution is straightforward: adopt well-vetted, modern authentication standards for service-to-service communication. Protocols like OAuth 2.1 and OIDC exist for a reason: they offer tokenization, scoping, expiration, and a rich security model that’s much harder to exploit than legacy patterns. Every service and API should require strong identity proof from other services: signed JWTs with audience and issuer validation, mutual TLS where appropriate, and automatic secrets rotation. For example, use OIDC tokens for internal service auth instead of static passwords.
Cerbos fits in by consuming the outputs of these authentication steps, e.g., validating a JWT and extracting the service’s identity and claims. While Cerbos itself is focused on authorization (deciding what an already-authenticated identity can do), it assumes you’ve done the authentication right. By ensuring that only properly authenticated identities (with robust tokens or certs) even reach the authorization layer, you dramatically cut down the risk of impersonation. Use strong authentication standards, then let Cerbos and your policies handle the authorization.
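To illustrate, Cerbos can accept a verified JWT as auxiliary data and reference its claims in policy conditions. A minimal sketch, where the issuer URL and the scopes claim are assumptions for illustration:

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "orders_api"
  rules:
    - actions: ["read"]
      effect: EFFECT_ALLOW
      roles: ["service"]
      condition:
        match:
          all:
            of:
              # Only accept tokens minted by our own identity provider
              - expr: request.aux_data.jwt.iss == "https://idp.example.org"
              # Require an explicit scope for this API (illustrative claim)
              - expr: '"orders:read" in request.aux_data.jwt.scopes'
```

When Cerbos is configured with the issuer’s JWKS, it verifies the token’s signature before exposing the claims to policies, so forged or tampered tokens never reach rule evaluation.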
Overprivileged NHI is the classic issue of granting a non-human identity more permissions than it actually requires. This often happens out of convenience or oversight - for example, assigning a service account a broad role like “Administrator” or reusing one service’s credential for multiple purposes. The danger is that if that service is compromised (or its credentials are stolen), an attacker can now perform all sorts of actions, not just the limited ones the service needs. OWASP notes that attackers love to exploit excessive permissions in machine accounts, just as they do with human accounts.
Common examples include a microservice that only needs database read access but has write/delete rights, or an API key that grants access to an entire account rather than a single endpoint.
A breach in one small component could escalate to a full environment takeover because the token from that component was essentially a skeleton key. For instance, if a CI/CD server’s API token has overly broad cloud permissions, compromising the CI server puts your entire cloud infrastructure at risk. Least privilege is a well-known principle, but it’s hard to enforce without the right tools - especially as the number of NHIs grows, keeping track of who has access to what can get unwieldy.
This is a domain where Cerbos directly helps mitigate the risk. Cerbos allows you to implement fine-grained, least-privilege access policies for each service. Instead of giving a service account carte blanche, you define exactly which actions it can perform on which resources based on its SPIFFE identifier. For example, you might have an internal payments service that should only be allowed to read and write payment records, nothing else. You can codify that in a Cerbos policy and then have each request be evaluated for access before further handling:
```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "payment_service"
  rules:
    - actions: ["read", "write"]
      effect: EFFECT_ALLOW
      # Roles are required in resource policy rules; the real
      # restriction here comes from the SPIFFE ID conditions below
      roles: ["*"]
      condition:
        match:
          all:
            of:
              - expr: spiffeID(P.id).isMemberOf(spiffeTrustDomain("spiffe://example.org"))
              - expr: spiffeMatchExact(spiffeID("spiffe://example.org/ns/privileged/sa/payments")).matchesID(spiffeID(P.id))
```
The above policy ensures that only the service with the specific SPIFFE ID spiffe://example.org/ns/privileged/sa/payments is allowed to perform "read" or "write" actions on the payment_service resource.
In effect, even if someone somehow obtained that service’s identity, they couldn’t use it to do anything outside of the payment service’s scope. Cerbos evaluates every request from that principal and will deny any action that isn’t permitted by the rules. This granular control dramatically limits the blast radius of a compromised NHI.
Operationally, to avoid overprivileged NHIs you should also regularly audit what permissions each service account has (IAM, database grants, etc.) and tighten them to the minimum necessary. But having Cerbos as a policy layer gives you a safety net and an easier way to manage those permissions in one place. By adopting a default-deny stance and explicitly opening only needed paths, you force attackers to work much harder - even if they get a foothold, they hit a wall when trying to expand their access.
Insecure cloud deployment configurations refer to weaknesses in how CI/CD pipelines and deployment tools authenticate to cloud services. Today, cloud deployments often involve automation: your CI server or deployment tool needs credentials to push artifacts, run infrastructure as code, or call cloud APIs. The OWASP NHI Top 10 calls out that if these deployments use static, long-lived credentials (like hardcoded cloud API keys) or misconfigure identity tokens, it creates a vulnerability. Essentially, it’s an extension of secret management and auth, specific to the deployment process.
Let’s think about the following scenario. A Jenkins server has an AWS IAM user’s access key saved to deploy application code. If that key leaks via a CI log or a misconfigured repository, an attacker now has the keys to your production kingdom.
Another example is an older custom script that stores a username/password for deployment; if that script is in a repo, anyone with repo access might get the credentials. On the other hand, newer cloud-native CI/CD solutions offer OpenID Connect (OIDC) integration (like GitHub Actions does with cloud providers), which lets the CI pipeline obtain short-lived tokens instead of using static secrets. But if those OIDC tokens aren’t properly validated (e.g., if the cloud side accepts any token without verifying the issuer or audience), an attacker might forge a token to impersonate the CI workflow.
The implications are serious because CI/CD systems are often highly privileged (they deploy to production). A compromise there can directly lead to production compromise. For example, there have been incidents where attackers tampered with CI pipelines to inject malicious code or exfiltrate secrets as builds happen.
To mitigate this, treat your CI/CD system like a critical identity. Use ephemeral credentials for deployments whenever possible. If your tooling and cloud provider support OIDC federation (as many do now), prefer it over static API keys: OIDC tokens are short-lived and specific in scope. Also enforce proper token validation: the cloud service should verify that the token from your CI service is intended for the right audience and has not expired or been reused. Keep your build agents and pipeline code secure and updated, since they are part of your attack surface.
Cerbos can contribute by controlling what actions your CI service account can perform in your application. For instance, if the CI is calling an internal API to run database migrations or seed data, you can have Cerbos policies that allow the CI’s identity to do exactly that and nothing more. This way, even if an attacker gets a hold of the CI’s token for an internal API, they can’t, say, delete data or escalate privileges unless the policy allows it. Moreover, Cerbos policies could incorporate conditions on deployment environments or token claims if needed (advanced use), adding another layer of checks.
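For instance, here is a minimal sketch assuming the pipeline authenticates with the hypothetical role ci-pipeline and migrations are modeled as a migration resource:

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "migration"
  rules:
    # The CI identity may apply and verify migrations, nothing else
    - actions: ["apply", "verify"]
      effect: EFFECT_ALLOW
      roles: ["ci-pipeline"]
```

Destructive actions simply have no allow rule, so Cerbos’s default-deny behavior rejects them even if the CI token is stolen.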
The bottom line is that your deployment infrastructure’s credentials should be as short-lived and tightly scoped as possible. And any access those credentials have should be guarded by strong authorization rules.
Long-lived secrets are credentials that remain valid for an extended period (often indefinitely) without rotation. These could be API keys, service account tokens, encryption keys, or even session cookies that don’t expire. OWASP highlights this as a risk because if any such secret is compromised, an attacker can continue to use it at their leisure, possibly for months or years, without interruption. Essentially, a long-lived secret is a high-value prize for attackers.
We’ve seen this problem play out in many breaches. For example, an organization might embed a database credentials file in an artifact; if that credential is never changed, a former contractor or a hacker who finds it years later could still get into the database. In cloud environments, there have been findings like “this AWS API key hasn’t been rotated in 5 years”. Such keys, if leaked, let attackers operate under the radar because the compromise might not be detected for a long time. Long-lived credentials also often slip through the cracks because people forget they exist.
The best mitigation is to eliminate long-lived secrets by design. Use short-lived credentials wherever possible. This means embracing things like OAuth access tokens with short expiration, cloud IAM roles that issue temporary credentials (e.g., AWS STS tokens), and certificates with expiration dates. If a secret must be long-lived (e.g., a symmetric encryption key), put rigorous controls around it (vault storage, limited usage) and consider rotating it periodically anyway. Automation can help here: set up jobs to rotate keys and restart services with new credentials seamlessly.
Cerbos naturally works well with ephemeral credentials. You can set up your system such that when a service wants to perform an action, it first obtains a fresh token (say a JWT with a 5-minute lifespan) identifying itself. Cerbos will then use the attributes of that token (like the service’s identity and claims) to make authorization decisions. If that token expires, any further requests will be denied until a new valid token is presented. Thus, combining short-lived tokens with Cerbos policies means even if an attacker steals a token, it has a very limited window of usefulness.
Additionally, Cerbos could enforce certain time-bound rules. For instance, you might include a token’s issued-at time as a principal attribute and write a policy rule that rejects tokens older than a certain age, to cover scenarios where tokens don’t carry an expiration.
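A minimal sketch of such a rule, assuming the issued-at time is passed as a principal attribute named iat containing an RFC 3339 timestamp string (the attribute name and the five-minute window are illustrative):

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "internal_api"
  rules:
    - actions: ["*"]
      effect: EFFECT_ALLOW
      roles: ["service"]
      condition:
        match:
          # Reject credentials minted more than five minutes ago
          expr: now() - timestamp(P.attr.iat) < duration("300s")
```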
Environment isolation is all about keeping credentials and identities separate across different deployment environments (dev, test, staging, production), and the threat that arises when this isolation is broken. Often, for convenience, teams reuse the same NHI (the same API keys, accounts, or certificates) in multiple environments. OWASP points out that using one identity across environments, especially mixing lower-security environments with production, can lead to major security issues.
The core problem is that non-production environments typically don’t have the same strict security controls or monitoring as production, so they’re easier to compromise. If an attacker breaches a dev or test environment and the NHIs are the same, they now effectively have a path into prod.
Let’s imagine that your testing environment has a copy of production data for QA purposes. It uses the same database credentials as prod, perhaps for the simplicity of sharing one config. An attacker manages to exploit a vulnerability on the test server (maybe because it’s less monitored, or an engineer left a debug interface open). Now the attacker exfiltrates the shared database credentials from test and walks right into the production database. This defeats the whole purpose of having a separate environment. Similarly, developers might hardcode a single API key in all configs just to avoid managing multiple keys; if any environment is breached, that key is blown.
To counter this, enforce strict separation of identities per environment. Development, QA, staging, and production should all have distinct sets of credentials and accounts. Even if the same microservice runs in dev and prod, it should use completely different service accounts in each. This way, a key compromise in one environment doesn’t immediately compromise another. Also apply network segmentation and access controls between environments (for example, prod should never accept connections directly from a dev environment machine using a dev identity). Monitoring and intrusion detection should treat any cross-environment access as suspicious.
Cerbos can help maintain environment isolation by using context in its authorization decisions. For instance, you could include an environment attribute in the principal or resource information in each request. Then you might write a policy rule that says a service from env: dev cannot access a resource labeled env: prod. If somehow a dev service identity token is presented to a production API behind Cerbos, the policy would deny it because the environments don’t match.
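A minimal sketch of this idea, assuming both the principal and the resource carry an environment attribute (the attribute and resource names are illustrative):

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "customer_data"
  rules:
    - actions: ["read", "write"]
      effect: EFFECT_ALLOW
      roles: ["service"]
      condition:
        match:
          # Grant access only when the caller's environment
          # matches the resource's environment
          expr: P.attr.environment == R.attr.environment
```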
Another approach is deploying separate Cerbos policy instances per environment with environment-specific policy sets, ensuring that even if someone tried to use a token from one environment in another, it wouldn’t be recognized. The Cerbos approach basically ensures that even if humans make mistakes by reusing credentials, the authorization layer will catch misuse across environment boundaries. Of course, the best case is not to reuse the credentials at all, but defense in depth via policy adds an extra safety net.
NHI reuse is the practice of using the same non-human identity across different applications, services, or components. It’s closely related to the previous risk (environment reuse), but here the focus is on reusing one identity in multiple places that may have different security profiles or purposes. OWASP warns that if one service using a shared identity is compromised, the attacker can leverage that identity to access all other services that trust it. It also makes incident response and attribution difficult: if the same API key is used in five systems and it’s abused, which system was the weak link that got breached? It’s hard to tell.
This issue often arises from convenience or a misunderstanding of the risks. For example, a team might create one “default” service account and then use that across dozens of microservices because it’s easier than maintaining separate accounts. Or they might copy-paste the same API key into multiple apps that call each other. We’ve seen breaches where an API key intended for an internal service was unintentionally left in a frontend app as well. An attacker pulling that key from the app’s code could then call all internal services that trusted that key.
The fix is clear: unique identities for each service or component. It might seem like overhead, but tools exist to manage identity issuance (like SPIFFE, Kubernetes service accounts, cloud IAM roles, etc.) so that you don’t have to create dozens of credentials by hand. Each microservice or job should have its own credentials, limiting the scope of what’s impacted if that credential is compromised. It also means you can rotate or revoke one identity without affecting others. If something fishy is happening with Service X’s account, you know it’s isolated to Service X (or that account specifically), which simplifies forensic analysis.
Using Cerbos naturally encourages good practices in this regard. With Cerbos, you can create service-specific policies. Instead of one catch-all policy for generic “internal services” identities, you can write policies tailored to each service’s role. For instance, you might have one policy for “analytics-service” and another for “payment-service,” each expecting a different principal identity. This goes hand-in-hand with giving each service a distinct identity to present when calling Cerbos. Consider the illustration below, which contrasts having a single policy for all internal services versus distinct policies per service.
On the left, if all services share an “internal service” identity with read-only access, you’ve not only limited them, but also tied them together – a breach of one could potentially abuse that common identity everywhere. On the right, each service (A, B, C) has its own policy and tailored permissions (one read-only, one read-write, one with special ops). This separation means Service C’s credentials won’t work to impersonate Service A or B, and each can be managed independently.
In practice, implementing this might involve using a system like Kubernetes’ service accounts or cloud IAM roles to issue unique credentials, and then configuring Cerbos to recognize those identities in its principal data. The slight upfront effort of identity management pays off massively in containment. Cerbos’s centralized view also makes it easier to audit which identities exist and what access they have, so you’re less likely to unknowingly reuse one.
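To make the contrast concrete, here is a minimal sketch with per-service roles (the resource and role names are illustrative): the analytics service is read-only, while the payments service can also write.

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "transaction_records"
  rules:
    # Analytics service: read-only
    - actions: ["read"]
      effect: EFFECT_ALLOW
      roles: ["analytics-service"]
    # Payments service: read and write
    - actions: ["read", "write"]
      effect: EFFECT_ALLOW
      roles: ["payment-service"]
```

Because each service presents its own identity, you can revoke or tighten one service’s access without touching the others.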
The motto here is: one identity, one service. Keep them unique, keep them isolated.
Human use of NHI occurs when a person (a human operator) uses credentials or identities that are intended only for machines. This often happens during development or troubleshooting. For example, a developer might take a service’s API key and manually invoke an API “just to get something done,” or an engineer logs into a system with a service account because their own account doesn’t have access. OWASP flags this as a top 10 issue because it blurs accountability and can bypass safety controls.
When humans use machine identities, several problems arise: the human may gain higher privilege than their normal account would allow, actions performed will be attributed to a service (making auditing and forensics harder), and it sidesteps measures like MFA or approval workflows that would normally apply to a human user.
Consider this scenario: A database admin finds it easier to use the database’s “app user” account to run queries directly, instead of going through their read-only personal account. If that app user has broad rights, the admin now effectively has those rights with none of the usual checks (maybe the app user account isn’t tied to single sign-on or MFA). If this credential were abused or leaked, it would appear as if the application itself did something, confusing incident response. Attackers also love this scenario: if they compromise a human workstation and find stored service credentials, they can use those to move laterally, appearing as a normal service in the environment and avoiding detection for longer.
To combat this, organizations should have strict policies forbidding humans from logging in with NHI credentials. Each user should use their own identity, and if they need elevated access for a task, it should be granted through a proper process (just-in-time access with auditing). Tools like break-glass accounts or privileged access management can provide emergency access without resorting to shared service creds. Additionally, any use of a service credential should be tightly controlled and logged. If a service account must be used interactively (e.g., for a deployment script), it should be clear who triggered it, and that access should be ephemeral.
Cerbos can assist by differentiating between human and non-human principals in its policies. For example, you could tag requests coming from users vs. those from services (perhaps via a JWT claim or a principal attribute like principal.roles or principal.attr.service_type). Then, you can write rules such as “service identities can invoke X API endpoint, but cannot perform interactive actions Y and Z” or vice versa. If someone attempts to use a service token to perform an action that only a human admin should do, Cerbos would deny it.
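A minimal sketch, assuming a principal attribute service_type distinguishes machines from people (the attribute, role, and action names are illustrative):

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "admin_console"
  rules:
    # Interactive administrative actions are reserved for humans
    - actions: ["approve_refund", "export_audit_data"]
      effect: EFFECT_ALLOW
      roles: ["admin"]
      condition:
        match:
          expr: P.attr.service_type == "human"
    # Machine identities may only call the automated reporting action
    - actions: ["submit_report"]
      effect: EFFECT_ALLOW
      roles: ["service"]
      condition:
        match:
          expr: P.attr.service_type == "machine"
```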
Another angle is using Cerbos’s audit logs: every decision is recorded with the principal ID, so if you see a normally automated service ID being used at an odd time or performing an unusual action, that’s a red flag that someone might be using an NHI illegitimately. In essence, Cerbos helps enforce the principle that machine credentials are for machines, and human access should go through human identities (with all the attendant oversight).
The OWASP NHI Top 10 is a wake-up call for security teams to give machine identities the same attention as human identities. We’ve covered the top 10 risks, from improper offboarding of service accounts to humans masquerading as services. As we’ve seen, many of these issues boil down to visibility and control: knowing what NHIs exist, what they have access to, and being able to tightly govern their use.
This is where the Cerbos authorization solution provides a huge advantage. By centralizing authorization policies for all identities (human and non-human), you gain a unified way to enforce security best practices. Cerbos won’t replace secret vaults or identity providers. Instead, it complements them by making sure that even if an NHI credential exists, it can only be used in the ways you intend. We demonstrated how Cerbos policies mitigate risks like overprivilege, reuse, and environment bleed-through by encoding least privilege and context-aware rules. The result is a system where every service and component has just the access it needs and nothing more, so any anomaly stands out.
The key takeaway is to treat non-human identities as a critical part of your security posture. Map out your NHIs (from CI tokens to microservice accounts) and systematically address each of the OWASP Top 10 areas. Implement offboarding procedures, invest in secret management to prevent leaks, update your authentication methods, and enforce the principle of least privilege everywhere. Use tools like Cerbos to simplify these tasks and maintain that security posture as you scale. By proactively addressing these NHI threats, you’ll significantly harden your application’s defenses and sleep easier knowing that your machine-to-machine interactions are as secure as your human user logins.
Learn more about securing NHIs with Cerbos.
If you’re interested in implementing externalized authorization - try out Cerbos Hub or book a call with a Cerbos engineer to see how our solution can help streamline access control in your applications.
Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team