
Mar 17, 2025

GitHub Actions Supply Chain Attack: What Happened and How We Detected It

Frank Lyonnet

Introduction

A critical software supply chain attack – an attack targeting a widely used component in the development pipeline – hit the GitHub Actions community on March 14, 2025. A popular GitHub Action (reusable CI/CD workflow component) called tj-actions/changed-files was compromised, putting thousands of continuous integration pipelines at risk. In this attack, the action’s code was altered to leak secrets (like API keys and tokens) from CI/CD pipelines by printing them in the build logs. Because many of these pipelines are public, the sensitive secrets were exposed for anyone to see, potentially leading to unauthorized access if used maliciously.

Fortunately, the incident was quickly detected in real time by StepSecurity’s Harden-Runner tool, which flagged an unusual network call during a workflow run. The security community and GitHub responded rapidly – the malicious code was identified and neutralized, and an official CVE (Common Vulnerabilities and Exposures) entry (CVE-2025-30066) was issued to track this incident. This post breaks down what happened in plain terms, how it was detected and reproduced by EDAMAME, and what DevSecOps teams can learn to better secure their CI/CD pipelines.

Summary of the Incident

The tj-actions/changed-files action is used in over 23,000 repositories, helping automate the detection of changed files in GitHub workflows. On March 14, 2025, attackers gained unauthorized access to this action’s repository and retroactively updated many of its version tags to point to a single malicious commit. In simpler terms, even older versions of the action were suddenly redirected to run attacker-provided code. The compromised code included a hidden, base64-encoded payload (a piece of data encoded to conceal its true purpose). When a CI pipeline ran the changed-files action, this payload decoded into code that fetched a Python script from an external URL on gist.githubusercontent.com. The script searched the runner’s memory for CI/CD secrets (passwords, keys, tokens set as environment secrets) and then printed those secrets to the job’s log output.

Because many projects using this action are open source, their workflow logs are often public. That meant any secrets dumped by the malicious script were visible to anyone reading the logs. The impact was severe: credentials like cloud keys and GitHub tokens were exposed. Notably, investigators found no evidence of the stolen secrets being sent to an attacker’s server – instead, the threat was that the secrets were simply sitting in the logs, available for opportunistic access.

How was it caught? StepSecurity’s Harden-Runner – a security agent for GitHub Actions runners – detected the attack almost immediately. Harden-Runner monitors the behavior of workflows (for example, watching network calls, file changes, and processes) and noticed an unexpected external endpoint in the action’s network traffic. In this case, the workflow attempted to contact gist.githubusercontent.com (where the malicious script was hosted), which was not an endpoint normally seen in this context. This anomaly triggered an alert. In essence, Harden-Runner acted like a security camera on the CI runner: as soon as the action tried to do something out of the ordinary (accessing an external URL that wasn’t on an allowed list), it was flagged. This early detection allowed the team to investigate before more damage was done.
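
For teams that want this kind of egress monitoring in their own pipelines, the integration pattern looks roughly like the workflow step below. This is a minimal sketch based on Harden-Runner’s documented usage; the allowed endpoint list is purely illustrative and should be tailored to what your jobs actually need.

  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
        - name: Harden the runner (restrict outbound traffic)
          uses: step-security/harden-runner@v2   # in practice, pin to a full commit SHA
          with:
            # "audit" only logs outbound calls; "block" enforces the allow-list
            egress-policy: block
            # Any endpoint not listed here (e.g. gist.githubusercontent.com) is flagged or blocked
            allowed-endpoints: >
              github.com:443
              api.github.com:443
              objects.githubusercontent.com:443

        # ...the rest of the job's steps run under this egress policy...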

The broader security community quickly collaborated to diagnose the issue. Contributors on GitHub dug into the action’s repository and pinpointed the exact malicious commit that had been inserted. They discovered that all the version tags from v1.0.0 up to the latest pointed to a single commit – the attackers’ payload. This retroactive tag manipulation is what made the attack a supply chain compromise: anyone pulling in any version of the action would unknowingly run the attacker’s code. The malicious code, once decoded from base64, clearly showed the logic to fetch the external memdump.py script and dump secrets, confirming the suspected behavior.

GitHub responded by removing the compromised action from the platform to prevent further usage. Within a day, the original repository was temporarily disabled and then restored to a safe state (with the malicious tags removed) once the issue was addressed. An official advisory was published, and the CVE was registered to ensure the incident is tracked and that users can look up details about this security issue.

Implications for CI/CD Security

This attack has stark implications for the security of CI/CD pipelines and illustrates why supply chain attacks on development tools are so dangerous:

  • Widespread Impact of a Single Compromise: Because tj-actions/changed-files was widely used, a single point of compromise cascaded into thousands of downstream projects inheriting malicious code. CI/CD pipelines often reuse shared actions and dependencies; if any one of them is compromised, it can impact every project that consumes it. In this case, any repository running the poisoned action automatically began dumping its secrets. It’s a vivid reminder that our CI/CD systems are only as secure as the weakest link in our supply chain.

  • Why was this attack possible? The attack was possible due to a combination of factors: implicit trust in third-party actions, broad access within the CI environment, and insufficient vetting. A bot account used by the action’s maintainers had a personal access token (PAT) that was compromised – once the attacker had push access, they could introduce malicious code. Many projects reference GitHub Actions by tag (e.g. uses: tj-actions/changed-files@v35), assuming that tag points to safe code. The attacker abused this by retagging versions to the malicious commit, so even pinned version numbers were not safe. Additionally, the GitHub Actions environment allowed the malicious code to perform highly sensitive operations: it was able to run sudo and read another process’s memory to grab secrets, which indicates a lack of least privilege in the default runner setup. Essentially, the CI runner was a fully privileged environment with internet access – ripe for abuse if a bad actor can execute code within it.

  • Risks of using public GitHub Actions: This incident showcases the supply chain risk of consuming community Actions. When you use a public GitHub Action in your workflow, you are running someone else’s code with your privileges. That code may have access to your repository, your cloud credentials, and other secrets. If the action is later tampered with (as happened here), your pipeline will blindly run the malicious updates. In this case, the malicious action exposed CI secrets (which could include cloud provider keys, signing keys, etc.), potentially leading to further compromise of infrastructure. Even if repository secrets are masked in logs, the attacker’s double-encoding trick defeated GitHub’s secret masking, demonstrating that relying on automatic masking alone is insufficient. Moreover, an action with malicious intent could do more than just leak secrets – it could alter artifacts, inject backdoors into built code, or modify deployment logic. Using third-party actions thus introduces a significant trust dependency. Unfortunately, the very design of CI/CD (fast, extensible, plugin-based automation) means such dependencies are common – which is why attackers are increasingly targeting them.

  • Challenges in preventing similar attacks: Preventing this kind of attack is hard. Traditional security controls (like code review or scanning) often fall short: the malicious change here was obfuscated (base64-encoded script) and introduced in a trusted project, so it might not raise red flags until it’s running. GitHub’s own mechanisms did eventually identify and remove the compromised action, but only after it had been active for a while. The ephemeral nature of CI runners and the speed of deployments mean that malicious code can run and disappear before anyone notices. Also, many organizations lack security monitoring on CI systems – unlike production servers, CI runners typically don’t have endpoint protection or monitoring by default. This “blind spot” has led to multiple software supply chain attacks in recent years. Another challenge is that commit or version pinning practices in CI are not always strict. As noted, if projects had pinned the action to a specific commit SHA, they would have been immune to the retroactive retagging. But pinning every action to a commit is often neglected for convenience, and even with pinning, one would need to update the pin to get security fixes – which is a management challenge. In summary, the dynamic and interconnected nature of CI/CD makes it difficult to completely prevent such supply chain attacks; instead, organizations need to assume compromise is possible and put detection and mitigation measures in place (defense-in-depth).

How the Threat was Detected

It’s worth understanding how this attack was caught, because it highlights a key practice for defending CI/CD pipelines. Monitoring network egress (outgoing network connections) – for example, a connection to an unusual domain or a process trying to export data – was the key mechanism that detected the threat.

GitHub-hosted runners are ephemeral (short-lived) and often not monitored by corporate security tools. In this incident, the connection to an unexpected domain was the red flag. Because the workflow shouldn’t be contacting gist.githubusercontent.com (an address not on the normal list of allowed domains for that action), the activity was immediately flagged. This gave security teams a heads-up to respond fast.

Reproducing and Detecting the Attack with EDAMAME Posture

The EDAMAME security team reproduced the attack in a controlled environment. EDAMAME specializes in verifying the security posture (overall security state) of devices and environments by using strict allow-lists (whitelists) of what is permitted. In our test, we ran a GitHub Actions workflow containing the malicious version of the changed-files action to observe its behavior. As expected, the action made an unauthorized network call to fetch a payload aimed at dumping secrets – mimicking the real attack scenario.

However, using EDAMAME’s Posture controls, our team had predefined whitelists that only allow approved activities and connections during a CI run. Think of this as a zero-trust approach: even within the CI job, only known-good processes and network destinations are permitted. When the malicious action tried to execute the memdump.py script and send out the secrets, EDAMAME’s policy detected it because it wasn’t on the approved list of behaviors.

The result of the reproduction test was that the attack was successfully detected. This demonstrates the power of an allow-list (whitelist) strategy: even if a bad actor slips malicious code into your pipeline, tools like EDAMAME can ensure that only pre-authorized connections (e.g. connecting to known build servers or internal APIs) are allowed, and nothing more.

The Importance of System Hardening and Deep Network Visibility

EDAMAME’s approach to CI/CD security focuses on both system hardening and deep network visibility. Instead of only instrumenting at the application layer, EDAMAME’s solution (EDAMAME Posture) performs an OS-level posture check and then engages a packet capture mechanism to monitor all network traffic from the CI runner. It works across platforms – supporting runners on Windows, macOS, and Linux alike – by using appropriate low-level packet capture drivers (Npcap on Windows, libpcap on Linux and macOS). Once integrated into a workflow (as a setup step), EDAMAME Posture will record every outbound network request the job makes. This includes not just HTTP/HTTPS calls, but any TCP/UDP traffic, DNS queries, etc. The captured traffic is then analyzed (using tools like Zeek for network analysis) to detect suspicious patterns. For instance, EDAMAME can catch if a pipeline step is trying to perform DNS tunneling or contacting an IP that’s not on a trusted list, because it has full packet-level visibility into the runner’s communications. In addition, EDAMAME’s agent does a “posture check” on the runner system at the beginning of each workflow run – ensuring the OS settings are hardened, required patches are in place, no unexpected services or open ports are present, and network segmentation is in place. This reduces the attack surface before the build runs.
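
To make the integration pattern concrete, here is a rough sketch of where such a setup step would sit in a job. The step names, action path, and inputs below are placeholders invented for illustration – they are not EDAMAME Posture’s actual interface – but they show the shape of the approach: check posture and start capturing before the build, then review the recorded traffic afterwards.

  steps:
    # Hypothetical setup step - action path and inputs are placeholders, not the real interface
    - name: Start posture check and packet capture (placeholder)
      uses: example-org/posture-setup-action@main
      with:
        posture-check: true     # hypothetical: verify OS hardening before the build runs
        capture-traffic: true   # hypothetical: record all outbound packets for later analysis

    - name: Build and test
      run: make test            # normal pipeline steps run while traffic is being captured

    # Hypothetical teardown step - compare captured traffic against the approved allow-list
    - name: Review captured network activity (placeholder)
      uses: example-org/posture-report-action@main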

In the context of the changed-files attack, EDAMAME’s packet-capture approach would similarly have caught the outgoing call to the GitHub Gist – even if the malware had used a non-HTTP protocol or some obfuscation, it would still appear in the packet log. By logging all outbound traffic, EDAMAME creates a complete audit trail of network activity, which is invaluable during incident response (you can retroactively see exactly what connections were made). The use of Zeek (a powerful network security monitor) means it can detect subtle exfiltration techniques too – for example, spotting if secret data was encoded in DNS queries or if large amounts of data are being sent to an unusual host.

EDAMAME’s method provides deep network visibility and multi-OS support, complementing other tools with a comprehensive system lockdown. It effectively brings a Zero Trust philosophy into the CI pipeline: assume every build step could be malicious and verify/monitor everything. By capturing packets and continuously verifying system posture, EDAMAME aims to catch what others might miss and limit what damage malicious code can do.

Recovery Steps

This incident, while serious, provides valuable lessons. If you were using the compromised action or simply want to bolster your defenses, consider the following recovery steps and best practices:

  1. Stop Using the Infected Component: If any of your workflows use tj-actions/changed-files, disable those workflow runs and replace that action immediately (now that the repository has been restored to a safe state, this is less urgent, but verify exactly which version you are pulling in). Always ensure you’re using a version of third-party actions that you trust. In general, prefer actions that are well-maintained or have undergone a security review.

  2. Pin Dependencies to Immutable Versions: One reason this attack was effective is that it exploited mutable version tags (like vX.Y.Z tags that were changed maliciously). A best practice to counter this is to pin your GitHub Actions to a specific commit SHA or a tagged release that you control (see the sketch after this list). By pinning to a commit hash, you ensure that even if a tag is moved or a new version is released, your workflow won’t automatically run untrusted code. It’s a bit like freezing a software dependency at a known-good version.

  3. Audit Your Workflows for Affected Actions: Search your codebase for any references to tj-actions/changed-files. This can be done via GitHub’s search or other code search tools. Make sure you identify every place it was used, including older branches or templates, and update or remove those references.

  4. Review CI/CD Logs for Leaked Secrets: If you did use the compromised action, check your recent GitHub Actions run logs for any suspicious output. Specifically, look for large base64 strings or any lines where secrets (like AWS_ACCESS_KEY= or other tokens) appear in plaintext. The malicious code would print secrets in a JSON-like format. Spotting these in your logs means those secrets were exposed. For public repositories, assume anything in the logs has been seen and copy-pasted by someone by now.

  5. Rotate Exposed Secrets Immediately: For any secret that showed up in the logs (or any credentials you suspect might have been compromised), rotate them. This means generate new keys, passwords, or tokens to replace the old ones. For example, if an AWS key leaked, go to your AWS console to disable that key and create a new one. It’s critical to do this quickly – once rotated, the old leaked credentials can no longer be used by attackers.

  6. Clean Up Compromised Build Logs: If possible, delete or make private any CI/CD logs that contained secrets. This won’t undo the leak (attackers could have them already), but it prevents casual access or search engines from indexing those secrets. Removing the evidence from public view is a basic hygiene step after rotating secrets.

  7. Implement Runtime Security for CI/CD: This attack underscores that we cannot blindly trust every action or dependency in our pipelines. Introduce runtime security tools such as Harden-Runner or EDAMAME Posture into your workflows to add runtime monitoring. These solutions will alert you to unusual behavior (and even block it) as it happens. This is especially important for detecting things like a build step suddenly making a network call to an unfamiliar server or a process trying to access sensitive files.

  8. Use Allow-Lists for Network and Devices: Strengthen your pipeline with a zero-trust mentality. Using EDAMAME or similar posture management tools can ensure that even if a secret is stolen, it cannot be misused from an unrecognized machine or location. For instance, you can restrict certain deployment keys so they only work from your company’s network or from your official CI runners. By whitelisting approved devices and endpoints, you add a safety net: an attacker grabbing a secret string alone won’t be enough to compromise your systems further.

  9. Stay Informed and Apply Updates: Keep an eye on security feeds (like GitHub Security Advisories or CVE bulletins) for any issues in the plugins/actions your team uses. If a critical issue hits something in your toolchain, treat it with the same urgency as you would a production incident. In this case, the community and GitHub moved fast – make sure your team is plugged in so you can respond quickly too.

  10. Practice Incident Response for CI/CD: This incident is a reminder to have a response plan specifically for your build pipeline. Know ahead of time how to disable a compromised component, how to revoke credentials, and how to communicate to your team or users about the issue. The faster you can do this, the less window an attacker has to exploit leaked information.
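
As referenced in step 2 above, here is what pinning to an immutable version looks like in a workflow. This is a before/after sketch; the commit SHA shown is a placeholder, not a real commit from the tj-actions/changed-files repository.

  # Before: a mutable tag - if the tag is moved (as happened in this attack),
  # the workflow silently starts running different code
  - uses: tj-actions/changed-files@v35

  # After: an immutable full commit SHA (placeholder value) - the workflow only ever
  # runs the exact code you reviewed; keep the human-readable version as a comment
  - uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567  # v35 (placeholder SHA)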

By following the above steps, you can significantly reduce the risk from this kind of attack and improve your overall CI/CD security posture.

Conclusion and Key Takeaways

The tj-actions/changed-files compromise is a stark reminder that CI/CD pipelines are part of our software supply chain and are high-value targets for attackers. In this case, the attacker didn’t target source code or a production system directly – they went after a third-party tool in the build process, knowing it had wide usage. The impact was the exposure of secrets, which could have led to further breaches (like unauthorized access to cloud accounts or package registries) if not quickly addressed.

However, this incident also highlights how the right security measures can turn a potentially catastrophic attack into a managed risk. Real-time monitoring tools like Harden-Runner can give an early warning, and strict security posture enforcement by solutions like EDAMAME can contain the blast radius if something goes wrong. For DevSecOps teams, the event offers a clear blueprint:

  • Invest in CI/CD security as you would in production security. This includes adding monitoring, using least-privilege principles for credentials in pipelines, and keeping dependencies updated and pinned.

  • Adopt a defense-in-depth approach. No single tool or practice is foolproof, but multiple layers (code review of third-party actions, runtime monitoring, network allow-listing, device trust, etc.) create a robust shield where if one layer fails, others will catch the issue.

  • Make security user-friendly for developers. The solutions that worked here did so with minimal friction – a lightweight agent in a workflow, an automatic whitelist mechanism – which means developers can keep their velocity. When security is baked in seamlessly, it’s more likely to be embraced and effective.

Finally, the practical takeaway is one of vigilance and preparedness. Supply chain attacks can be sneaky and sophisticated, but with community collaboration and modern DevSecOps tools at our disposal, we can detect and mitigate these threats before they wreak havoc. By securing our CI/CD pipelines now, we protect not just our code, but the credentials and infrastructure that code relies on. In an age where automation is king, let’s ensure our automation is trustworthy and resilient against attacks.

Stay safe, keep your pipelines hardened, and remember that every link in the supply chain matters – from code to build to deployment. With lessons learned from incidents like this, we can all strengthen our defenses and continue to deliver software swiftly and securely.
