Defending Against Software Supply Chain Attacks: A Cross-Ecosystem Guide
Publication date: 24 October 2025
This content has been written based on the state of the ecosystems as of September 2025. Available mitigations will evolve in both native and third-party tooling, which could impact implementation guidance details over time, while the principles will remain relevant.
The objective is to provide a cross-ecosystem implementation guide for supply chain security, offering concrete, detailed configuration and command examples for npm, Python, Java, and Go. Recommendations are organized by maturity level, with a risk comparison framework and control effectiveness analysis supporting the prioritization. This post aims to fill the gap between major frameworks, which provide governing principles but few concrete implementation steps, and incident-related publications, which offer isolated, atomic actions without exhaustive coverage.
Target Audience
- Security teams managing polyglot environments (currently forced to read four or more separate guides)
- Tech leads needing prioritization guidance (frameworks like SLSA are often too abstract)
- Developers wanting practical commands (incident reports rarely include implementation details)
This guide is based on practical experience implementing security controls in development environments. Technical recommendations have been validated against incident reports, official documentation, and research from multiple sources credited in the references section. AI tools were used for clarity and consistency.
1. Understanding the Threat Landscape
Software supply chain attacks have evolved from theoretical risks to active, widespread threats across all major programming ecosystems. While recent high-profile incidents like the Shai-Hulud campaign have focused on npm, every major package ecosystem faces similar structural vulnerabilities.
These attacks succeed because they exploit implicit trust: Developers trust package registries, trust maintainers they've never met, and trust that the code installed today is the same as yesterday. Attackers have learned to weaponize this trust.
Common Attack Patterns
Account Compromise: Attackers steal maintainer credentials through phishing, credential stuffing, or session hijacking, then publish malicious versions of legitimate packages. The Shai-Hulud campaign used this approach in npm to inject payloads that harvested environment variables, SSH keys, and AWS credentials, and self-propagated to connected systems. Similar attacks have occurred in PyPI (e.g., the "requests" typosquat attacks) and other ecosystems.
Dependency Confusion: Attackers publish malicious packages with names matching internal private packages, exploiting misconfigured package managers that check public registries before or alongside private ones. This attack pattern is ecosystem-agnostic and has affected npm, PyPI, and Maven users.
Typosquatting: Malicious packages with names similar to popular libraries (e.g., requests vs. request, python-dateutil vs. python-dateutill) trick developers into installing compromised code. This simple but effective technique works across all ecosystems.
Lifecycle Script Exploitation: Many package managers execute scripts automatically during installation. Malicious actors embed commands in these hooks to execute arbitrary code with the installing user's privileges. This is particularly dangerous in npm and PyPI, less so in Go (which doesn't support install scripts by design).
2. Supply Chain Attack Surface by Ecosystem
Understanding the relative security posture of the major ecosystems helps prioritize defenses and understand where risk is concentrated.
Ecosystem Risk Comparison Matrix
Risk Rating Rationale (detailed)
>> Install Scripts Risk
npm - High Risk:
- postinstall, preinstall, and install scripts execute automatically with full user privileges
- No sandboxing by default
- Widely exploited in recent attacks (Shai-Hulud, multiple other campaigns)
- Scripts can access environment variables, filesystem, network without restriction
pip - Medium-High Risk:
- setup.py executes arbitrary Python code during installation
- Setuptools/distutils hooks (install, develop) run with user privileges
- Newer PEP 517/518 builds still execute build backends
- Slightly lower than npm because Python ecosystem has fewer packages with complex build requirements proportionally
Maven/Gradle - Medium Risk:
- Maven plugins (especially exec-maven-plugin, maven-antrun-plugin) can execute arbitrary code
- Gradle build scripts are executable Groovy/Kotlin code
- Risk mitigated somewhat by enterprise controls and more mature ecosystem practices
- Less frequently targeted than npm/PyPI (smaller attacker ROI)
Go - No Risk:
- Go modules fundamentally don't support install-time scripts
- All code execution happens at compile time, which is more visible
- This design decision eliminates an entire attack class
>> Default Script Execution
npm, pip, Maven - Enabled by default:
- All three execute lifecycle/build scripts automatically
- Users must explicitly opt out (--ignore-scripts, --no-build-isolation, etc.)
- This "default unsafe" posture contributes significantly to attack success rates (thus making the ecosystem more attractive for malicious actors)
Go - N/A: No such concept exists
>> Built-in Sandboxing
npm, pip, Maven - None:
- Install scripts run with the same privileges as the user running the package manager
- No filesystem isolation, network isolation, or capability restrictions
- Third-party tools (containers, bubblewrap, etc.) required for sandboxing
Go - N/A: No install scripts to sandbox
>> Registry 2FA Support
npm - Yes, mandatory for popular packages:
- 2FA required for packages with >1M weekly downloads
- Hardware key support (FIDO2/WebAuthn)
- Granular token scopes
PyPI - Yes:
- 2FA available for all accounts
- API tokens with scopes
- Increasingly encouraged by ecosystem
Maven - Limited:
- Maven Central uses Sonatype OSSRH
- Basic authentication, 2FA available but not enforced
- Enterprise repositories (Artifactory, Nexus) have better support
Go - Via hosting provider:
- No central registry like npm/PyPI
- Authentication depends on where modules are hosted (GitHub, GitLab, Bitbucket, etc.)
- Most Go modules use GitHub which supports 2FA, hardware keys
- Private modules use git authentication mechanisms
>> Package Signing
Maven - Yes:
- Central Repository supports GPG-signed artifacts
- Organizations can enforce signature verification
- Not widely adopted but infrastructure exists
Go - Integrity checks only (not signing):
- go.sum provides cryptographic checksums that ensure build reproducibility and detect tampering during transit or in cache
- Critical limitation: Checksums verify what was downloaded matches what was published, but provide no guarantee that who published it was authorized or legitimate
- sum.golang.org module proxy acts as a transparency log, making it harder to serve different content to different users
- Does not prevent supply chain attacks via compromised maintainer accounts (the primary threat vector)
npm - Limited:
- npm 9+ supports package signatures via npm audit signatures
- Not widely adopted; many packages unsigned
- Infrastructure exists but ecosystem adoption is low
pip - No:
- No native package signing support
- Some distributions (Debian, etc.) sign their own Python packages
- PyPI itself doesn't enforce or facilitate signing
>> Namespace Control
npm, pip - Weak:
- First-come-first-served package names
- No verification of ownership/legitimacy
- Enables typosquatting and namespace confusion attacks
- Once you own a name, you own it (barring trademark disputes)
Maven - Strong:
- Group IDs follow reverse domain notation (com.company.project)
- Repository operators can verify domain ownership
- Significantly harder to squat legitimate namespace
- Enterprise repositories can enforce additional controls
Go - Strong:
- Imports use full URLs including domain (github.com/user/repo)
- Domain ownership verification implicit in module system
- Typosquatting requires domain registration
- More resistant to namespace confusion
>> Transitive Dependency Visibility
Go - Excellent:
- go mod graph shows complete dependency graph
- go.sum includes checksums for all transitive dependencies
- Excellent tooling for analyzing dependency trees
Maven - Good:
- mvn dependency:tree provides clear transitive visibility
- Well-established ecosystem tools (dependency-check, etc.)
- pom.xml structure makes relationships explicit
npm, pip - Medium:
- npm ls and pip show work, but output can be overwhelming
- Transitive dependencies less visible in manifests
- Requires additional tooling for effective analysis
3. Defense Principles
These apply across ecosystems; implementation details and commands live in section 7. Controls are ordered by implementation effort and maturity.
Quick Wins (Essential - Level 1)
Low effort, immediate risk reduction. Ship these everywhere.
3.1. Disable Lifecycle Scripts by Default
Threat Addressed: Arbitrary code execution during package installation (e.g., preinstall/postinstall, setup.py/PEP 517 build hooks).
Impact: Stops install-time execution paths that recent dependency confusion and typosquat campaigns have relied on. Effectiveness depends on ecosystem support and your ability to avoid source builds.
Benefits:
- Prevents installer-triggered execution in CI by default.
- Forces explicit review of packages that need native builds or code generation.
- Reduces the number of execution paths in your supply chain.
Caveats:
- Some packages legitimately require build steps (native modules, headless browser bindings, database clients).
- You’ll need to maintain a minimal allowlist.
- Expect initial breakage in CI; cache warmup and per-repo tuning are required.
Ecosystem Applicability:
- npm/pnpm/yarn: In pnpm v10+, dependency lifecycle scripts are blocked by default; allow specific builds via pnpm.onlyBuiltDependencies or pnpm approve-builds. Use npm ci --ignore-scripts for npm; allowlist packages like esbuild, sqlite3, puppeteer. Example blocked output: npm WARN lifecycle <pkg>@<ver>~postinstall: ignored because --ignore-scripts is in effect.
- pip: No disable flag; use wheels-only installs (--only-binary=:all:) and a quarantined builder for sdists (see the sketch after this list).
- Maven: No install scripts; arbitrary code executes via plugins. Enforce plugin allowlist/enforcer rules; block generic exec plugins in CI.
- Go: Not applicable; install hooks don't exist. Related hardening: GOSUMDB, GOPRIVATE, strict version pinning.
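The npm and pip items above can be combined into a minimal CI install step. This is a sketch, assuming requirements.txt was generated with hashes and that packages genuinely needing a build step are handled in a separate, sandboxed job:
# npm: immutable install with lifecycle scripts disabled
npm ci --ignore-scripts

# pip: refuse source distributions so no setup.py/build backend runs on this host
# (assumes requirements.txt was generated with pip-compile --generate-hashes)
pip install --only-binary=:all: --require-hashes -r requirements.txt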
3.2. Lock Dependencies and Commit Lockfiles
Threat Addressed: Unexpected version upgrades introducing malicious code, including in transitive dependencies.
Impact: Prevents silent dependency changes across the entire tree and enables reproducible builds.
Benefits:
- Locks the full dependency graph, not just direct dependencies.
- Makes dependency changes visible and reviewable in pull requests (including transitives).
- Enables reproducible installs across machines and CI runners, assuming stable registries and toolchains.
- Provides a point-in-time snapshot of your supply chain.
Caveats:
- Lockfiles must be committed to version control; uncommitted lockfiles provide no protection.
- Effectiveness depends on using the correct install commands (e.g., npm ci in CI, not npm install).
- Requires discipline to review lockfile changes in pull requests, even for "just lockfile updates".
- Lockfiles can be large; use diff tooling to keep reviews tractable.
- Regenerating lockfiles without review creates a false sense of security.
Why manifests alone are not enough
Pinning in the manifest (e.g., "express": "4.18.2") does not constrain transitive ranges declared by your direct dependencies. The lockfile records exact resolved versions for the entire tree.
Example
Your package.json: express: "4.18.2" (pinned direct)
express's package.json: body-parser: "^1.20.0" (loose range)
Without a lockfile: next install may pull body-parser@1.20.5 (malicious).
With a lockfile: always installs body-parser@1.20.1 (the tested version).
Defense in depth: manifest + lockfile
- Manifest pinning (e.g., package.json, pyproject.toml, pom.xml): explicit control and review of direct dependencies.
- Lockfile / resolved file (e.g., package-lock.json, pnpm-lock.yaml, yarn.lock, poetry.lock, Pipfile.lock, gradle.lockfile, go.sum): exact versions for the entire tree.
- Together: coverage across all dependency levels with reproducible installs.
Maximizing effectiveness
- Commit lockfiles: package-lock.json, pnpm-lock.yaml, yarn.lock, poetry.lock, Pipfile.lock, go.sum, gradle.lockfile.
- Use lockfile-enforcing commands in CI:
  - npm: npm ci (fails if package-lock.json is missing or out of sync).
  - pnpm: pnpm install --frozen-lockfile.
  - yarn: yarn install --frozen-lockfile.
  - yarn (Berry): yarn install --immutable.
  - pip (with pip-tools): generate via pip-compile --generate-hashes, install with pip install --require-hashes -r requirements.txt.
  - Maven: pin exact versions in dependencyManagement; do not allow ranges.
  - Gradle: enable dependency locking and check in gradle.lockfile.
  - Go: go build -mod=readonly and go mod verify.
- Fail fast on drift: CI should fail if the resolver wants to update the lockfile or manifest outside of approved update jobs (see the sketch after this list).
- Review lockfile diffs: treat dependency changes (including transitive bumps) as code. Use lock-aware diff tools.
- Regenerate deliberately: run dedicated update jobs that refresh the lockfile, run tests, and produce a reviewed PR.
- Combine with scanning: locked versions still need vulnerability and provenance checks.
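A minimal drift check for an npm project, assuming a Git-based CI runner; adapt the file names to your ecosystem's manifests and lockfiles:
# install strictly from the lockfile; npm ci fails if it is missing or out of sync
npm ci

# fail the job if any earlier step modified the manifest or lockfile
git diff --exit-code package.json package-lock.json \
  || { echo "Dependency files changed outside an approved update job"; exit 1; }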
Ecosystem Applicability
- npm/pnpm/yarn: Commit the lockfile; enforce ci / --frozen-lockfile in CI; block lockfile writes outside update workflows; review lockfile diffs.
- pip: Python has no native lockfile in pip; generate requirements.txt with pip-compile --generate-hashes, then pip install --require-hashes -r requirements.txt. Avoid unpinned or sdist-only installs in CI.
- Maven/Gradle: Maven has no lockfile; pin all versions (including transitives) via dependencyManagement and enforce with enforcer rules. Gradle supports gradle.lockfile; enable dependency locking and check it in.
- Go: go.mod pins module versions; go.sum verifies content. Use -mod=readonly and go mod verify in CI; fail on attempted writes.
3.3. Implement version cooldown periods
Threat Addressed: Zero-day exploitation of newly published malicious packages.
Impact: Creates a detection window so clearly bad releases are flagged or removed before they reach your builds.
Benefits:
- Leverages ecosystem signal (researchers, scanners, registry defenses) before consuming a fresh release.
- Gives maintainers and registries time to deprecate or yank compromised versions.
- Simple to enforce via package manager settings, update bots, or repository managers.
How It Works
Package managers, update bots, or repository proxies delay installs or update PRs for versions younger than a configured age (for example, 48–168 hours). During that window, automated scanners and human review surface issues.
Example
In a recent campaign, malicious packages were publicly flagged and removed within a few days. A 72-hour cooldown would have kept those versions out of builds during that period.
Caveats
- Also delays legitimate fixes; maintain an exception path for critical patches.
- Adversaries can age a clean version, then ship a malicious patch later; cooldowns are not sufficient alone.
- Requires tuning per environment to balance agility and risk.
- First-publication attacks that predate your window can still land if exceptions are overused.
- Cooldowns only help if all installs flow through the same policy path; ad hoc local installs can bypass the gate.
Recommended Settings
- Development dependencies: 5–7 days by default; exceptions for internal packages.
- Production dependencies: 2–5 days depending on ecosystem risk.
- Critical security fixes: 0–1 day via explicit, audited exceptions.
Ecosystem Applicability
- npm/pnpm/yarn:
  - pnpm v10+: gate installs by package age in pnpm-workspace.yaml:
    minimumReleaseAge: 4320          # minutes; 4320 = 3 days
    minimumReleaseAgeExclude:
      - '@your-org/*'                # immediate for internal packages
- pip: no native age gate in pip; enforce via update bot PR delay (stabilityDays-style) and/or a curated internal index or proxy that blocks packages younger than the policy window (see the sketch after this list).
- Maven/Gradle: enforce via repository managers that quarantine or delay newly published components until checks pass; pair with update-bot PR delay. Note: such features (e.g., Artifactory Curation) are typically paid add-ons.
- Go: no native age gate in go; use update-bot PR delay and consume via a proxy or private mirror that applies age/policy checks.
- All ecosystems: use Renovate stabilityDays or Dependabot cooldown to delay update PRs until a release is old enough. These do not block ad-hoc installs outside bot PRs.
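For ecosystems without a native age gate, a small pre-install check against the registry's metadata API can approximate a cooldown. The sketch below queries the PyPI JSON API; the script name and threshold are illustrative, it assumes curl, jq, and GNU date are available, and a curated proxy remains the more robust option:
#!/usr/bin/env bash
# check_pypi_age.sh <package> <version> [min_age_days] - hypothetical helper
set -euo pipefail
PKG="$1"; VER="$2"; MIN_DAYS="${3:-3}"

# upload time of the first distribution file for this release (fractional seconds stripped)
UPLOADED=$(curl -fsSL "https://pypi.org/pypi/${PKG}/${VER}/json" \
  | jq -r '.urls[0].upload_time_iso_8601' | sed 's/\.[0-9]*Z$/Z/')

AGE_DAYS=$(( ( $(date -u +%s) - $(date -ud "$UPLOADED" +%s) ) / 86400 ))
if [ "$AGE_DAYS" -lt "$MIN_DAYS" ]; then
  echo "Refusing ${PKG}==${VER}: only ${AGE_DAYS} day(s) old (< ${MIN_DAYS})" >&2
  exit 1
fi
echo "${PKG}==${VER} is ${AGE_DAYS} day(s) old - OK"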
3.4. Enable Multi-Factor Authentication and Token Hygiene
Threat Addressed: Account compromise leading to malicious package publication, data leakage or compromise of deployment environments.
Impact: Reduces the probability and impact of account takeover.
Benefits:
- Greatly reduces the probability and impact of account takeover for publishers and consumers.
- Limits blast radius if credentials are stolen.
- Creates an auditable trail for sensitive actions (publishes, permission changes).
- Aligns with baseline controls for any system that can influence production.
Caveats
- MFA protects identities, not code quality; combine with integrity, provenance, and review.
- Poor token hygiene (long-lived, over-scoped, stored in CI) cancels out MFA gains.
- Coverage varies by registry; enforce at your IdP and proxies when native features are uneven.
Critical Actions (Core)
- Enforce MFA for all accounts with publish or admin rights. Prefer hardware-backed methods (FIDO2/passkeys).
- Replace static credentials with short-lived tokens (OIDC/workload identity) in CI/CD; avoid storing secrets.
- Use least privilege scopes on all tokens; separate read, publish, and admin permissions.
- Rotate credentials on a schedule and on events (role changes, contractor offboarding, token disclosure).
- Do not expose publish-capable tokens in install-only jobs; separate build and publish workflows.
- Monitor for anomalous logins and unexpected publishes; alert on new devices, geo anomalies, and scope escalations.
Corporate Context
- Enforce MFA across SCM, CI/CD, artifact repositories, cloud consoles, and SSO.
- Require hardware-backed MFA for privileged roles (release engineering, build admins, SREs).
- Use OIDC/workload identity from CI to clouds and registries to mint short-lived credentials on demand.
- Centralize secret management; eliminate long-lived tokens in repos and pipeline variables.
- Audit all service accounts for least privilege and expiry; remove dormant automation identities.
- Apply identity analytics (new device, impossible travel, atypical IP ASN) to accounts that can publish.
Context
Multiple recent incidents, including campaigns that relied on compromised maintainer credentials, show that strong authentication and short-lived, scoped tokens would have prevented or materially delayed attacker actions.
Ecosystem Applicability
- npm: Require 2FA for publish/settings; use fine-grained access tokens only where necessary. Example: interactive publish with provenance via npm publish --provenance and an org policy of "2FA required for publish" (see the sketch after this list).
- pip (PyPI): 2FA enforced for upload/management; prefer Trusted Publishing (OIDC from CI) to avoid storing API tokens. Fallback: scoped API token with aggressive rotation.
- Maven (Central): Publish via Central Publisher Portal with scoped tokens; enforce MFA at your SSO for the portal account. Keep deploy creds confined to the release step; rotate regularly.
- Gradle: Same principles as Maven; separate read vs publish repos, inject publish creds only for release tasks, and require MFA on the backing portal account.
- Go: No central publisher; enforce MFA at the VCS host and proxy. Prefer OIDC from CI to your proxy; avoid persistent basic auth (use short timeouts or exchange).
- All: Separate build/install from publish; use OIDC or short-lived tokens; never store publish creds in repository or global CI context.
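A sketch of the account-side hygiene for an individual npm maintainer; prompts and availability depend on your npm version and registry, and <token-id> is a placeholder:
# require 2FA for both login and publish on your own account
npm profile enable-2fa auth-and-writes

# prefer read-only tokens for install-only automation
npm token create --read-only

# review and revoke tokens you no longer recognize
npm token list
npm token revoke <token-id>

# publish with provenance from a supported OIDC-enabled CI provider
npm publish --provenance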
Defense in Depth (Intermediate - Level 2)
These controls require more setup but provide significant additional protection layers.
3.5. Isolate and Sandbox Package Installation
Threat Addressed: Malicious install scripts accessing sensitive resources (SSH keys, cloud credentials, internal network).
Impact: Limits damage from successful installation of malicious packages.
Benefits:
- Prevents access to developer/CI secrets by default.
- Restricts or removes network egress during install to prevent exfiltration.
- Contains damage even if a malicious package is installed.
- Creates additional detection opportunities through monitoring.
Implementation Approaches (ordered by CI/CD compatibility):
- Container isolation (recommended for CI): Run installs in containers with a read-only root filesystem (see the sketch below).
  - Works on all CI/CD platforms.
  - No special kernel capabilities required.
  - Portable across environments.
- Linux namespaces (local development only): Use bubblewrap or similar.
  - Requires user namespace support.
  - Requires privileged Docker if running inside containers.
  - Good for local development on Linux workstations.
- Dedicated VMs: Separate installation environment from development environment.
  - Strong isolation at higher operational cost.
  - Clean environment each run; destroy after use.
- Network restrictions: Block network egress during the installation phase.
  - Can be combined with other approaches.
  - Requires proper cache setup for offline installs.
Real-World Impact: Recent campaigns abusing install scripts attempted file grabs from paths like ~/.ssh/ and cloud credential locations. A clean $HOME, read-only mounts, and egress controls would have blocked those access paths entirely.
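A minimal sketch of such a containerized install with a throwaway HOME and no secrets mounted; the image tag and paths are examples, and the project directory still needs to be writable for the package manager's output:
# run the install in a disposable container: clean HOME, dropped capabilities,
# only the project directory mounted; nothing from ~/.ssh or ~/.aws is visible
docker run --rm \
  --cap-drop=ALL --security-opt no-new-privileges \
  -e HOME=/tmp/home --tmpfs /tmp \
  -v "$PWD":/work -w /work \
  node:20 \
  npm ci --ignore-scripts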
Caveats:
- Some legitimate packages need network access for downloads (though this is increasingly discouraged).
- Adds CI complexity and requires cache strategy for offline/egress-blocked installs.
- Sandboxing must be enforced on every install path (CI and developer machines) to be effective.
- May need per-package exceptions for legitimate build requirements.
Ecosystem Applicability:
- All ecosystems: Approach is package-manager-agnostic
- Effectiveness varies based on how package manager is invoked
Detection Opportunities:
- Alert on install-phase reads of credential paths (~/.ssh/*, cloud provider default locations) and unexpected outbound connections.
- Log and review install-time process trees in CI; sandbox escapes or shell spawns during install/build are high-signal.
3.6. Minimize and Audit Dependencies
Threat Addressed: Bloated dependency trees increase attack surface.
Impact: Reduces the number of potential compromise points in your supply chain.
Benefits:
- Reduces transitive dependency risk where attacks often hide.
- Easier to audit and monitor the dependencies you actually keep.
- Simplifies dependency update and vulnerability response.
Caveats:
- Requires ongoing discipline; dependencies creep back without review.
- Evaluation is subjective and time-consuming for large trees.
- Removing a dependency that's actually used will break your build.
Actions:
- Remove unused dependencies: Use static analysis to find packages imported in manifest but never referenced in code. Delete them.
- Evaluate necessity: Before adding a dependency, check if 10–50 lines of code would solve the problem. Trivial packages (left-pad, is-number) add risk for minimal benefit.
- Analyze transitive dependencies: Run npm ls <pkg>, pip show <pkg>, mvn dependency:tree, or go mod why <module> before adding a new direct dependency. If it pulls in 30+ transitives, question the trade-off.
- Prefer well-maintained packages: Check last commit date, contributor count, and issue/PR response time. Dormant packages pose higher risk.
- Regular audits: Schedule dependency reviews quarterly. Look for packages added "temporarily" that became permanent, or transitives that ballooned since initial install.
Red Flags:
- Packages with 50+ transitive dependencies for simple functionality (logger, config reader).
- Unmaintained packages: no commits in 2+ years, unaddressed security issues, stale PRs.
- Brand-new packages (<6 months old) with low adoption (<500 weekly downloads). Wait for ecosystem vetting unless you're willing to audit the source yourself.
- Packages recently transferred to new maintainers with no public record of the transfer or maintainer identity.
- Obfuscated code, missing repository link, or mismatch between package name and actual functionality.
Dependency Counts in Context:
- Average npm package: 79 transitive dependencies (source: Socket.dev research, March 2023).
- Your 10 direct dependencies might resolve to 600+ total packages after transitives. Each one is a potential entry point.
Tooling by Ecosystem:
- npm: npm ls, depcheck, npm-check-updates, npm audit
- pip: pipdeptree, pip-audit, pip list --outdated
- Maven: mvn dependency:tree, mvn dependency:analyze, versions-maven-plugin
- Gradle: ./gradlew dependencies, ./gradlew dependencyInsight, dependency-analysis plugin
- Go: go mod graph, go mod why, govulncheck
Ecosystem Applicability:
- npm/pnpm/yarn: High transitive counts are common; prioritize removal of direct dependencies you don't use. Run npm ls <pkg> to understand what pulls in large trees.
- pip: Transitive counts are lower but still material; use pipdeptree to visualize. Many Python packages have C extensions that complicate evaluation.
- Maven/Gradle: Enterprise Java projects often have 100+ dependencies. Focus on removing unused direct dependencies and duplicates (mvn dependency:analyze, Gradle's dependency-analysis plugin).
- Go: go mod tidy removes unused dependencies automatically. Use go mod why <module> to understand why something is included; if the answer is "none of your packages import it," remove it from go.mod.
When NOT to Remove:
- Security or bug fix transitives explicitly added to override a vulnerable version.
- Internal tooling or linters in devDependencies/dev scopes that aren't shipped to production but provide value in CI.
- Peer dependencies required by other packages even if you don't import them directly.
Audit Workflow (a command-level sketch follows this list):
- List all dependencies (direct and transitive).
- For each direct dependency: confirm it's still used (grep imports/requires, check build output).
- For high-transitive-count packages: evaluate if the functionality justifies the tree size.
- Remove candidates, run tests, verify build.
- Document decisions so future reviewers understand why a package was kept or removed.
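As an example of what steps 1-4 can look like for an npm project; depcheck is a third-party tool run via npx, and the package names below are placeholders:
# 1. list everything, direct and transitive
npm ls --all > dependency-tree.txt

# 2. find direct dependencies that nothing imports
npx depcheck

# 3. see what pulls in a suspiciously large subtree
npm ls some-heavy-package

# 4. remove a candidate, then verify the build still passes
npm uninstall some-heavy-package && npm test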
3.7. Pre-Installation Vetting and Analysis
Threat Addressed: Malicious or compromised packages entering your dependency tree.
Impact: Catches suspicious packages before installation; complements post-install scanning.
Benefits:
- Proactive defense that blocks bad packages rather than detecting them after the fact.
- Automated scanning reduces manual review burden for low-risk additions.
- Builds organizational knowledge of trusted vs. risky packages over time.
- Creates enforcement points for policy (approval workflows, risk classification).
Caveats:
- Vetting takes time; expect 5–30 minutes per new dependency depending on thoroughness. This blocks development velocity.
- Automated scanners have false positive rates, and you'll need a triage process.
- Brand-new packages have no history, download counts, or community vetting. You either wait (cooldown) or audit source code yourself.
- Urgent dependency needs (production incident, security patch) create pressure to bypass the process; have an exception path with post-approval audit.
- Vetting direct dependencies is tractable; vetting all transitives is not. Focus effort on direct dependencies and high-risk transitives flagged by tools.
Vetting Checklist (prioritize based on risk):
High-priority checks (do these for every new direct dependency):
- Package metadata: Maintainer count, download count (>10k/week preferred), package age (>6 months preferred), version history (avoid 0.x or frequent breaking changes).
- Lifecycle script inspection: Check for install/build scripts. If present, review what they do. Commands that touch the network, the filesystem outside the build dir, or spawn shells are red flags (see the sketch after this checklist).
- Automated scanning: Run a vulnerability scanner (npm audit, pip-audit, govulncheck, grype) and a supply chain risk tool (Socket, Snyk, Scorecard). Treat critical/high findings as blockers.
Medium-priority checks (do these for packages with scripts, low download counts, or recent publication):
- Source repository: Verify the repo exists and matches the package. Check commit frequency, contributor count (>3 active contributors preferred), issue response time.
- Dependency analysis: Check transitive count (npm ls <pkg>, go mod graph | grep <module>). If it pulls 50+ transitives for simple functionality, question the trade-off.
- SBOM generation: Generate a Software Bill of Materials for audit trail and compliance. Automate in CI rather than per-package.
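A quick way to inspect a package's lifecycle scripts and contents without installing it; the package and version below are purely illustrative:
# show declared lifecycle scripts straight from the registry metadata
npm view left-pad@1.3.0 scripts

# list what the published tarball contains, without unpacking or running anything
npm pack left-pad@1.3.0 --dry-run

# for a closer look, download the tarball and read it offline
npm pack left-pad@1.3.0 && tar -tzf left-pad-1.3.0.tgz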
Automated Scanning Tools (not exhaustive; based on personal usage, favoring free tools)
- OpenSSF Scorecard: Repository health (maintenance, security practices). Good signal for established projects; less useful for new packages. Scores <5 are concerning; >7 is good.
- Socket.dev: Detects supply chain risk patterns (install scripts, network access, suspicious maintainer changes). Free tier available; useful for npm/PyPI.
- Native package manager audits: npm audit, pip-audit, govulncheck catch known CVEs. Run these first; they're fast and have low false positive rates.
- Snyk (free tier): Multi-ecosystem vulnerability and license scanning. Good CI integration; free tier sufficient for small teams.
Note: No tool catches everything. Recent supply chain attacks bypassed automated scanners for days. Layer tools (use 2–3) and combine with human review for high-risk additions.
Policy Recommendations:
Approval workflows:
- Low-risk packages (>100k weekly downloads, >2 years old, Scorecard >6): Automated approval after scan passes.
- Medium-risk packages (10k–100k downloads, 6mo–2yr old, or has install scripts): Peer review + security team notification.
- High-risk packages (<10k downloads, <6mo old, new maintainers, or failed scans): Security team approval required.
In corporate environments:
- Tie dependency additions to change management for production deployments (CAB review for new direct dependencies in release branches).
- Maintain an approved package list for common needs (logging, HTTP clients, CLI parsers) to avoid repeated vetting.
- Scan all packages in CI/CD on every build, not just at addition time. New CVEs are disclosed daily.
- Re-evaluate existing dependencies quarterly: check for new CVEs, maintainer changes, or drift from original use case.
Urgent exceptions:
- Document exception process in advance: Who can approve? What's the SLA for post-approval audit?
- Log all exceptions and flag for follow-up review within 48 hours.
- Consider temporary vendoring or forking for critical fixes while vetting completes.
Ecosystem Applicability:
- npm: Best tool support (Socket, Scorecard, Snyk all have strong npm coverage). Focus vetting on packages with install scripts or <10k weekly downloads.
- pip: Weaker tool ecosystem; Socket and Snyk cover PyPI but with higher false positives. Prioritize checking for setup.py and wheels-only availability.
- Maven/Gradle: Scorecard works for source repos; OWASP Dependency-Check for CVEs.
- Go: Scorecard for source repos, govulncheck for CVEs. Lower baseline risk due to no install scripts; focus on new modules and those using CGO.
Advanced Hardening (Mature - Level 3)
These controls require significant infrastructure investment and organizational maturity but provide comprehensive protection.
3.8. Continuous Monitoring and Anomaly Detection
Threat Addressed: Malicious behavior during or after installation that bypasses preventive controls.
Impact: Detects compromise in progress; provides forensic evidence and enables response.
Benefits:
- Catches attacks that evade vetting and scanning (novel malware, zero-days, insider threats).
- Provides forensic timeline for incident response (what was accessed, when, by what process).
- Enables rapid rollback before lateral movement or data exfiltration completes.
Caveats:
- High implementation cost; requires dedicated tooling, expertise, and ongoing tuning.
- Requires dedicated security engineering/monitoring capacity for ongoing tuning and triage.
- Install-time monitoring is hard: packages install in seconds, often in ephemeral CI containers.
What to Monitor (prioritized):
- Dependency changes: Alert when lockfiles change outside approved update workflows.
- Package manager audit failures: Fail builds on high/critical CVEs.
- SBOM drift: Compare build SBOM to production baseline; alert on unexpected packages.
- Network connections during install: Alert on outbound connections to non-registry destinations.
- File access monitoring: Alert on install-time reads of credential paths (~/.ssh/, ~/.aws/, ~/.kube/).
- Process spawning during install: Detect unexpected shell spawns or network utilities. High false positive rate; requires extensive tuning.
Monitoring Tools (non-exhaustive list, favoring free solutions):
- Lockfile/dependency monitoring: Git hooks, CI diff checks, GitHub dependency review action
- SBOM drift detection: Generate with Syft; compare with diff or jq in CI pipelines, or with Diffoscope (see the sketch below).
- Runtime monitoring: Falco (container-focused), osquery (cross-platform), Wazuh (SIEM with FIM)
 Note: Falco requires Kubernetes or privileged containers. osquery is easier to deploy but needs custom query development. Start with lockfile monitoring and audit checks first.
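A minimal SBOM drift check, assuming Syft and jq are installed and a baseline SBOM from a known-good build is committed; sbom-baseline.json is a name used here for illustration:
# generate a CycloneDX SBOM for the current build
syft dir:. -o cyclonedx-json > sbom-current.json

# reduce both SBOMs to sorted name@version lists and compare
jq -r '.components[] | "\(.name)@\(.version)"' sbom-baseline.json | sort > baseline.txt
jq -r '.components[] | "\(.name)@\(.version)"' sbom-current.json  | sort > current.txt
diff -u baseline.txt current.txt || { echo "SBOM drift detected"; exit 1; }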
Response Procedures:
Immediate (hours):
- Isolate affected systems, stop builds, revoke potentially exposed credentials.
- Identify scope: which packages, versions, systems.
 Containment (day one):
- Rollback to last known-good lockfile.
- Rotate credentials on affected systems.
- Block malicious packages in registries/proxies.
 Recovery (days):
- Forensic analysis: collect logs, preserve evidence
- Root cause: how did it enter, what controls failed
- Improve controls based on lessons learned
Ecosystem-Specific Indicators:
- npm: Monitor postinstall execution, especially shell spawns. Alert on new lifecycle scripts in patch updates.
- pip: Monitor setup.py execution. Alert on packages suddenly requiring compilation.
- Maven/Gradle: Alert on new plugin additions, especially exec plugins.
- Go: Focus on go.mod/go.sum changes and govulncheck failures. Alert on unusual replace directives.
3.9. Build Security-Aware Development Culture
Threat Addressed: Human factors enabling supply chain compromises (bypass of controls, social engineering, process shortcuts under pressure).
Impact: Creates organizational resilience; makes security controls stick rather than being circumvented.
Benefits:
- Developers catch suspicious dependencies during review instead of after incident.
- Security controls are maintained under pressure (deadlines, outages) instead of bypassed.
- Faster incident response when everyone understands supply chain risks.
- Reduces reliance on security team as single point of failure.
Practical Actions:
- Make dependency changes visible and reviewable:
  - Add CODEOWNERS for lockfiles and manifests (package.json, go.mod, pom.xml); a sample follows this list.
  - Require approval from the security team or designated reviewers.
  - Use dependency review bots (Dependabot, Renovate) that show changes in PRs.
  - Balance friction: auto-approve patches, fast-track security fixes.
- Train on specific risks:
  - Show real incidents: walk through recent campaigns with actual indicators.
  - Hands-on analysis: give developers malicious packages to examine (sandboxed).
  - Red flags: how to spot typosquats, suspicious scripts, maintainer changes.
  - Tool usage: run scans, interpret results, check package metadata.
- Embed checks in existing workflows:
  - Security checklist in PR templates.
  - Supply chain risks in design reviews.
  - Flag dependency changes in standups when there is cross-team impact.
  - Dependency metrics dashboard (CVEs, stale packages, recent additions).
- Blameless incident response:
  - Focus on "how did the process fail," not "who approved it."
  - Document lessons learned and update controls.
  - Recognize early reporters, even for false positives.
- Clear policies with escape hatches:
  - Define when approval is required and by whom.
  - Document the exception process with a realistic SLA.
  - Specify CVE response (patch vs. remove, who decides).
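A minimal sketch of a CODEOWNERS setup routing dependency file changes to a reviewing team; the file path and the @your-org/security-reviewers handle are placeholders:
# write a CODEOWNERS file that requires security review for dependency changes
mkdir -p .github
cat > .github/CODEOWNERS <<'EOF'
# dependency manifests and lockfiles require security review
package.json       @your-org/security-reviewers
package-lock.json  @your-org/security-reviewers
pnpm-lock.yaml     @your-org/security-reviewers
go.mod             @your-org/security-reviewers
go.sum             @your-org/security-reviewers
pom.xml            @your-org/security-reviewers
requirements*.txt  @your-org/security-reviewers
EOF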
 
Caveats:
- Cultural change is slow; expect months before behaviors shift.
- Requires consistent leadership support; one executive bypassing process undermines everything.
- Over-policing or punitive reactions quickly undermine trust and participation.
- If exception process takes days, people bypass it entirely.
Ecosystem-Specific Focus:
- npm: Train on typosquats and install script review (high velocity, many micro-packages)
- pip: Focus on setup.py review (source builds common)
- Maven/Gradle: Emphasize namespace verification and plugin review (enterprise pace)
- Go: Cover replace directives and pseudo-version risks (lower baseline risk)
4. Prioritization Framework
Not all organizations can implement all controls immediately. Most should start with Level 1, consider Level 2 when basics are solid, and only pursue Level 3+ if you have dedicated security engineering capacity.
Level 1 - Essential
Ship these controls first. They provide the best risk reduction for effort invested.
npm/Node.js:
- Disable install scripts: ignore-scripts=true in .npmrc
- Lock dependencies: commit package-lock.json, use npm ci in CI
- Version cooldown: 3-7 days (pnpm minimumReleaseAge or Renovate stabilityDays)
- Enable 2FA on npm accounts with publish access
Python/pip:
- Lock dependencies: use pip-tools or Poetry with exact versions
- Prefer wheels: --only-binary=:all: to avoid source builds
- Version cooldown: 3-7 days (Renovate stabilityDays or registry proxy)
- Enable 2FA on PyPI accounts with publish access
Java/Maven-Gradle:
- Lock versions: exact versions in pom.xml (no ranges), enable Gradle lockfiles
- Verify checksums: Gradle verification-metadata.xml (see the sketch after this list)
- Block risky plugins: ban exec-maven-plugin unless explicitly allowed
- Version cooldown: 3-7 days (Renovate stabilityDays or registry proxy with time-based filtering)
- Sign artifacts: enable GPG signing for anything you publish
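A sketch of the Gradle-side commands for the checksum and locking items above; the verification-metadata step is native Gradle, while dependency locking must first be enabled in your build script (assumed here):
# generate (or update) gradle/verification-metadata.xml with sha256 checksums
./gradlew --write-verification-metadata sha256 help

# with dependency locking enabled in the build script, write the lock state
./gradlew dependencies --write-locks

# commit both files so CI verifies against them
git add gradle/verification-metadata.xml gradle.lockfile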
Go:
- Use tagged versions: avoid pseudo-versions in go.mod
- Verify integrity: go mod verify in CI with -mod=readonly
- Configure private modules: set GOPRIVATE, GONOSUMDB, GONOPROXY (see the sketch after this list)
- Version cooldown: 3-7 days (Renovate stabilityDays or proxy with age filtering)
- Enable 2FA on VCS hosting accounts
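A minimal CI sketch for the Go items above; the module path pattern is a placeholder for your organization, and govulncheck is assumed to be installed:
# fail if the build would need to modify go.mod or go.sum
export GOFLAGS="-mod=readonly"

# internal modules: skip the public proxy and checksum database
export GOPRIVATE="github.com/your-org/*"

# verify that cached module content matches go.sum, then build and scan
go mod verify
go build ./...
govulncheck ./...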
Expected Outcomes:
- Reproducible builds that don't silently change
- Install-time code execution blocked or controlled
- Detection window before malicious packages reach your builds
- Account takeover harder (while not impossible)
- Your posture is already greatly improved
Level 2 - Intermediate
Add these controls if you have security engineering capacity and Level 1 controls are working consistently.
All Ecosystems:
- Sandboxed installs: run package installation in containers with restricted filesystem/network
- Automated scanning: npm audit, pip-audit, govulncheck, or OWASP Dependency-Check in CI
- Code review for dependencies: add CODEOWNERS for lockfiles and manifests
- Regular dependency audits: monthly review of outdated or unmaintained packages
- Pre-installation vetting: formal review process for new direct dependencies
Scanning tools (these work well, but others are worth considering):
- Free: OpenSSF Scorecard (repo health), native package manager audits (CVEs)
- Freemium: Socket.dev (supply chain risk), Snyk (multi-ecosystem CVE scanning)
Expected Outcomes:
- Isolation limits damage if malicious package runs
- Visibility into dependency health and drift
- Formal review process for risky changes
- Proactive identification of supply chain risks
Level 3 - Advanced
Only pursue this level if you have full-time security engineering capacity.
All Ecosystems:
- Private registry with policy enforcement (Artifactory, Nexus, Athens)
- Runtime monitoring for credential access and network activity (Falco, osquery, or EDR)
- Network segmentation: build environments isolated from production
- SBOM generation: automated in CI, tracked over time
- Formal dependency review: security team approval for new packages based on risk (ensure it can scale, or delegate within development teams)
- Developer training: quarterly supply chain security updates
Expected Outcomes:
- Defense in depth: multiple controls must fail for attack to succeed
- Detection even if preventive controls bypassed
- Audit trail and compliance evidence
- Security-aware culture that maintains controls
Level 4 - Mature (Ongoing)
If you reach this level, you are part of a small minority of organizations.
All Ecosystems:
- Threat intelligence integration: consume feeds on malicious packages and campaigns
- Advanced analysis: reachability analysis, call graph analysis to determine exploitability
- Pipeline security testing: regular penetration tests of build and release infrastructure
- Contribution back: responsible disclosure of vulnerabilities found, tooling improvements
Metrics to Track:
- Time from CVE disclosure to patch deployed
- Percentage of builds with current SBOM
- Dependency freshness (median age of dependencies)
- Exception rate (how often process is bypassed)
5. Case Study: Shai-Hulud Campaign Analysis
Understanding real-world attacks helps validate controls and prioritize defenses.
Attack Timeline
September 2025: The Shai-Hulud campaign targeted the npm ecosystem through a coordinated supply chain attack:
- Initial Compromise:
  - Attackers compromised npm maintainer accounts through phishing campaigns spoofing npm authentication flows.
  - Multiple maintainer accounts affected across different packages.
- Malicious Publication:
  - Compromised versions published to hundreds of packages (reports indicate 180-500+ affected packages).
  - Packages had existing user bases and download counts (not new typosquats).
  - Version numbers followed normal semantic versioning to avoid suspicion.
- Payload Execution:
  - Malicious postinstall scripts executed during npm install.
  - Scripts harvested credentials from infected systems:
    - .npmrc files (npm tokens)
    - Environment variables and configuration files
    - GitHub Personal Access Tokens (PATs)
    - API keys for AWS, Google Cloud Platform (GCP), and Microsoft Azure
    - (There is no report of ~/.ssh/ harvesting in this particular case, but it is definitely an eligible target.)
  - Credentials exfiltrated to attacker-controlled endpoints (webhook.site).
  - Stolen credentials uploaded to public GitHub repositories named "Shai-Hulud" under victim accounts.
  - Self-propagation: Using stolen npm tokens, the malware authenticated as compromised developers, injected code into other packages they maintained, and published new compromised versions automatically.
- Detection and Response:
  - Attack discovered September 14, 2025, through community reporting and automated scanning.
  - GitHub immediately removed 500+ compromised packages from the npm registry.
  - The npm registry blocked upload of new packages containing known malware indicators.
  - Detection window: Malicious packages were live on the registry for a period between initial compromise and discovery, estimated at several days based on the scale of propagation.
Control Effectiveness Analysis
| Control | Effectiveness | Impact on Shai-Hulud | 
|---|---|---|
| Disabled lifecycle scripts | Would have blocked | Malicious postinstall scripts would not execute; payload delivery prevented | 
| Version cooldown (3-7 days) | Would likely have blocked | Several-day detection window suggests cooldown would have kept malicious versions out of most environments | 
| Locked dependencies | Partial mitigation | Prevented automatic upgrades; manual updates to compromised versions still possible | 
| 2FA on maintainer accounts | Would have prevented | Phishing-resistant MFA would have blocked credential theft via spoofed login pages | 
| Sandboxed installs | Would have mitigated | Container isolation with restricted filesystem access would have prevented credential harvesting from default paths | 
| Network egress filtering | Would have detected | Outbound connections to webhook.site during install would trigger alerts | 
| Dependency minimization | Reduces exposure | Fewer dependencies = lower probability of using affected packages | 
| Pre-installation vetting | Time-dependent | Automated scanners eventually detected malicious code; effectiveness depends on vetting before or after compromise window | 
| Continuous monitoring | Enabled detection | Community reporting and automated scanning detected the campaign within days | 
Key Lessons
- Layered defenses matter: No single control is perfect. Shai-Hulud would have been stopped by multiple controls proposed in this guide at different stages. 
- Prevention beats detection: Disabling lifecycle scripts (prevention) is more effective than monitoring script behavior during execution (detection). 
- Detection windows exist: Malicious packages were live for several days before removal. Cooldown periods can leverage this community detection window. 
- Account security is critical: Phishing campaigns targeting maintainer credentials remain a primary attack vector. Phishing-resistant MFA (hardware keys, passkeys) is essential. 
- Isolation limits damage: Even when malicious code executes, sandboxing and network controls can prevent credential theft and exfiltration. 
- Self-propagation is dangerous: The worm's ability to automatically spread using stolen tokens represents an escalation in supply chain attack sophistication. 
Similar Attacks in Other Ecosystems
PyPI (Python):
- Late 2023: Multiple typosquat packages targeting data science libraries
- Attack vector: Typosquatting + malicious setup.py
- Payload: Credential theft, similar to Shai-Hulud
- Prevention: Same controls apply (especially pre-vetting, isolation)
Maven Central (Java):
- 2021: Dependency confusion attacks
- Attack vector: Internal package names published publicly
- Payload: Code execution during build
- Prevention: Strong namespace control (group ID verification) + private registries
RubyGems:
- 2019: Strong_password gem compromise
- Attack vector: Maintainer account compromise
- Payload: Cryptocurrency mining, credential theft
- Prevention: 2FA, lifecycle script controls
Cross-Ecosystem Pattern: Account compromise → malicious package publication → automated installation → credential theft. The controls in this guide address each stage of this attack chain across all ecosystems.
6. Conclusion
Software supply chain security requires a comprehensive, multi-layered approach that spans prevention, detection, and response. While no single control provides complete protection, implementing even a subset of these recommendations significantly raises the bar for attackers.
What Matters
These four controls provide 80% of the value:
1. Lock your dependencies (lockfiles committed, enforced in CI)
2. Disable install scripts (or strictly allowlist them)
3. Version cooldown (3-7 days via Renovate, pnpm, or registry proxy)
4. Require 2FA with hardware keys (on all accounts that can publish)
If you have security engineering capacity, consider these controls next:
5. Pre-installation vetting for new packages
6. Sandboxed builds in containers
7. Dependency monitoring and SBOM tracking
Consider runtime monitoring and private registries if:
- You have dedicated, full-time security engineering capacity for this topic
- You handle regulated data or face active targeting
- You've already implemented everything above
Ecosystems' Readiness
npm/Node.js has the worst baseline security (install scripts everywhere, massive transitive trees), but also the best tooling (pnpm v10+ blocks scripts by default, and scanning tool coverage is strong).
Python/pip has poor native controls (can't disable setup.py execution) but lower ecosystem velocity. Use wheels-only installs and containerize builds. Don't expect pip to protect you.
Java/Maven-Gradle has mature enterprise tooling but weak central registry controls. Private registries (Artifactory/Nexus) are table stakes for organizations that care about Java supply chain security.
Go has the best design (no install scripts, strong checksums, domain-based namespacing) but checksum verification only proves integrity, not authorization. A compromised maintainer account can still publish malicious code that go.sum will faithfully lock.
How to start
Mindset:
- Update deliberately, not automatically
- Trust nothing by default; vet before install
- Scan for vulnerabilities AND behavioral anomalies
Now:
- Commit your lockfiles if you haven't already
- Add lockfile enforcement to CI (npm ci, pip install --require-hashes, go build -mod=readonly)
- Configure version cooldown (Renovate stabilityDays, or pnpm minimumReleaseAge)
- Audit who has publish access to your packages; enable 2FA
Next:
- Disable install scripts (npm --ignore-scripts, pip --only-binary)
- Add CODEOWNERS for dependency files
- Run a dependency audit; remove unused packages
- Add pre-installation scanning (Socket, Scorecard, or native package manager audits)
- Implement sandboxed builds for untrusted dependencies
Don't:
- Try to implement everything at once
- Add runtime monitoring before fixing basics
- Expect tools to solve cultural problems
Looking Forward
No framework, scanner, or process will catch every supply chain attack. Attackers are creative, patient, and increasingly sophisticated. The goal isn't perfect security but to be harder to compromise, and to limit damage when (not if) something gets through.
This guide gives you the tools. Using them consistently—especially under deadline pressure—is the hard part. Regular review and adaptation of your supply chain security posture is essential.
7. Implementation Guide by Ecosystem
This section provides concrete, actionable commands and configurations for each ecosystem. Recommendations are organized in the same order as Section 3 for easy cross-reference.
Important note:
- CI/CD environment: The implementation examples provided are for GitHub. The GitHub Actions workflows shown use standard CLI commands (npm ci, npm audit, npx socket-cli, etc.). They can be translated directly to any CI/CD system by running equivalent shell steps or container jobs. The key concepts (immutable installs, lockfile enforcement, and trusted identity tokens) apply universally.
- Corporate Context: In enterprise environments, enforce registry whitelisting, mirror internal packages through an artifact repository (e.g., Nexus, Artifactory), and integrate these workflows into centralized CI/CD templates. Trusted publishing via OIDC can be extended to corporate identity providers supporting workload federation.
7.1. Node.js / npm Ecosystem
Implementation Guidance
Quick Wins (Level 1)
Disable Lifecycle Scripts
npm configuration:
# One-time installations
npm ci --ignore-scripts
npm install --ignore-scripts
# Permanent configuration in .npmrc
echo "ignore-scripts=true" >> .npmrc
# Configuration check
npm config get ignore-scripts
WARNING: --ignore-scripts will break packages requiring native compilation, like node-sass, sharp, sqlite3, node-gyp, fsevents, canvas, bcrypt, etc.
How to deal with those:
- Use pnpm v10+ with onlyBuiltDependenciesallowlist
- Build native modules in separate sandboxed step
- Use pre-built binaries where available
pnpm (recommended):
pnpm v10+ blocks lifecycle scripts by default and requires explicit approval for packages with build steps. This protection can be relaxed by config and may need per-package allows for native modules.
For pnpm v10+
// package.json - allow specific packages to run scripts
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "puppeteer", "sqlite3"]
  }
}
Interactive approval workflow (pnpm v10+):
pnpm install
pnpm approve-builds           # approve interactively
pnpm approve-builds esbuild   # or per-package
pnpm ignored-builds           # review what's been blocked
Use pnpm approve-builds to curate the allowlist and pnpm ignored-builds to review what's blocked. .pnpmfile.cjs still works in v10; disable it with --ignore-pnpmfile if you need a fully inert install.
Verification (pnpm v10+):
# List packages with lifecycle scripts in your tree
pnpm list --json --depth Infinity | jq -r '
  .. | objects | select(has("scripts")) | .name + "@" + .version
'
# Verify install works with blocks applied in CI
pnpm install --ignore-scripts --frozen-lockfile
For pnpm v9 and earlier (scripts run by default, manual blocking needed):
# .npmrc
# Block all scripts by default
ignore-scripts=true
Then use .pnpmfile.cjs for an allowlist (when some packages need scripts):
// .pnpmfile.cjs
function readPackage(pkg) {
  const allowedPackages = ['esbuild', 'puppeteer', 'node-sass'];
  
  if (!allowedPackages.includes(pkg.name)) {
    // Remove scripts for non-allowed packages
    pkg.scripts = {};
  }
  
  return pkg;
}
module.exports = {
  hooks: {
    readPackage
  }
};
Important: If your project requires the pre-v10 behavior (all packages allowed to run build scripts), you can restore script execution with:
{
  "pnpm": {
    "neverBuiltDependencies": []
  }
}
⚠️ Not recommended - this disables the security protection.
Verification:
# Check which packages have lifecycle scripts
npm explore <package-name> -- cat package.json | jq '.scripts'
# Audit all dependencies for scripts
# Very large trees can be hard on jq (the --depth flag can help)
npm ls --json --depth=3 | jq '.. | .scripts? | select(. != null)'
Lock Dependencies
Implementation:
# Use npm ci to enforce lockfile
npm ci
# Generate initial lockfile
npm install --package-lock-only
    
#!! If you generate a lockfile without installing (command above), any preinstall script enforcing a cooldown won't be executed. The generated lockfile could later be blocked in CI by cooldown enforcement. The cooldown script provided later can be adjusted to check lockfiles generated with npm install --package-lock-only to avoid this issue.
In package.json, use exact versions:
{
  "dependencies": {
    "express": "4.18.2",          // ✓ Exact version
    "lodash": "4.17.21"           // ✓ Exact version
  },
  "devDependencies": {
    "typescript": "5.3.3"         // ✓ Exact version
  }
}
// Avoid these:
{
  "dependencies": {
    "express": "^4.18.0",         // ✗ Allows 4.x.x updates
    "lodash": "~4.17.21",         // ✗ Allows 4.17.x updates
    "axios": "*",                 // ✗ Allows any version
    "react": "latest"             // ✗ Always uses newest
  }
}
CI/CD enforcement:
# GitHub Actions
name: Verify Dependencies
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Verify lockfile integrity
        run: npm ci
      - name: Check for floating versions (exact JSON)
        run: |
          node -e '
            const fs = require("fs");
            const pj = JSON.parse(fs.readFileSync("package.json","utf8"));
            const bad = [];
            for (const key of ["dependencies","devDependencies","optionalDependencies","peerDependencies"]) {
              if (!pj[key]) continue;
              for (const [name, ver] of Object.entries(pj[key])) {
                if (/^[~^*]|latest$/i.test(ver.trim())) bad.push(`${key}:${name}@${ver}`);
              }
            }
            if (bad.length) {
              console.error("Floating versions detected:\\n" + bad.join("\\n"));
              process.exit(1);
            }
          '
      - name: Ensure lockfile is committed
        run: |
          if ! git ls-files --error-unmatch package-lock.json > /dev/null 2>&1; then
            echo "::error::package-lock.json is not committed to git"
            echo "Lockfiles must be committed to version control for reproducible builds"
            exit 1
          fi
Note: the floating-version check above parses package.json with Node; a simpler grep-based check would be shorter but can false-positive on comments and strings.
Important difference from Yarn:
- npm ci: Enforces lockfile by default (no flag needed)
- yarn: Requires explicit yarn install --frozen-lockfile flag
- pnpm: Uses --frozen-lockfile flag or auto-detects CI environment
Version Cooldown
pnpm native support:
# .npmrc
# Delay installation of packages published less than 7 days ago
minimum-release-age=10080  # 7 days in minutes (7 * 24 * 60)Renovate bot configuration:
// renovate.json (JSONC allowed)
{
  "extends": ["config:base"],
  "dependencyDashboard": true,
  "packageRules": [
    {
      "description": "npm default cooldown and PR gating",
      "matchManagers": ["npm"],
      "stabilityDays": 7,
      // Choose ONE of these:
      // "prCreation": "status-success", // wait for CI green, then open PR
      "prCreation": "not-pending"       // open PR once checks aren't pending
    },
    {
      "description": "npm security updates bypass cooldown",
      "matchManagers": ["npm"],
      "matchUpdateTypes": ["security"],
      "stabilityDays": 0
    }
  ]
}
Note: Removing "matchManagers" on either rule makes it global across ecosystems. That's fine if you want uniform behavior everywhere.
Guardrail: If you enable both minimumReleaseAge and stabilityDays globally, set minimumReleaseAge shorter than stabilityDays (for example, 3d + 4d). If minimumReleaseAge >= stabilityDays on fast-moving deps, Renovate can starve updates entirely.
Understanding Renovate timing options:
Important: Use stabilityDays instead of minimumReleaseAge to avoid blocking frequently-updated packages:
- stabilityDays: Waits X days after a version is published before creating a PR for that specific version
  - Works with frequently-updated packages (will propose an older stable version)
  - If a package updates daily, Renovate will propose the version from 7 days ago
  - Provides a cooldown window while allowing updates to flow through
- minimumReleaseAge: Filters out ALL versions newer than X days
  - !! Can deadlock frequently-updated packages (if a package updates daily with a 7-day setting, NO version is ever old enough)
  - !! More aggressive but can prevent any updates for active projects
Recommended approach:
// For most projects - use stabilityDays only
{
  "stabilityDays": 7
}
// For high-security environments - use both with careful tuning
{
  "minimumReleaseAge": "3 days",   // Shorter window
  "stabilityDays": 4               // Total: 7 days but avoids deadlock for weekly updates
}Example scenario:
Package "fastify" releases daily:
- Jan 1: v4.0.0 published
- Jan 2: v4.0.1 published
- Jan 3: v4.0.2 published
...
- Jan 8: v4.0.7 published (latest)
With stabilityDays=7:
- Renovate will propose v4.0.1 (published 7 days ago)
- Updates flow through with appropriate delay
With minimumReleaseAge=7:
- No version is old enough (latest is always <7 days)
- No PRs ever created - deadlock!
Dependabot configuration:
Dependabot's cooldown feature is generally available as of July 2025 and provides native support for delaying PR creation after new dependency releases are published.
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    
    # Native cooldown support - delays PR creation for new releases
    cooldown:
      default-days: 7              # Base cooldown for all updates
      semver-major-days: 30        # Cooldown for major version updates
      semver-minor-days: 7         # Cooldown for minor version updates
      semver-patch-days: 3         # Cooldown for patch version updates
      
      # Apply to specific dependencies
      include:
        - "*"                      # All dependencies
      exclude:
        - "@types/*"               # Type definitions excluded
        - "eslint*"                # Dev tooling excludedKey features:
- Cooldown delays PR creation for a specified number of days after a dependency version is published
- Configure different cooldown periods based on semantic version update type (major/minor/patch)
- Use include to apply cooldown to specific dependencies or "*" for all dependencies
- Cooldown days must be between 1 and 90; include/exclude lists support up to 150 items each
- Available for all supported package ecosystems (npm, pip, bundler, composer, cargo, etc.)
- The exclude list always takes precedence over the include list
Critical constraints:
- If semver-major-days, semver-minor-days, or semver-patch-days are not defined, the default-days setting applies
- Cooldown affects both version updates AND security updates to manifest files, unless target-branch is used for non-default branch updates
- If a dependency appears in both the include and exclude lists, it is excluded from cooldown
Cooldown behavior:
Dependabot checks for updates according to the defined schedule. When a new version is published, Dependabot skips creating a PR until the cooldown period has elapsed. After the cooldown ends, Dependabot creates a PR for the latest version that is past its cooldown period.
Package "fastify" releases daily:
- Jan 1: v4.0.0 published
- Jan 2: v4.0.1 published
- Jan 3: v4.0.2 published
...
- Jan 8: v4.0.7 published (latest)
- Jan 15: v4.0.14 published (latest)
With cooldown default-days=7:
- Jan 8: No PR created (v4.0.7 is only 1 day old)
- Jan 15: PR created for v4.0.7 (now 7+ days old)
- Jan 22: PR created for v4.0.14 (now 7+ days old)
Result: Updates are delayed but eventually propose the latest version
Basic configuration (all dependencies):
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7
      # Omitting 'include' applies to all dependencies
Advanced configuration (targeted dependencies with exclusions):
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    
    # Conservative cooldown for most dependencies
    cooldown:
      default-days: 14
      semver-major-days: 30      # Extra caution for breaking changes
      semver-minor-days: 7       # Moderate for feature additions
      semver-patch-days: 3       # Quick turnaround for bug fixes
      include:
        - "*"                    # Apply to all dependencies
      exclude:
        - "@types/*"             # Type definitions can update immediately
        - "eslint*"              # Dev tooling can update immediately
        - "*-security-*"         # Security packages bypass cooldownHandling security-critical packages:
Since cooldown affects both version and security updates, use the exclude list to exempt packages that should update immediately:
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7
      include:
        - "*"
      exclude:
        - "@security/*"          # All security-scoped packagesComparison: Renovate stabilityDays vs Dependabot cooldown:
Both Renovate's stabilityDays and Dependabot's cooldown provide protection against supply-chain attacks by delaying dependency updates, but they differ in their approach:
Renovate stabilityDays: Proposes a version from X days ago. When a package releases daily, Renovate will propose the version published 7 days ago, even if newer versions exist. This ensures you're always installing a version that has aged, but means you may skip intermediate versions.
Dependabot cooldown: Delays proposing the latest version until it is X days old. When a package releases daily, Dependabot waits 7 days after each version's publication before creating a PR. This means you eventually get the latest version, but with a built-in waiting period.
Implications for frequently-updated packages:
Neither approach creates deadlocks with frequently-updated packages. Renovate continuously proposes older versions, while Dependabot queues updates with appropriate delays. Both strategies provide effective protection while maintaining dependency freshness.
Recommendation: For supply-chain security, both tools are effective. Choose based on your preference:
- Renovate's stabilityDays if you prefer always installing battle-tested older versions
- Dependabot's cooldown if you prefer eventual latest versions with a security delay
For high-security environments, consider combining either tool with additional validation checks (such as npm preinstall hooks) to enforce cooldown periods even when lock files are generated manually.
Integration with pnpm minimumReleaseAge:
If using both Dependabot cooldown and pnpm's minimumReleaseAge, ensure they're coordinated:
# .github/dependabot.yml
cooldown:
  default-days: 7
# .npmrc
minimum-release-age=10080  # 7 days in minutes
# Both will enforce 7-day minimum, with pnpm providing
# an additional safeguard during manual installations
Custom solution using npm hooks:
// preinstall-cooldown.js
console.log('Preinstall hook for cooldown triggered');
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');
// Configuration
const COOLDOWN_DAYS = 7;
const CACHE_FILE = path.join(process.cwd(), '.npm-cooldown-cache.json');
const CACHE_MAX_AGE_MS = 24 * 60 * 60 * 1000; // 24 hours
const CACHE_MAX_ENTRIES = 1000;
// In-memory cache for current run
const cache = new Map();
// Load persistent cache from disk
function loadPersistentCache() {
  try {
    if (fs.existsSync(CACHE_FILE)) {
      const cacheData = JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
      const now = Date.now();
      
      // Filter out expired entries and load valid ones
      let loadedCount = 0;
      for (const [key, entry] of Object.entries(cacheData)) {
        if (now - entry.timestamp < CACHE_MAX_AGE_MS) {
          cache.set(key, entry.result);
          loadedCount++;
        }
      }
      
      if (loadedCount > 0) {
        console.log(`Loaded ${loadedCount} cached results from ${CACHE_FILE}`);
      }
    }
  } catch (error) {
    console.warn(`Failed to load cache from ${CACHE_FILE}: ${error.message}`);
    // Continue without cache - not a fatal error
  }
}
// Save cache to disk
function savePersistentCache() {
  try {
    const cacheData = {};
    const now = Date.now();
    let savedCount = 0;
    
    // Convert Map to object format with timestamps
    // Limit to most recent CACHE_MAX_ENTRIES to prevent unbounded growth
    const entries = Array.from(cache.entries())
      .slice(-CACHE_MAX_ENTRIES);
    
    for (const [key, result] of entries) {
      cacheData[key] = {
        result: result,
        timestamp: now
      };
      savedCount++;
    }
    
    fs.writeFileSync(CACHE_FILE, JSON.stringify(cacheData, null, 2));
    console.log(`Saved ${savedCount} results to cache file`);
  } catch (error) {
    console.warn(`Failed to save cache to ${CACHE_FILE}: ${error.message}`);
    // Non-fatal - cache is optimization, not requirement
  }
}
function resolveVersion(name, spec, lock) {
  console.log(`Resolving version for ${name}@${spec}`);
  let resolvedVersion;
  if (lock) {
    const pkgKey = `node_modules/${name}`;
    resolvedVersion = lock.packages[pkgKey]?.version;
    console.log(`Lockfile version for ${name}: ${resolvedVersion || 'none'}`);
  }
  if (!resolvedVersion) {
    try {
      // Add timeout to prevent hanging
      resolvedVersion = execSync(`npm view ${name}@${spec || 'latest'} version`, {
        timeout: 10000  // 10 second timeout
      }).toString().trim();
      console.log(`Resolved version via npm view: ${resolvedVersion}`);
    } catch (error) {
      throw new Error(`Failed to resolve version for ${name}@${spec}: ${error.message}`);
    }
  }
  return resolvedVersion;
}
function checkPackageAge(name, version) {
  const cacheKey = `${name}@${version}`;
  
  // Check cache first
  if (cache.has(cacheKey)) {
    console.log(`Using cached result for ${cacheKey}`);
    return cache.get(cacheKey);
  }
  console.log(`Checking age for ${name}@${version}`);
  try {
    // Add timeout and retry logic
    let timeOutput;
    const maxRetries = 3;
    
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        timeOutput = execSync(`npm view ${name}@${version} time --json`, {
          timeout: 10000  // 10 second timeout
        }).toString().trim();
        break;
      } catch (error) {
        if (attempt === maxRetries) {
          console.warn(`WARNING: Failed to check age for ${name}@${version} after ${maxRetries} attempts`);
          console.warn(`Allowing installation - manual review recommended`);
          cache.set(cacheKey, true);
          return true;  // Fail open on network issues
        }
        console.log(`Retry ${attempt}/${maxRetries} for ${name}@${version}`);
        // Simple delay between retries
        execSync(`sleep 1`);
      }
    }
    let timeData;
    try {
      timeData = JSON.parse(timeOutput);
    } catch (parseError) {
      console.warn(`WARNING: Failed to parse time JSON for ${name}@${version}`);
      cache.set(cacheKey, true);
      return true;  // Fail open on parse errors
    }
    let publishDateStr = timeData[version];
    if (!publishDateStr) {
      publishDateStr = timeData.modified;
      console.log(`Version-specific date missing; using modified: ${publishDateStr}`);
    }
    if (!publishDateStr) {
      console.warn(`WARNING: No publish date available for ${name}@${version}`);
      cache.set(cacheKey, true);
      return true;  // Fail open if no date
    }
    const publishDate = new Date(publishDateStr);
    console.log(`Parsed publish date: ${publishDate}`);
    if (isNaN(publishDate.getTime())) {
      console.warn(`WARNING: Invalid publish date for ${name}@${version}: ${publishDateStr}`);
      cache.set(cacheKey, true);
      return true;  // Fail open on invalid date
    }
    const daysSincePublish = (Date.now() - publishDate.getTime()) / (1000 * 60 * 60 * 24);
    console.log(`Days since publish for ${name}@${version}: ${daysSincePublish.toFixed(1)}`);
    if (daysSincePublish < COOLDOWN_DAYS) {
      const result = false;
      cache.set(cacheKey, result);
      throw new Error(
        `Package ${name}@${version} was published ${daysSincePublish.toFixed(1)} days ago. ` +
        `Minimum age: ${COOLDOWN_DAYS} days. Rejecting installation.`
      );
    }
    console.log(`Package ${name}@${version} is ${daysSincePublish.toFixed(1)} days old. Approved.`);
    cache.set(cacheKey, true);
    return true;
  } catch (error) {
    // If we already cached the result, rethrow
    if (cache.has(cacheKey) && !cache.get(cacheKey)) {
      throw error;
    }
    // Otherwise log and fail open
    console.warn(`WARNING: Error checking ${name}@${version}: ${error.message}`);
    cache.set(cacheKey, true);
    return true;
  }
}
function main() {
  console.log('Reading package.json and lockfile');
  const pkgPath = path.join(process.cwd(), 'package.json');
  if (!fs.existsSync(pkgPath)) {
    console.error('package.json not found');
    process.exit(1);
  }
  const pkg = require(pkgPath);
  let lock = null;
  const lockPath = path.join(process.cwd(), 'package-lock.json');
  if (fs.existsSync(lockPath)) {
    lock = require(lockPath);
    console.log('package-lock.json found');
  } else {
    console.log('No package-lock.json found');
  }
  // Load persistent cache at start
  loadPersistentCache();
  // Track checked packages to avoid duplicates
  const checkedPackages = new Set();
  // Check transitive dependencies from package-lock.json
  if (lock && lock.packages) {
    Object.entries(lock.packages).forEach(([pkgKey, pkgData]) => {
      if (pkgKey && pkgData.version) {
        const name = pkgKey.replace(/^node_modules\//, '');
        const version = pkgData.version;
        const cacheKey = `${name}@${version}`;
        if (name && !checkedPackages.has(cacheKey)) {
          console.log(`Checking lockfile package: ${name}@${version}`);
          checkPackageAge(name, version);
          checkedPackages.add(cacheKey);
        }
      }
    });
  }
  // Check top-level dependencies from package.json
  const depTypes = ['dependencies', 'devDependencies', 'peerDependencies', 'optionalDependencies'];
  let hasDeps = false;
  depTypes.forEach(type => {
    const deps = pkg[type] || {};
    if (Object.keys(deps).length > 0) {
      hasDeps = true;
    }
    Object.entries(deps).forEach(([name, spec]) => {
      console.log(`Processing ${type}: ${name}@${spec}`);
      if (spec.startsWith('file:') || spec.startsWith('git:') || spec.startsWith('http:') || spec.startsWith('https:')) {
        console.log(`Skipping non-registry package: ${name}@${spec}`);
        return;
      }
      const resolvedVersion = resolveVersion(name, spec, lock);
      const cacheKey = `${name}@${resolvedVersion}`;
      if (!checkedPackages.has(cacheKey)) {
        checkPackageAge(name, resolvedVersion);
        checkedPackages.add(cacheKey);
      } else {
        console.log(`Skipping already checked package: ${name}@${resolvedVersion}`);
      }
    });
  });
  if (!hasDeps && (!lock || !lock.packages || Object.keys(lock.packages).length === 0)) {
    console.log('No dependencies to check. Proceeding.');
  } else {
    console.log('All packages checked. Proceeding with installation.');
  }
  
  // Save cache at end
  savePersistentCache();
}
try {
  console.log('Starting preinstall script');
  main();
} catch (error) {
  console.error(`Error: ${error.message}`);
  // Save cache even on failure (partial results are useful)
  try {
    savePersistentCache();
  } catch (saveError) {
    console.warn(`Failed to save cache on error: ${saveError.message}`);
  }
  
  // Clean up node_modules and package-lock.json on failure
  try {
    const nodeModulesPath = path.join(process.cwd(), 'node_modules');
    const lockPath = path.join(process.cwd(), 'package-lock.json');
    const altLockPath = path.join(process.cwd(), '.package-lock.json');
    
    if (fs.existsSync(nodeModulesPath)) {
      fs.rmSync(nodeModulesPath, { recursive: true, force: true });
      console.log('Cleaned up node_modules due to failure');
    }
    if (fs.existsSync(lockPath)) {
      fs.unlinkSync(lockPath);
      console.log('Cleaned up package-lock.json due to failure');
    }
    if (fs.existsSync(altLockPath)) {
      fs.unlinkSync(altLockPath);
      console.log('Cleaned up .package-lock.json due to failure');
    }
  } catch (cleanupError) {
    console.error(`Cleanup failed: ${cleanupError.message}`);
  }
  process.exit(1);
}
Note for npm hook script: This example can be improved by reading the cooldown value from a variable, managing an allowlist, or improving performance. Don't forget to allow this preinstall script itself to run when using --ignore-scripts. Also note that preinstall hooks are not executed when using npm install --package-lock-only; see the Lock Dependencies section for handling that scenario.
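A minimal sketch of wiring the hook up, assuming the script above is saved as scripts/preinstall-cooldown.js (the path is an example):
# Register the hook in package.json
npm pkg set scripts.preinstall="node scripts/preinstall-cooldown.js"
# In CI, run the check explicitly, then install with dependency scripts disabled
node scripts/preinstall-cooldown.js && npm ci --ignore-scripts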
References:
Multi-Factor Authentication and Token Hygiene
Enable 2FA on npm:
# Enable 2FA for authentication and publishing
npm profile enable-2fa auth-and-writes
# Check 2FA status
npm profile get
# Publish with OTP (one-time password)
npm publish --otp=123456
Token management (only if you cannot use OIDC):
# Prefer granular publish-only tokens with short expiry
npm token create --read-only    # for install-only CI
    
# Optional: restrict by source IPs
npm token create --cidr=10.0.0.0/8
npm token list
npm token revoke <token-id>
# Check token information
npm token list --json
CI/CD best practices:
# GitHub Actions with Trusted Publishing (OIDC) - NO TOKEN NEEDED
name: Publish
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # Required for OIDC authentication
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'  # Includes npm 10.8.2+ with OIDC support
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish
        # No --provenance flag needed - automatically included with trusted publishing
        # No NODE_AUTH_TOKEN needed - OIDC authentication happens automatically
Setup requirements for Trusted Publishing (one-time configuration):
Configure on npmjs.com:
- Navigate to your package settings at https://www.npmjs.com/package/<your-package>/access
- Click "Configure trusted publisher"
- Select "GitHub Actions"
- Enter:
  - Repository owner/organization: your-org
  - Repository name: your-repo
  - Workflow filename: publish.yml (or your workflow file name)
  - Environment name: (optional, leave blank if not using GitHub environments)
Ensure npm CLI version:
- Use a recent npm (10.x or newer) for OIDC trusted publishing and provenance; npm 11+ recommended.
- Node.js 20 ships with npm 10.x; Node.js 22 ships with npm 11.x.
Benefits of Trusted Publishing:
- Zero secrets management - no tokens to create, rotate, or secure
- Automatic provenance generation - no --provenance flag needed
- Reduced attack surface - no long-lived credentials to steal
- Cryptographic proof of source and build environment
- Compliance with OpenSSF best practices
Fallback to tokens (if trusted publishing unavailable):
If you cannot use trusted publishing yet, use short-lived tokens:
# Legacy approach with short-lived tokens
name: Publish
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish --provenance
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
Token best practices (when tokens are required):
- Create granular access tokens (not classic tokens)
- Scope tokens to specific packages
- Set short expiration (eg ≤90 days)
- Rotate tokens regularly
- Never use tokens for public package publishing if trusted publishing is available
Monitoring account security:
# Check package signatures (npm 9+)
npm audit signatures
# Monitor for unusual activity (requires npm login and appropriate permissions)
npm access ls-packages
npm access ls-collaborators <package-name>
References:
Defense in Depth (Level 2)
Sandbox Installation
Container-based isolation:
# Dedicated install stage with minimal permissions
FROM node:20-alpine AS installer
# Create non-root user
RUN addgroup -g 1001 -S installer && \
    adduser -u 1001 -S installer -G installer
WORKDIR /install
# Copy dependency files only
COPY  package*.json ./
# Switch to non-root user
USER installer
# Install with restricted capabilities
RUN npm ci --ignore-scripts --omit=dev
# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=installer /install/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "server.js"]
Docker with network isolation:
Option 1: Complete offline install (Recommended as no further infrastructure dependency)
# Step 1: Pre-populate cache with network access
docker run --rm \
  -v npm-cache:/root/.npm \
  -v $(pwd):/app \
  -w /app \
  node:20-alpine \
  npm ci --ignore-scripts
# Step 2: Install offline using cached packages (network completely disabled)
docker run --rm \
  --network=none \
  --memory=2g \
  --cpus=2 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v npm-cache:/root/.npm \
  -v $(pwd):/app \
  -w /app \
  node:20-alpine \
  npm ci --ignore-scripts --prefer-offline --cache /root/.npm
# Verification: Attempt network access (should fail)
docker run --rm --network=none alpine wget -O- https://google.com 2>&1 | grep -q "bad address" && echo "Network isolation verified"
Option 2: Corporate proxy with domain filtering (Practical for enterprises)
# Requires: Corporate HTTP proxy configured with domain allowlist
# Example: proxy.company.intra:8080 allows only registry.npmjs.org
docker run --rm \
  --memory=2g \
  --cpus=2 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -e HTTP_PROXY=http://proxy.company.intra:8080 \
  -e HTTPS_PROXY=http://proxy.company.intra:8080 \
  -e NO_PROXY=localhost,127.0.0.1,internalArtifactRepo \
  -v $(pwd):/app \
  -w /app \
  node:20-alpine \
  npm ci --ignore-scripts
# The corporate proxy handles domain allowlisting
# Configure your proxy to only allow:
# - registry.npmjs.org (npm registry)
# - objects.githubusercontent.com (GitHub releases, if needed)
# Verification: Test that proxy blocks unauthorized domains
docker run --rm \
  -e HTTP_PROXY=http://proxy.company.intra:8080 \
  -e HTTPS_PROXY=http://proxy.company.intra:8080 \
  alpine wget -O- https://malicious-site.com 2>&1 | grep -q "Forbidden\|403\|blocked" && echo "Proxy filtering verified"
Option 3: Custom Docker network with egress controls (Advanced - requires infrastructure)
# PREREQUISITES: This option requires external infrastructure setup FIRST.
# 
# Docker's --network flag does NOT provide filtering by itself - it simply assigns 
# a network object to the container. The network object itself has no filtering capability.
# 
# You must then apply filtering to this Docker network by configuring one of:
# 1. Corporate HTTP proxy with domain allowlist (equivalent to Option 2 above)
# 2. Firewall rules on Docker host with egress filtering (iptables/nftables) - shown below
# 3. Docker network plugin with built-in egress controls (Calico, Cilium, etc.)
#
# This example shows HOW to use a restricted network once infrastructure exists,
# but is NOT a complete standalone solution.
# Step 1: Create custom network
docker network create restricted
# Step 2: Find the subnet Docker assigned to this network
RESTRICTED_SUBNET=$(docker network inspect restricted -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
echo "Restricted network subnet: $RESTRICTED_SUBNET"
# Example output: 172.18.0.0/16
# Step 3: Configure firewall rules on Docker host (must run as root)
# WARNING: This is complex and requires ongoing maintenance.
# IP-based allowlists for CDNs are impractical. Better to use Option 2 (proxy) instead.
# Allow npm registry only (Cloudflare IPs used by npmjs.org)
sudo iptables -I DOCKER-USER -s $RESTRICTED_SUBNET -d 104.16.0.0/12 -p tcp --dport 443 -j ACCEPT
# Block all other HTTPS traffic from this network
sudo iptables -I DOCKER-USER -s $RESTRICTED_SUBNET -p tcp --dport 443 -j REJECT
# Block HTTP as well
sudo iptables -I DOCKER-USER -s $RESTRICTED_SUBNET -p tcp --dport 80 -j REJECT
# Verify rules are installed
sudo iptables -L DOCKER-USER -n -v | grep $RESTRICTED_SUBNET
# Step 4: Use the restricted network
docker run --rm \
  --network=restricted \
  --dns=10.0.0.1 \
  --memory=2g \
  --cpus=2 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/app \
  -w /app \
  node:20-alpine \
  npm ci --ignore-scripts
# Verification: Test that restrictions work
echo "Testing restriction (should fail):"
docker run --rm --network=restricted alpine wget -T 5 -O- https://google.com 2>&1 | grep -q "wget: can't connect\|Connection refused\|Network unreachable" && echo "[OK] Network restrictions verified" || echo "[NOK] Restrictions NOT working - check firewall rules"
echo "Testing npm registry access (should succeed):"
docker run --rm --network=restricted alpine wget -T 5 -O- https://registry.npmjs.org/ 2>&1 | grep -q "<!DOCTYPE html>" && echo "[OK] npm registry accessible" || echo "[NOK] npm registry blocked - adjust firewall rules"
# Cleanup (when done testing):
# sudo iptables -D DOCKER-USER -s $RESTRICTED_SUBNET -d 104.16.0.0/12 -p tcp --dport 443 -j ACCEPT
# sudo iptables -D DOCKER-USER -s $RESTRICTED_SUBNET -p tcp --dport 443 -j REJECT
# sudo iptables -D DOCKER-USER -s $RESTRICTED_SUBNET -p tcp --dport 80 -j REJECT
# docker network rm restricted
# IMPORTANT LIMITATIONS:
# - CDN IP ranges change frequently (npm uses Cloudflare)
# - npm registry uses multiple CDN providers in some regions
# - Requires root access to Docker host (not possible in many environments)
# - iptables rules don't persist across reboots (need to save: iptables-save)
# - Rule order matters (DOCKER-USER chain must be before DOCKER-ISOLATION)
#
# RECOMMENDATION: Use Option 2 (corporate proxy) instead for production.
# Proxies allow domain-based filtering and are easier to maintain.
Resource limits explanation:
- --memory=2g: Prevents memory exhaustion attacks
- --cpus=2: Prevents CPU hogging
- --pids-limit=100: Prevents fork bombs
- --security-opt=no-new-privileges: Prevents privilege escalation
npm and node_modules can be memory-intensive due to:
- Deep dependency trees (common: 500-1000 packages)
- Post-install scripts (native compilation)
- Parallel downloads and extraction
- Package deduplication algorithms
Linux namespace isolation with bubblewrap (local development only):
Important limitations:
- Requires USER_NS (user namespaces) capability - NOT available on:
  - GitHub Actions hosted runners
  - Many Docker containers (unless --privileged)
  - Some corporate Linux systems with restricted security policies
 
- Requires Linux kernel with namespace support (kernel 3.8+)
- Best suited for local development machines with full control
# Install bubblewrap (Debian/Ubuntu)
sudo apt-get install bubblewrap
# Verify user namespace support
if [ ! -w /proc/self/uid_map ]; then
    echo "ERROR: User namespaces not available"
    echo "This system cannot run bubblewrap sandboxing"
    exit 1
fi
# Run npm install in sandbox with network isolation
# Note: --unshare-net creates network namespace but doesn't guarantee isolation on all distros
bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --ro-bind /sbin /sbin \
  --tmpfs /tmp \
  --tmpfs /var \
  --bind "$(pwd)" "$(pwd)" \
  --unshare-all \
  --unshare-net \
  --die-with-parent \
  --new-session \
  --setenv HOME /tmp \
  -- \
  npm ci --ignore-scripts
# For true network isolation, combine with firewall rules or use --unshare-net explicitly
# Some distros allow network access even with --unshare-all
For CI/CD environments, use Docker-based isolation (Options 1-3 above) instead, as it works reliably across all CI platforms including GitHub Actions, GitLab CI, CircleCI, etc.
Alternative for restricted environments:
# Use Docker even on Linux for better CI/CD compatibility
docker run --rm \
  --network=none \
  --memory=2g \
  --cpus=2 \
  --pids-limit=100 \
  --cap-drop=ALL \
  --cap-add=CHOWN \
  --cap-add=SETUID \
  --cap-add=SETGID \
  --security-opt=no-new-privileges \
  -v $(pwd):/app \
  -w /app \
  node:20-alpine \
  npm ci --ignore-scripts --prefer-offline
Capability management:
# Drop all capabilities and add only what's needed
docker run --rm \
  --cap-drop=ALL \
  --cap-add=CHOWN \
  --cap-add=SETUID \
  --cap-add=SETGID \
  --memory=2g \
  --cpus=2 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/app \
  -w /app \
  node:20-alpine \
  npm ci --ignore-scripts
Note: Depending on your project, additional capabilities may be needed. Test in your environment and add capabilities incrementally as required.
References:
Minimize Dependencies
Audit and identify unused packages:
# Find unused dependencies
npx depcheck
# Analyze dependency tree
npm ls --all
npm ls --depth=0  # Direct dependencies only
# Deduplicate packages (npm dedupe modifies node_modules - run it inside the project, not as root)
npm dedupe
# Analyze bundle size and identify large deps
npx webpack-bundle-analyzer dist/stats.json
Analyze transitive dependencies:
# See what a package will install before installing
npm view <package-name> dependencies peerDependencies
# Explain why a package is installed
npm explain <package-name>
# Find all instances of a package
npm ls <package-name>
Security auditing:
# Run security audit
npm audit
# Audit only production dependencies
npm audit --omit=dev
# Fix vulnerabilities automatically
npm audit fix
# Generate detailed audit report
npm audit --json > audit-report.json
npm-audit - clarification note:
npm-audit only checks known vulnerabilities and doesn’t detect malicious code; it’s complementary to behavioral scanning.
Evaluate package quality before adding:
# Check package metadata
npm view <package-name>
# Prefer inspecting the published tarball for scripts
npm pack <package-name> --silent
tar -xzf <package-name>-<version>.tgz
jq .scripts package/package.json
# Weekly downloads via the public API
# Manual check could be done on npm-stat.com or similar
curl -s https://api.npmjs.org/downloads/point/last-week/<package-name> | jq .downloads
# Check package size
npm view <package-name> dist.unpackedSize
# Optional: lifecycle scripts reported in packument (may be absent or incomplete)
npm view <package-name> scripts
Corporate environments with private registry mirrors: for air-gapped installs, note that npm view and npm audit must point to the mirror; set registry= and the proxy environment variables in CI.
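A minimal .npmrc sketch for such environments (the mirror and proxy URLs are placeholders):
# .npmrc
registry=https://artifacts.company.intra/npm/
proxy=http://proxy.company.intra:8080
https-proxy=http://proxy.company.intra:8080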
References:
Pre-Installation Vetting
Automated security scanning:
# OpenSSF Scorecard
npx @ossf/scorecard <repository-url>
# Socket.dev CLI
npx socket-cli report create package.json
# Snyk test
npx snyk test
# Check for known vulnerabilities (npm audit does not detect malicious code)
npm audit
Generate SBOM:
# CycloneDX format
npx @cyclonedx/cyclonedx-npm --output-file sbom.json
# SPDX format
npx @spdx/spdx-sbom-generator npm .
# Fail the build if SBOM generation changes between commits
git diff --exit-code -- sbom.json || {
  echo "::error::SBOM drift detected; review dependency changes";
  exit 1;
}Manual inspection:
# View package details
npm view <package-name>
# Download and inspect package without installing
npm pack <package-name>
tar -xzf <package-name>-<version>.tgz
cd package && cat package.json
# Check package integrity
npm view <package-name> dist.integrity
npm view <package-name> dist.shasumVerify package signatures: npm audit signatures validates signatures only when the registry and package actually publish them. Treat as additive integrity, not an anti-malware control, and expect partial ecosystem coverage.
npm audit signaturesReferences:
Advanced Hardening (Level 3)
Continuous Monitoring
Runtime monitoring with Falco:
# falco_rules.local.yaml
- rule: Suspicious npm Network Activity
  desc: Detect npm/pnpm making outbound connections to public IPs during install
  condition: >
    proc.name in (npm, pnpm, node, npx) and
    evt.type=connect and
    evt.dir=> and
    fd.l4proto=tcp and
    not fd.rip in (
      "10.0.0.0/8",
      "172.16.0.0/12", 
      "192.168.0.0/16",
      "127.0.0.0/8"
    )
  output: >
    npm process connecting to public IP during package operation
    (command=%proc.cmdline dst=%fd.rip:%fd.rport sport=%fd.lport 
     user=%user.name container=%container.id)
  priority: WARNING
  tags: [network, npm, supply_chain]
- rule: npm Accessing Sensitive Files
  desc: Detect npm/node accessing SSH keys, cloud credentials, or tokens
  condition: >
    proc.name in (npm, pnpm, node, npx) and
    evt.type in (open, openat, openat2) and
    evt.is_open_write=false and
    (fd.name glob "/home/*/.ssh/*" or
     fd.name glob "/root/.ssh/*" or
     fd.name glob "/home/*/.aws/*" or
     fd.name glob "/root/.aws/*" or
     fd.name glob "/home/*/.config/gcloud/*" or
     fd.name glob "/root/.config/gcloud/*" or
     fd.name glob "/home/*/.kube/*" or
     fd.name glob "/root/.kube/*" or
     fd.name glob "/home/*/.docker/config.json" or
     fd.name glob "/root/.docker/config.json" or
     fd.name contains "id_rsa" or
     fd.name contains "id_ecdsa" or
     fd.name contains "id_ed25519")
  output: >
    npm process accessing sensitive credential files
    (file=%fd.name command=%proc.cmdline user=%user.name 
     parent=%proc.pname container=%container.id)
  priority: CRITICAL
  tags: [filesystem, npm, credentials, supply_chain]
- rule: npm Spawning Suspicious Processes
  desc: Detect npm spawning shells or network utilities during installation
  condition: >
    spawned_process and
    proc.pname in (npm, pnpm, node, npx) and
    proc.name in (bash, sh, zsh, curl, wget, nc, netcat, python, python3, perl, ruby)
  output: >
    npm spawned suspicious subprocess during package operation
    (process=%proc.name cmdline=%proc.cmdline parent=%proc.pname 
     user=%user.name container=%container.id)
  priority: WARNING
  tags: [process, npm, supply_chain]
Notes on Falco network rules:
Falco network rules use IP addresses, not DNS names. For corporate environments with proxies, adjust rules to allow your specific proxy IPs.
Detection approaches:
- Public IP detection (recommended for isolated builds): Alert on any non-RFC1918 connections
- Allowlist approach (for environments with known registries): List allowed destination IPs
- Proxy-aware (for corporate environments): Permit connections through corporate proxies only (a sketch of this variant follows)
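A sketch of the proxy-aware variant, reusing the rule structure above (the proxy IP is a placeholder to replace with your egress proxy address):
# falco_rules.local.yaml
- rule: npm Bypassing Corporate Proxy
  desc: Detect npm/pnpm connecting anywhere other than the approved egress proxy
  condition: >
    proc.name in (npm, pnpm, node, npx) and
    evt.type=connect and
    evt.dir=> and
    fd.l4proto=tcp and
    not fd.rip in ("10.0.0.15")
  output: >
    npm process bypassed the corporate proxy during package operation
    (command=%proc.cmdline dst=%fd.rip:%fd.rport user=%user.name container=%container.id)
  priority: WARNING
  tags: [network, npm, supply_chain]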
File system monitoring with osquery:
-- Monitor for new files created during npm install
SELECT * FROM file_events 
WHERE path LIKE '/home/%/.ssh/%' 
  AND action = 'CREATED'
  AND time > (SELECT unix_time FROM time) - 300;
-- Monitor for unexpected processes spawned
SELECT * FROM processes
WHERE parent IN (SELECT pid FROM processes WHERE name IN ('npm','pnpm','node'))
  AND name IN ('bash','sh','curl','wget','python');
    
-- Requires filesystem/process eventing enabled (eg --disable_events=false) and FIM tables configured for file_events.
Dependency diff analysis:
# Compare package versions
npm diff <package-name>@1.0.0 <package-name>@1.0.1
# Detailed package comparison
npm pack <package-name>@1.0.0 && mkdir -p /tmp/old && tar -xzf <package-name>-1.0.0.tgz -C /tmp/old
npm pack <package-name>@1.0.1 && mkdir -p /tmp/new && tar -xzf <package-name>-1.0.1.tgz -C /tmp/new
diff -r /tmp/old /tmp/new
# Using diffoscope for a detailed, structured comparison of the extracted packages
diffoscope /tmp/old /tmp/new
Network monitoring:
# Monitor network connections during install
strace -e trace=connect npm install 2>&1 | grep connect
# Using tcpdump
sudo tcpdump -i any -n 'dst port 443 or dst port 80' &
npm install
sudo pkill tcpdump
    
# Note: strace requires Linux; tcpdump typically needs root or CAP_NET_RAW.
Rollback procedures:
# Quick rollback using git
git show HEAD~1:package-lock.json > package-lock.json
npm ci
# Keep backup of lockfiles
cp package-lock.json package-lock.json.backup.$(date +%Y%m%d)References:
7.2. Python / pip Ecosystem
Implementation Guidance
Quick Wins (Level 1)
Disable Lifecycle Scripts
Challenge: Python's packaging system executes arbitrary code by design during installation. Unlike npm where scripts can be completely disabled, Python has NO reliable way to prevent code execution during package installation:
- setup.py (legacy): Runs arbitrary Python code during source installs
- PEP 517/518 build backends (modern): Still execute build backend code during wheel creation
- Wheel installation: Prefer prebuilt wheels from a trusted index and enforce --only-binary=:all: to avoid executing arbitrary build backends from sdists. This does not eliminate runtime risk (malicious code on import) but prevents build-time execution.
- Fundamental limitation: Code execution is architecturally integrated into Python's package installation process
Mitigation strategy: Since install-time code execution cannot be prevented in Python, defense must focus on other layers (sandboxing, pre-vetting, network isolation, and using pre-built wheels from trusted sources).
Wheel-only fail-closed install:
# Fail closed: wheels only, hashed, no dep resolver drift
PIP_ONLY_BINARY=:all: \
pip install --require-hashes --only-binary=:all: --no-deps -r requirements.txt
# Check if wheels are available before attempting
pip index versions <package-name>
# Best practice: Use wheels from trusted sources (PyPI, private mirror)
Platform Compatibility: --only-binary=:all: will fail if:
- Wheels not available for your platform (ARM, Alpine Linux, Windows on ARM)
- Python version too new (some packages lag behind latest Python releases)
- Package only publishes sdist (source distribution)
- Uncommon architecture/OS combinations (e.g., Alpine on ARM64)
Pre-flight check:
# Test if all packages have wheels available before actual install
pip install --dry-run --only-binary=:all: -r requirements.txt
If this fails, you must either:
- Build in sandbox and accept source execution risk
- Find alternative package with wheel support
- Request wheel from maintainer (file GitHub issue)
- Build your own wheel in a trusted environment and host it privately (see the sketch below)
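A sketch of the build-your-own-wheel option, assuming the build and twine tools are installed and a private index exists (the index URL is a placeholder):
# Fetch and review the sdist without installing it
pip download <package-name>==<version> --no-deps --no-binary=:all:
tar -xzf <package-name>-<version>.tar.gz
# After review, build the wheel in an isolated, trusted environment
python -m build --wheel <package-name>-<version>/
# Host it on your private index
twine upload --repository-url https://artifacts.company.intra/pypi/ <package-name>-<version>/dist/*.whl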
PEP 517/518 builds:
PEP 517/518 define how pip builds from source (sdists) using a declared backend in pyproject.toml. Building from source executes backend code during the build phase. Prefer prebuilt wheels to avoid build-time execution.
# pyproject.toml (example - this is in the package, not your project)
[build-system]
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta"
Key point: Even with PEP 517/518, building from source still executes code (the build backend). The only safe approach is to use prebuilt wheels:
# This prevents source builds but will fail if no wheel exists
pip install --only-binary=:all: -r requirements.txt
Better approach - use Poetry or pip-tools with hash verification:
# Poetry (Recommended)
poetry install --no-root
# pip-sync enforces EXACT environment (uninstalls extras)
pip-sync requirements.txt
# If you want to preserve other packages, use pip install instead:
pip install -r requirements.txt --require-hashes  # Generated with pip-compile --generate-hashes
Inspection workflow (when you must install from source):
# Download and inspect package before installing
pip download <package-name> --no-deps
tar -xzf <package-name>-<version>.tar.gz
# For packages with setup.py
cat <package>/setup.py  # Review for suspicious code
# For packages with pyproject.toml
cat <package>/pyproject.toml  # Check build backend
# Note: You'd still need to inspect the build backend code
# Check for suspicious patterns
grep -r "exec\|eval\|compile\|__import__" <package>/
grep -r "requests\|urllib\|http\|socket" <package>/
# Only install if inspection passes
pip install <package-name>
Important limitation: Manual inspection is time-consuming and requires expertise. For production environments, prioritize:
- Use wheels-only from trusted sources (PyPI, corporate mirror)
- Sandbox all builds in containers with network isolation
- Vet packages before adding to requirements
- Monitor for unexpected behavior after installation
Note: Python ecosystem has weaker controls for install scripts compared to npm. Focus on other layers (sandboxing, pre-vetting) for protection. There is no Python equivalent to npm's --ignore-scripts that provides comprehensive protection.
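A minimal sketch of a network-isolated pip install in Docker, mirroring the npm two-step approach shown earlier (the image tag and paths are examples; the requirements file is assumed to be hash-pinned with pip-compile --generate-hashes):
# Step 1: Download wheels only, with network access (no build code executed)
docker run --rm -v $(pwd):/app -w /app python:3.11-slim \
  pip download --only-binary=:all: -d /app/wheels -r requirements.txt
# Step 2: Install with the network disabled, from the local wheel directory
docker run --rm --network=none --memory=2g --cpus=2 --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/app -w /app python:3.11-slim \
  pip install --no-index --find-links=/app/wheels --require-hashes -r requirements.txt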
References:
Lock Dependencies
Understanding Python lockfiles and transitive dependencies:
Unlike requirements.in which only lists your direct dependencies, Python lockfiles (when properly generated with pip-tools or Poetry) include ALL transitive dependencies with exact versions and cryptographic hashes.
# requirements.in - your direct dependencies only
django==4.2.7
requests==2.31.0
# requirements.txt (generated by pip-compile) - ALL dependencies with hashes
django==4.2.7 \
    --hash=sha256:8e0f1c2c2786b5c0e39fe1afce24c926040fad47c8ea8ad30aaf1188df29fc41
sqlparse==0.4.4 \              # ← Transitive dependency of django
    --hash=sha256:5430a4fe2ac7d0f93e66f1efc6e1338a41884b7ddf2a350cedd20ccc4d9d28f3
asgiref==3.7.2 \               # ← Another transitive dependency  
    --hash=sha256:89b2ef2247e3b562a16eef663bc0e2e703ec6468e2fa8a5cd61cd449786d4f6e
Why hashes matter: The --require-hashes flag verifies not just version numbers but the actual content of every package (including transitives), protecting against registry tampering and ensuring you get exactly what you tested.
Important: --require-hashes is all-or-nothing - if even ONE package in your entire dependency tree is missing a hash, the install will fail. This is why you must use pip-compile --generate-hashes rather than manually creating requirements.txt files.
Constraints when enforcing --require-hashes:
- Editable installs (-e .) and most VCS URLs won't work. Build a wheel in CI (python -m build) and lock that wheel by filename with a hash (see the sketch below).
- Consider omitting --allow-unsafe in production to avoid toolchain drift from pinning pip/setuptools/wheel inside app envs. Manage tool versions separately.
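A minimal sketch of locking a locally built wheel by hash (the wheel filename is an example):
# Build the wheel in CI
python -m build --wheel
# Compute its hash
pip hash dist/myapp-1.0.0-py3-none-any.whl
# Add the resulting line to requirements.txt, for example:
#   ./dist/myapp-1.0.0-py3-none-any.whl --hash=sha256:<value printed above>
# Then install as usual
pip install --require-hashes -r requirements.txt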
Understanding --allow-unsafe:
The --allow-unsafe flag includes pip, setuptools, and wheel in your lockfile:
Use --allow-unsafe when:
- You want fully reproducible environments (same tool versions everywhere)
- Running in containers where toolchain is part of app environment
- Testing requires exact pip/setuptools versions
 Omit --allow-unsafe when:
- You want to keep pip/setuptools/wheel updated independently
- Your app doesn't depend on specific versions of these tools
- You manage toolchain separately (e.g., via Docker base image)
For most applications: Use --allow-unsafe for maximum reproducibility, but understand you'll need to update these packages separately for security patches.
Using requirements.txt with exact, hashed locks (recommended with pip-tools)
Do not generate lockfiles with pip freeze for production; it lacks hashes and captures incidental environment state.
Using pip-tools:
# Install pip-tools
pip install pip-tools
# Create requirements.in with your direct dependencies
cat > requirements.in << EOF
django==4.2.7
requests==2.31.0
celery==5.3.4
EOF
# Generate locked requirements.txt with hashes for ALL dependencies
# IMPORTANT: Use --allow-unsafe to include setuptools, pip, wheel
pip-compile --generate-hashes --allow-unsafe requirements.in
# Install from locked file with hash verification
pip-sync requirements.txt
# Or install with explicit hash verification enforcement
pip install -r requirements.txt --require-hashes
Critical requirements for --require-hashes to work:
- Every package (direct and transitive) MUST have a hash
- All versions MUST be pinned with == (not >=, ~=, or ranges)
- Use pip-compile --generate-hashes to generate the file (manual creation is error-prone)
- Include --allow-unsafe flag to prevent install failures with setuptools/pip/wheel
- Cannot use with editable installs (-e packages)
- Cannot use with VCS/URL dependencies without additional configuration
What happens without hashes:
# This will FAIL if any package is missing a hash:
pip install -r requirements.txt --require-hashes
# Error: "In --require-hashes mode, all requirements must have their 
# versions pinned with ==. These do not: [package list]"
Using Poetry (modern approach):
# Initialize project
poetry init
# Add dependencies (auto-locks in poetry.lock)
poetry add django==4.2.7
# Install from lockfile
poetry install
# Update all dependencies to latest compatible versions
poetry update
# Update specific package only
poetry update django
# Refresh the lockfile without changing locked versions
# (Poetry 1.x syntax; in Poetry 2.0+ this is the default behavior of poetry lock)
poetry lock --no-update
Note on security updates: Poetry does not have a built-in capability to update only packages with known vulnerabilities.
# Option 1: Use pip-audit (run in Poetry environment)
poetry run pip-audit --format json > audit.json
# Extract vulnerable package names (verify jq path matches actual output)
jq -r '.vulnerabilities[].name' audit.json | sort -u | while read pkg; do
    echo "Updating vulnerable package: $pkg"
    poetry update "$pkg"
done
# Or simpler - let pip-audit handle it:
poetry run pip-audit --fix --skip-editable
# Or with dry-run to see what would be fixed
pip-audit --fix --dry-run
# Option 2: Use dependency update services (automated)
# - Dependabot (GitHub) - automatically creates PRs for security updates
# - Renovate - supports Poetry and security-only updates
# Option 3: Manual selective updates
# First identify vulnerable packages
pip-audit
# Then update specific package
poetry update requests  # Update only the requests package
    
Using Pipenv:
# Create Pipfile with locked versions
pipenv install django==4.2.7
# Install from Pipfile.lock
pipenv install --deploy  # Fails if Pipfile.lock is missing/outdated
# Verify integrity (Pipfile vs Pipfile.lock)
pipenv verify
In requirements.txt, use exact versions:
# Good - exact versions
django==4.2.7
requests==2.31.0
celery==5.3.4
# Bad - floating versions
django>=4.2.0  # ✗ Allows any 4.2+ version
requests~=2.31  # ✗ Allows 2.31.x versions
celery  # ✗ Allows any version
CI/CD enforcement:
# GitHub Actions
name: Verify Dependencies
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      
      - name: Enforce hashed install from lock
        run: |
          python -m venv .venv && . .venv/bin/activate
          pip install --upgrade pip
          pip install -r requirements.txt --require-hashes --only-binary=:all: --no-deps
      
      - name: Check for floating versions
        run: |
          if grep -E '^[a-zA-Z0-9_-]+[>~!]=|^[a-zA-Z0-9_-]+.*\*' requirements.txt; then
            echo "::error::Floating version ranges detected in requirements.txt"
            echo "Only use exact versions with == operator"
            exit 1
          fi
      
      - name: Ensure lockfile is committed
        run: |
          # Check for pip-tools requirements.txt
          if [ -f requirements.in ]; then
            if ! git ls-files --error-unmatch requirements.txt > /dev/null 2>&1; then
              echo "::error::requirements.txt (lockfile) is not committed to git"
              exit 1
            fi
          fi
          # Check for Poetry lock
          if [ -f pyproject.toml ] && grep -q "tool.poetry" pyproject.toml; then
            if ! git ls-files --error-unmatch poetry.lock > /dev/null 2>&1; then
              echo "::error::poetry.lock is not committed to git"
              exit 1
            fi
          fiReferences:
Version Cooldown
Challenge: Python ecosystem lacks native cooldown support. Requires custom implementation or registry proxy.
Using private PyPI mirror (devpi):
# Install devpi
pip install devpi-server devpi-client
# Start devpi server
devpi-server --start
# Configure with cooldown filter (custom plugin needed)
# Create devpi_cooldown.py plugin
Custom pre-install validation script:
#!/usr/bin/env python3
# scripts/check_package_age.py
import sys
import json
import urllib.request
import urllib.error
from datetime import datetime, timezone
def check_package_age(package_name, min_age_days=7, version=None):
    """
    Check if a package (or specific version) is older than min_age_days.
    Args:
        package_name (str): Name of the package to check.
        min_age_days (int): Minimum age in days for the package to be allowed.
        version (str, optional): Specific version to check. If None, checks latest version.
    Returns:
        bool: True if package is old enough or metadata is incomplete, False otherwise.
    Note: Relies on PyPI JSON API metadata. Some packages may have
    incomplete metadata or different structures.
    """
    # Track whether we're querying a specific version or latest
    query_specific_version = version is not None
    
    # Construct URL based on whether version is specified
    if query_specific_version:
        url = f"https://pypi.org/pypi/{package_name}/{version}/json"
    else:
        url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            data = json.loads(response.read())
        # Get version if not specified (use latest)
        if not query_specific_version:
            version = data['info']['version']
        
        # Get release files based on query type
        if query_specific_version:
            # Version-specific query returns 'urls' field
            release_files = data.get('urls', [])
        else:
            # General package query uses 'releases' dict
            if version not in data.get('releases', {}):
                print(f"WARNING: {package_name} version {version} not found in releases metadata")
                return True  # Allow installation if metadata incomplete
            release_files = data['releases'][version]
        # Check if we have release files
        if not release_files:
            print(f"WARNING: {package_name} {version} has no release files in metadata")
            return True  # Allow installation if no files listed
        # Try multiple possible fields for upload time
        upload_time_str = None
        for field in ['upload_time', 'upload_time_iso_8601', 'upload_timestamp']:
            upload_time_str = release_files[0].get(field)
            if upload_time_str:
                break
        
        # If still no upload time, try the package info level
        if not upload_time_str:
            upload_time_str = data.get('info', {}).get('release_date')
        
        if not upload_time_str:
            print(f"WARNING: {package_name} {version} missing upload_time in all known fields")
            return True  # Allow installation if timestamp unavailable
        # Parse date with timezone awareness
        # Handle both formats: "2023-01-15T10:30:00" and "2023-01-15T10:30:00Z"
        try:
            if upload_time_str.endswith('Z'):
                release_date = datetime.fromisoformat(upload_time_str.replace('Z', '+00:00'))
            else:
                # Try parsing with timezone info, fall back to assuming UTC
                try:
                    release_date = datetime.fromisoformat(upload_time_str)
                    if release_date.tzinfo is None:
                        release_date = release_date.replace(tzinfo=timezone.utc)
                except ValueError:
                    # Try alternate format
                    release_date = datetime.strptime(upload_time_str, '%Y-%m-%d %H:%M:%S')
                    release_date = release_date.replace(tzinfo=timezone.utc)
        except (ValueError, AttributeError) as e:
            print(f"WARNING: {package_name} {version} has unparseable date format: {upload_time_str}")
            return True  # Fail open on date parsing errors
        
        now = datetime.now(timezone.utc)
        days_old = (now - release_date).days
        if days_old < min_age_days:
            print(f"BLOCKED: {package_name} version {version} "
                  f"published {days_old} days ago. Minimum: {min_age_days} days.")
            return False
        print(f"ALLOWED: {package_name} {version} published {days_old} days ago")
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            print(f"ERROR: Package {package_name}{f' {version}' if version else ''} not found on PyPI")
        else:
            print(f"ERROR: HTTP {e.code} when checking {package_name}{f' {version}' if version else ''}")
        return False
    except urllib.error.URLError as e:
        print(f"ERROR: Network error checking {package_name}{f' {version}' if version else ''}: {e.reason}")
        # Fail open on network errors to avoid blocking legitimate installs
        print(f"WARNING: Allowing installation due to network error - manual review recommended")
        return True
    except (KeyError, IndexError, ValueError) as e:
        print(f"WARNING: Unexpected PyPI metadata structure for {package_name}{f' {version}' if version else ''}: {e}")
        print(f"Allowing installation due to metadata parsing failure")
        return True  # Fail open to avoid blocking legitimate packages
    except Exception as e:
        print(f"ERROR: Unexpected error checking {package_name}{f' {version}' if version else ''}: {e}")
        print(f"WARNING: Allowing installation - manual review recommended")
        return True  # Fail open on unexpected errors
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: check_package_age.py <package_name> [version] [min_age_days]")
        print()
        print("Arguments:")
        print("  package_name     : Required. Name of the PyPI package")
        print("  version         : Optional. Specific version to check (e.g., '1.2.3')")
        print("  min_age_days    : Optional. Minimum age in days (default: 7)")
        print()
        print("Examples:")
        print("  check_package_age.py requests                    # Check latest version, 7 days")
        print("  check_package_age.py requests 2.31.0             # Check version 2.31.0, 7 days")
        print("  check_package_age.py requests 2.31.0 14          # Check version 2.31.0, 14 days")
        print("  check_package_age.py requests 14                 # Check latest version, 14 days")
        print()
        print("Returns exit code 0 if package is old enough, 1 otherwise.")
        sys.exit(1)
    package = sys.argv[1]
    
    # Parse arguments: handle both "pkg ver days" and "pkg days" formats
    version = None
    min_age = 7  # default
    
    if len(sys.argv) == 2:
        # Just package name: use latest, default min_age
        pass
    elif len(sys.argv) == 3:
        # Two arguments: could be "pkg ver" or "pkg days"
        arg2 = sys.argv[2]
        # If it looks like a version (contains dots or letters), treat as version
        if '.' in arg2 or any(c.isalpha() for c in arg2):
            version = arg2
        else:
            # Otherwise treat as min_age_days
            try:
                min_age = int(arg2)
            except ValueError:
                print(f"ERROR: Cannot parse '{arg2}' as version or days")
                sys.exit(1)
    elif len(sys.argv) >= 4:
        # Three or more arguments: pkg ver days
        version = sys.argv[2]
        try:
            min_age = int(sys.argv[3])
        except ValueError:
            print(f"ERROR: min_age_days must be an integer, got '{sys.argv[3]}'")
            sys.exit(1)
    if not check_package_age(package, min_age, version):
        sys.exit(1)
Limitations and considerations:
PyPI metadata assumptions:
- Script relies on PyPI JSON API structure which may vary between packages
- Some older packages may have incomplete or missing upload_time metadata
- Private PyPI mirrors may have different API structures
- Network timeouts or PyPI outages will cause checks to fail open (allow installation with warning)
Fail-open vs fail-closed behavior:
- Current script fails open (allows installation) when metadata is incomplete or network issues occur
- This prevents blocking legitimate packages due to metadata quirks or temporary network problems
- All fail-open cases log clear warnings for manual review
- For stricter security, change return True to return False in the warning cases (not recommended for most environments); a configurable sketch is shown below
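If you want to switch between fail-open and fail-closed without editing every return statement, one option is to route all inconclusive cases through a single helper gated by an environment variable. A minimal sketch; the variable name PKG_AGE_FAIL_CLOSED and the helper itself are hypothetical additions, not part of the script above:
# Hypothetical helper: centralizes the fail-open vs fail-closed decision.
# PKG_AGE_FAIL_CLOSED is an assumed environment variable name, not a pip or PyPI setting.
import os

FAIL_CLOSED = os.environ.get("PKG_AGE_FAIL_CLOSED", "0") == "1"

def inconclusive_result(package_name, reason):
    # Call this wherever the script currently falls back to "return True" with a warning.
    print(f"WARNING: {package_name}: {reason}")
    if FAIL_CLOSED:
        print("BLOCKED: strict mode enabled (PKG_AGE_FAIL_CLOSED=1)")
        return False
    print("Allowing installation - manual review recommended")
    return True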
Performance:
- Checks run sequentially (one package at a time)
- Each check makes 1 HTTPS request to PyPI
- For large requirements.txt files (100+ packages), this may take 1-2 minutes
- Consider caching results or running checks in parallel for better performance (see the sketches below)
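As a sketch of the parallel option, a small wrapper can invoke the script above once per pinned requirement through a thread pool. The scripts/check_package_age.py path, the worker count, and the wrapper name are assumptions to adapt to your layout:
# check_ages_parallel.py - hypothetical wrapper that runs age checks concurrently
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def check_one(spec):
    # spec looks like "requests==2.31.0"
    name, _, version = spec.partition("==")
    result = subprocess.run(
        [sys.executable, "scripts/check_package_age.py", name, version, "7"],
        capture_output=True, text=True,
    )
    print(result.stdout, end="")
    return result.returncode == 0

def main():
    with open("requirements.txt") as f:
        specs = [line.split("#", 1)[0].strip() for line in f]
    specs = [s for s in specs if "==" in s]
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(check_one, specs))
    return 0 if all(results) else 1

if __name__ == "__main__":
    sys.exit(main())
Keep the worker count modest; hammering the PyPI JSON API with many concurrent requests can trigger rate limiting.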
Alternative approaches:
- Use a private PyPI mirror (devpi) with cooldown filtering plugin
- Use registry proxy with time-based filtering
- Cache check results to avoid repeated API calls for the same package@version (see the sketch below)
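As a sketch of the caching idea, an on-disk cache keyed by package==version can skip the PyPI call for versions that already passed. It assumes check_package_age from the script above is importable and uses an arbitrary cache path; adjust both to your layout:
# Hypothetical caching layer around check_package_age (cache path is an assumption)
import json
import os

from check_package_age import check_package_age  # assumes scripts/ is on the import path

CACHE_PATH = os.path.expanduser("~/.cache/package-age-checks.json")

def _load_cache():
    try:
        with open(CACHE_PATH) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def _save_cache(cache):
    os.makedirs(os.path.dirname(CACHE_PATH), exist_ok=True)
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f, indent=2)

def cached_check(package_name, version, min_age_days=7):
    key = f"{package_name}=={version}"
    cache = _load_cache()
    if cache.get(key) is True:
        # A published release only gets older, so a previous "allowed" verdict stays valid.
        print(f"CACHED: {key} previously allowed")
        return True
    allowed = check_package_age(package_name, min_age_days, version)
    if allowed:
        cache[key] = True
        _save_cache(cache)
    return allowed
Blocked or inconclusive results are deliberately not cached, so they are re-checked on the next run.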
Integration with requirements:
# Pre-installation check with better error handling
while IFS= read -r line; do
  # Skip comments and empty lines
  [[ "$line" =~ ^[[:space:]]*# ]] && continue
  [[ -z "$line" ]] && continue
  name="${line%%==*}"
  ver="${line##*==}"
  [[ -z "$name" || -z "$ver" ]] && continue

  # Run check
  python scripts/check_package_age.py "$name" "$ver" 7 || exit 1
done < <(grep -E '^[A-Za-z0-9_.-]+==' requirements.txt | sed 's/ *#.*//')
# Then install
pip install -r requirements.txt
Renovate configuration for Python:
// renovate.json
{
  "extends": ["config:base"],
  "dependencyDashboard": true,
  "packageRules": [
    {
      "description": "Python default cooldown and PR gating",
      "matchManagers": ["pip_requirements", "pip_setup", "poetry"],
      "stabilityDays": 7,
      // "prCreation": "status-success",
      "prCreation": "not-pending"
    },
    {
      "description": "Python security updates bypass cooldown",
      "matchManagers": ["pip_requirements", "pip_setup", "poetry"],
      "matchUpdateTypes": ["security"],
      "stabilityDays": 0
    }
  ]
}
Notes:
- See the npm section above for a detailed explanation of stabilityDays vs minimumReleaseAge. The same principles apply: use stabilityDays to avoid deadlocking frequently updated Python packages.
- Removing "matchManagers" on either rule makes it global across ecosystems. That's fine if you want uniform behavior everywhere; this configuration keeps the ecosystem-scoped approach.
- The second rule only overrides the cooldown for updates Renovate classifies as "security".
References:
Multi-Factor Authentication and Token Hygiene
Enable 2FA on PyPI:
# Via PyPI web interface: https://pypi.org/manage/account/
# Enable 2FA in Account Settings
# Generate API token (scoped)
# Navigate to: Account Settings > API tokens > Add API token
# Configure pip to use token
cat > ~/.pypirc << EOF
[pypi]
username = __token__
password = pypi-AgEIcHlwaS5vcmc...
EOF
# Secure the file
chmod 600 ~/.pypirc
Token management best practices:
# Create scoped tokens for different purposes
# - Upload-only token for CI/CD
# - Read-only token for private package index
# Store tokens as environment variables
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=pypi-AgEIcHlwaS5vcmc...
# Use keyring for secure storage (recommended)
pip install keyring
keyring set https://upload.pypi.org/legacy/ __token__
CI/CD with GitHub Actions:
# .github/workflows/publish.yml
name: Publish to PyPI
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release
    permissions:
      id-token: write  # For trusted publishing
      contents: read
    
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Build package
        run: |
          pip install build
          python -m build
      - name: Publish to PyPI (trusted publishing)
        uses: pypa/gh-action-pypi-publish@release/v1
        # No credentials needed with trusted publishing!
Trusted Publishers (one-time on PyPI):
PyPI supports OIDC-based trusted publishing from GitHub Actions, removing need for long-lived tokens.
- Go to https://pypi.org/manage/project/YOUR-PROJECT/settings/publishing/
- Add publisher:
  - Choose "GitHub"
  - Repository owner/org: your-org
  - Repository name: your-repo
  - Workflow name: publish.yml
  - Environment name: release (optional but recommended)
This links your PyPI project to your GitHub repository via OIDC.
References:
Defense in Depth (Level 2)
Sandbox Installation
Container-based isolation:
# Dockerfile for isolated Python installation
FROM python:3.11-slim AS installer
# Create non-root user
RUN useradd -m -u 1001 installer
WORKDIR /install
# Copy requirements only
COPY requirements.txt ./
# Switch to non-root user
USER installer
# Install dependencies
RUN pip install --user --no-cache-dir -r requirements.txt
# Production stage
FROM python:3.11-slim
WORKDIR /app
COPY --from=installer /home/installer/.local /home/app/.local
COPY . .
RUN useradd -m app && chown -R app:app /app
USER app
ENV PATH=/home/app/.local/bin:$PATH
CMD ["python", "app.py"]
Docker with network isolation:
# Step 1: Pre-populate cache with network access
docker run --rm \
  -v pip-cache:/root/.cache/pip \
  -v $(pwd):/app \
  -w /app \
  python:3.11-slim \
  pip download -r requirements.txt --dest /root/.cache/pip
# Step 2: Install without network using cached wheels
docker run --rm \
  --network=none \
  --memory=4g \
  --cpus=4 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v pip-cache:/root/.cache/pip \
  -v $(pwd):/app \
  -w /app \
  python:3.11-slim \
  pip install -r requirements.txt --no-index --find-links /root/.cache/pip
Using bubblewrap (local development only):
⚠️ Important limitations:
- Requires user namespace capabilities - NOT available on GitHub Actions, most Docker containers
- Only works on Linux systems with namespace support enabled
- Not suitable for CI/CD pipelines - use Docker-based isolation instead
# Install bubblewrap (Debian/Ubuntu)
sudo apt-get install bubblewrap
# Verify support before attempting
if [ ! -w /proc/self/uid_map ]; then
    echo "ERROR: User namespaces not enabled on this system"
    echo "Use Docker-based isolation instead (see above)"
    exit 1
fi
# Run pip in sandbox
bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --tmpfs /tmp \
  --tmpfs /home \
  --bind "$(pwd)" "$(pwd)" \
  --unshare-all \
  --die-with-parent \
  --setenv HOME /tmp \
  --setenv PYTHONUSERBASE /tmp/python \
  pip install -r requirements.txt --user
Recommended for CI/CD and most production environments:
Use Docker-based isolation which works universally:
# Works on all CI/CD platforms including GitHub Actions
docker run --rm \
  --network=none \
  --memory=4g \
  --cpus=4 \
  --pids-limit=100 \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  -v $(pwd):/app \
  -w /app \
  python:3.11-slim \
  sh -c "pip install -r requirements.txt --user"
Resource limit explanations:
- --memory: Prevents memory exhaustion attacks; pip's wheel builds can use 2-3GB per package
- --cpus: Prevents CPU hogging; compilation of C extensions can max out cores
- --pids-limit: Prevents fork bombs; 100 is sufficient for typical pip operations
- --security-opt=no-new-privileges: Prevents privilege escalation
Python-specific considerations:
- setuptools/wheel compilation: Can spike memory to 2-3GB per package with C extensions
- Large package downloads: torch (~800MB), tensorflow (~500MB) need adequate memory
- Wheel builds: compiling C extensions can use multiple cores and isolated build environments add overhead; pass --no-build-isolation if hitting limits (only when build dependencies are already installed and trusted)
- Virtual environments: Add ~100-200MB overhead for venv/virtualenv
Monitoring and adjustment:
# Monitor during installation
docker stats
# Signs you need more resources:
# - MemoryError or "Killed" messages
# - Pip hangs during compilation
# - Installation takes >2x expected time
# Adjust upward by 50% if consistently hitting limits
Virtual environment isolation (basic):
# Create isolated venv
python -m venv --clear venv
# Activate and install
source venv/bin/activate
pip install -r requirements.txt
# Deactivate
deactivate
References:
Minimize Dependencies
Audit unused packages:
# Find unused imports (requires pipreqs)
pip install pipreqs
pipreqs --print ./src
# Compare with requirements.txt
comm -3 <(pipreqs --print ./src | sort) <(cat requirements.txt | sort)
# Remove unused packages
pip uninstall <package-name>
Analyze dependency tree:
# Install pipdeptree
pip install pipdeptree
# View dependency tree
pipdeptree
# Show the tree for specific package(s) only
pipdeptree --packages <package-name>
# Find circular dependencies
pipdeptree --warn fail
# Show reverse dependencies
pipdeptree --reverse --packages <package-name>
Security auditing:
Python has multiple vulnerability scanners with different trade-offs:
pip-audit (recommended for most users - free/open source):
pip install pip-audit
pip-audit                    # Scan current environment
pip-audit -r requirements.txt # Scan requirements file
pip-audit --fix              # Auto-fix vulnerabilities
- Fully free and open source
- Free for commercial use
- Auto-fix capability built-in
Snyk (commercial with good free tier):
snyk test --file=requirements.txt
- Free tier available for open source
- Excellent IDE integration
- Multi-ecosystem support (not just Python)
pip-audit - clarification note:
pip-audit only checks known vulnerabilities and does not detect malicious code; it is complementary to behavioral scanning.
Evaluate package quality:
# scripts/check_package_info.py
import json
import urllib.request
def get_package_info(package_name):
    url = f"https://pypi.org/pypi/{package_name}/json"
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read())
    
    info = data['info']
    print(f"Package: {info['name']}")
    print(f"Version: {info['version']}")
    print(f"Author: {info['author']}")
    print(f"License: {info['license']}")
    print(f"Home page: {info['home_page']}")
    print(f"Last release: {data['urls'][0]['upload_time']}")
    print(f"Downloads (approx): Check pypistats.org")
    
    # Check declared dependencies (requires_dist may be null for some packages)
    if info.get('requires_dist'):
        print(f"Dependencies: {len(info['requires_dist'])}")
get_package_info('requests')
References:
Pre-Installation Vetting
Automated security scanning:
# OpenSSF Scorecard
scorecard --repo=github.com/<org>/<repo>
# Bandit (Python-specific SAST)
pip install bandit
# For JSON output
bandit -r <package-source-dir> -f json -o bandit-report.json
# For readable output with severity filtering
bandit -r <package-source-dir> -lll  # Only show HIGH severity issues
bandit -r <package-source-dir> -ll   # Show MEDIUM and HIGH severity issues
bandit -r <package-source-dir> -ii   # Only show MEDIUM and HIGH confidence findings
# pip-audit
pip-audit -r requirements.txt
# Snyk
snyk test --file=requirements.txt
Generate SBOM:
# Using CycloneDX
pip install cyclonedx-bom
cyclonedx-py -r requirements.txt -o sbom.json
# Using SPDX
pip install spdx-tools
# (requires manual configuration)
Manual inspection:
# Download without installing
pip download <package-name> --no-deps
# Extract and inspect
tar -xzf <package-name>-<version>.tar.gz
cd <package-name>-<version>
# Check setup.py for suspicious code
cat setup.py
# Check for binary files or obfuscated code
find . -type f -name "*.so"
find . -type f -exec file {} \; | grep -i "executable"
Verify package integrity:
# Verify package integrity
# Option 1: Use --require-hashes during install (recommended)
pip install -r requirements.txt --require-hashes
# Option 2: Manually compute hash to compare with PyPI
sha256sum <package-name>-<version>.whl
# Option 3: Use pip-audit to verify integrity
pip-audit --require-hashes -r requirements.txt
References:
Advanced Hardening (Level 3)
Continuous Monitoring
Runtime monitoring with Falco:
- rule: Suspicious pip Network Activity
  desc: Detect pip/python making outbound connections to public IPs during install
  condition: >
    proc.name in (pip, pip3, python, python3, setup.py) and
    evt.type=connect and
    evt.dir=> and
    fd.l4proto=tcp and
    not fd.rip in (
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16", 
      "127.0.0.0/8"
    )
  output: >
    pip process connecting to public IP during package installation
    (command=%proc.cmdline dst=%fd.rip:%fd.rport sport=%fd.lport
     user=%user.name container=%container.id)
  priority: CRITICAL
  tags: [network, python, pip, supply_chain]
- rule: pip Accessing Sensitive Files
  desc: Detect pip/python accessing SSH keys, cloud credentials, or tokens
  condition: >
    proc.name in (pip, pip3, python, python3, setup.py) and
    evt.type in (open, openat, openat2) and
    evt.is_open_write=false and
    (fd.name glob "/home/*/.ssh/*" or
     fd.name glob "/root/.ssh/*" or
     fd.name glob "/home/*/.aws/*" or
     fd.name glob "/root/.aws/*" or
     fd.name glob "/home/*/.config/gcloud/*" or
     fd.name glob "/root/.config/gcloud/*" or
     fd.name glob "/home/*/.kube/*" or
     fd.name glob "/root/.kube/*" or
     fd.name glob "/home/*/.docker/config.json" or
     fd.name glob "/root/.docker/config.json" or
     fd.name contains "id_rsa" or
     fd.name contains "id_ecdsa" or
     fd.name contains "id_ed25519")
  output: >
    pip process accessing sensitive credential files
    (file=%fd.name command=%proc.cmdline user=%user.name
     parent=%proc.pname container=%container.id)
  priority: CRITICAL
  tags: [filesystem, python, pip, credentials, supply_chain]
- rule: pip Spawning Suspicious Processes
  desc: Detect pip/setup.py spawning shells or network utilities during installation
  condition: >
    spawned_process and
    proc.pname in (pip, pip3, python, python3, setup.py) and
    proc.name in (bash, sh, zsh, curl, wget, nc, netcat, perl, ruby, gcc, cc, make)
  output: >
    pip spawned suspicious subprocess during package installation
    (process=%proc.name cmdline=%proc.cmdline parent=%proc.pname
     user=%user.name container=%container.id)
  priority: WARNING
  tags: [process, python, pip, supply_chain]
# =============================================================================
# Optional: Corporate Environment Adjustments
# =============================================================================
# If using corporate proxies, modify network rules to allow proxy IPs:
#
# - macro: corporate_proxy
#   condition: fd.rip in ("10.0.1.100", "10.0.1.101")
#
# Then update network rules:
#   condition: >
#     ... and
#     not corporate_proxy
#
# =============================================================================
# Notes for Production Use
# =============================================================================
# 1. Test rules in audit mode first (priority: NOTICE instead of WARNING/CRITICAL)
# 2. Tune file paths based on your actual user home directories
# 3. Add exceptions for legitimate build tools if needed
# 4. For CI/CD: Add container.id or k8s pod filters to reduce noise
# 5. Consider using fd.rip instead of fd.sip for destination IPs
Falco network rules use IP addresses, not DNS names. For corporate environments with proxies, adjust rules to allow your specific proxy IPs.
Detection approaches:
- Public IP detection (recommended for isolated builds): Alert on any non-RFC1918 connections
- Allowlist approach (for environments with known registries): List allowed destination IPs
- Proxy-aware (for corporate environments): Permit connections through corporate proxies only
Monitor setup.py execution:
# Monitor pip install process
strace -f -e trace=open,openat,connect \
  pip install <package-name> 2>&1 | grep -E '\.ssh|\.aws|\.config'
# Monitor network during install
sudo tcpdump -i any -n 'not host pypi.org and not host files.pythonhosted.org' &
pip install <package-name>
sudo pkill tcpdump
Dependency diff:
# Compare requirements files
diff requirements.txt.old requirements.txt
# Using pip-diff (custom script)
pip freeze > current.txt
# After changes
pip freeze > new.txt
diff current.txt new.txt
Rollback procedures:
# Rollback using git
git show HEAD~1:requirements.txt > requirements.txt
pip install -r requirements.txt --force-reinstall
# Keep backups
cp requirements.txt requirements.txt.backup.$(date +%Y%m%d)
References:
7.3. Java / Maven & Gradle Ecosystem
Implementation Guidance
Quick Wins (Level 1)
Disable Lifecycle Scripts
Maven - Restrict dangerous plugins:
<!-- pom.xml -->
<build>
  <pluginManagement>
    <plugins>
      <!-- Enforce explicit plugin versions everywhere -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-enforcer-plugin</artifactId>
        <version>3.4.1</version>
        <executions>
          <execution>
            <id>enforce-plugin-versions</id>
            <goals><goal>enforce</goal></goals>
            <configuration>
              <rules>
                <requirePluginVersions>
                  <banLatest>true</banLatest>
                  <banRelease>true</banRelease>
                </requirePluginVersions>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <!-- Corporate: define only approved plugins here (whitelist). 
           Omit exec-maven-plugin and maven-antrun-plugin to discourage use. -->
    </plugins>
  </pluginManagement>
</build>
Warning: Banning exec-maven-plugin may break builds that legitimately need to execute code (e.g., code generation, native compilation). Maintain an allowlist of approved use cases.
Gradle - Disable exec tasks:
// build.gradle
import org.gradle.api.tasks.Exec
import org.gradle.api.tasks.JavaExec
def allowed = ['buildNative', 'compileProto'] as Set
tasks.withType(Exec).configureEach {
    enabled = name in allowed
}
tasks.withType(JavaExec).configureEach {
    enabled = name in allowed
}
Gradle Kotlin DSL:
// build.gradle.kts
import org.gradle.api.tasks.Exec
import org.gradle.api.tasks.JavaExec
val allowed = setOf("buildNative", "compileProto")
tasks.withType<Exec>().configureEach {
    enabled = name in allowed
}
tasks.withType<JavaExec>().configureEach {
    enabled = name in allowed
}
Note: Scope this to CI 'verify' jobs first; it will break builds that legitimately spawn tools.
Maven settings.xml - Block repositories with executable plugins:
<!-- ~/.m2/settings.xml -->
<settings>
  <mirrors>
    <mirror>
      <id>corporate-maven</id>
      <mirrorOf>*</mirrorOf>
      <url>https://maven.company.com/repository</url>
    </mirror>
  </mirrors>
  <!-- Tip: control dangerous plugins via repository manager policy 
       (block/strip artifacts) and require plugin versions via Enforcer. -->
</settings>
References:
Lock Dependencies
Maven - Control direct AND transitive dependencies:
Maven's transitive dependency resolution can introduce vulnerable versions even when your direct dependencies are pinned. Use dependencyManagement to control transitive versions.
<!-- pom.xml -->
<dependencies>
  <!-- Direct dependencies with exact versions -->
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.1.5</version>
  </dependency>
  
  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.15.3</version>
  </dependency>
  
  <!-- Bad - version ranges (avoid these) -->
  <!-- <version>[1.0,2.0)</version> -->
  <!-- <version>LATEST</version> -->
  <!-- <version>RELEASE</version> -->
</dependencies>
<!-- Lock transitive dependency versions -->
<dependencyManagement>
  <dependencies>
    <!-- Override transitive versions explicitly -->
    <!-- Example: spring-boot-starter-web pulls in jackson-core -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.15.3</version>
    </dependency>
    
    <!-- Example: Control commons-codec version across all dependencies -->
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.16.0</version>
    </dependency>
    
    <!-- Example: Override vulnerable transitive dependency -->
    <dependency>
      <groupId>org.yaml</groupId>
      <artifactId>snakeyaml</artifactId>
      <version>2.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
How this works:
- <dependencies> section: your direct dependencies with exact versions
- <dependencyManagement> section: overrides versions for transitive dependencies
- Maven will use the version specified in dependencyManagement even if a direct dependency requests a different version
Identifying which transitives to lock:
# View full dependency tree including transitives
mvn dependency:tree
# Find which version of a transitive is actually resolved
mvn dependency:tree -Dverbose -Dincludes=commons-codec:commons-codec
# Example output shows transitive path:
# [INFO] com.example:myapp:jar:1.0.0
# [INFO] \- org.springframework.boot:spring-boot-starter-web:jar:3.1.5
# [INFO]    \- commons-codec:commons-codec:jar:1.15 (version from transitive)
Maven - Dependency snapshot for verification (Maven 3.8+):
Critical Limitation: Maven does NOT have an enforced lockfile mechanism
like npm/pip/go.
The dependency:list command creates a snapshot for MANUAL COMPARISON ONLY.
Maven WILL NOT fail builds if versions change.
This means:
- Two developers can build "same" code with different dependency versions
- Production builds may differ from testing
- No guarantee of reproducible builds
Workarounds (choose at least 2):
- Switch to Gradle which has real locking
- Use corporate repository manager to freeze versions
- Implement CI check that fails on dependency:tree diff
- Use exact versions + Maven Enforcer + dependencyManagement
The following creates a text snapshot for manual verification:
# Generate dependency snapshot (for audit/comparison purposes)
mvn dependency:list -DoutputFile=dependencies.lock
# Verify dependencies haven't changed (manual check)
mvn dependency:list -DoutputFile=dependencies-check.lock
diff dependencies.lock dependencies-check.lock
# This is a verification tool, not an enforcement mechanism
# Actual version locking happens via dependencyManagement in pom.xml
Limitation: This snapshot is not automatically enforced by Maven. Teams must rely on dependencyManagement for actual version control and use this snapshot for audit trails and change detection.
CI enforcement example (implementing workaround #3):
# .github/workflows/verify-dependencies.yml
name: Verify Dependency Tree
on: [pull_request]
jobs:
  check-dependencies:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Need history to compare
      
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
      
      - name: Generate dependency tree for current branch
        run: |
          mvn dependency:tree -DoutputFile=dependency-tree-current.txt -DoutputType=text
      
      - name: Generate dependency tree for base branch
        run: |
          git checkout ${{ github.base_ref }}
          mvn dependency:tree -DoutputFile=dependency-tree-base.txt -DoutputType=text
          git checkout -
      
      - name: Compare dependency trees
        run: |
          if ! diff -u dependency-tree-base.txt dependency-tree-current.txt > dependency-diff.txt; then
            echo "::error::Dependency tree has changed!"
            echo ""
            echo "Dependency changes detected:"
            cat dependency-diff.txt
            echo ""
            echo "If these changes are intentional, document them in the PR description."
            echo "Otherwise, check for version drift or unexpected dependency resolution changes."
            exit 1
          fi
          echo "✓ Dependency tree unchanged"
      
      - name: Upload diff on failure
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: dependency-diff
          path: dependency-diff.txt
This CI check will fail if dependency resolution changes between base branch and PR,
catching version drift that Maven's lack of lockfile would otherwise allow silently.
Gradle - Use dependency locking:
// build.gradle
dependencyLocking {
    lockAllConfigurations()
}
// Generate lock files
// Run: ./gradlew dependencies --write-locks
// Commit gradle.lockfile to version control
Gradle Kotlin DSL:
// build.gradle.kts
dependencyLocking {
    lockAllConfigurations()
}
// Strict mode - fail on missing locks
configurations.all {
    resolutionStrategy.activateDependencyLocking()
}
Verify in CI/CD:
# GitHub Actions
name: Verify Dependencies
on: [pull_request]
jobs:
  verify-maven:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
      - name: Verify no SNAPSHOT or dynamic versions
        shell: bash
        run: |
          set -euo pipefail
          
          echo "==> Checking for dynamic/SNAPSHOT versions in POMs..."
          
          # Phase 1: Quick check of POM XML directly (fast-fail for obvious issues)
          mapfile -t POMS < <(git ls-files '**/pom.xml')
          bad=false
          
          for pom in "${POMS[@]}"; do
            # Remove XML comments to avoid false positives
            cleaned="$(sed -e 's/<!--[^-]*-[^-]*->//g' "$pom")"
            
            if grep -Eq '<version>.*(LATEST|RELEASE).*</version>' <<< "$cleaned" \
              || grep -Eq '<version>\s*[\(\[].*[\)\]]\s*</version>' <<< "$cleaned" \
              || grep -Eq '<version>.*SNAPSHOT.*</version>' <<< "$cleaned"
            then
              echo "::error file=$pom::Dynamic or SNAPSHOT versions detected in POM XML"
              bad=true
            fi
          done
          
          if [[ "$bad" == true ]]; then
            echo "::error::Direct SNAPSHOT/dynamic versions found in POM files"
            echo "Use exact, non-SNAPSHOT versions only."
            exit 1
          fi
          
          echo "✓ Direct POM check passed"
          
          # Phase 2: Check effective POM (catches property-based versions)
          echo "==> Generating effective POM to check resolved versions..."
          
          # Generate effective POM (resolves all properties)
          if ! mvn help:effective-pom -Doutput=target/effective-pom.xml -B -q; then
            echo "::warning::Could not generate effective POM for deep validation"
            echo "Proceeding with direct POM validation only"
            exit 0
          fi
          
          # Check effective POM for SNAPSHOTs
          if grep -Eq '<version>.*SNAPSHOT.*</version>' target/effective-pom.xml; then
            echo "::error::SNAPSHOT versions detected in effective POM (may be from properties)"
            echo ""
            echo "SNAPSHOT versions found:"
            grep -E '<version>.*SNAPSHOT.*</version>' target/effective-pom.xml | head -10
            echo ""
            echo "This indicates SNAPSHOT versions defined via properties or parent POMs."
            echo "Run locally: mvn help:effective-pom | grep SNAPSHOT"
            exit 1
          fi
          
          # Check for dynamic version ranges in effective POM
          if grep -Eq '<version>\s*[\(\[].*[\)\]]\s*</version>' target/effective-pom.xml; then
            echo "::error::Dynamic version ranges detected in effective POM"
            echo ""
            echo "Dynamic ranges found:"
            grep -E '<version>\s*[\(\[].*[\)\]]\s*</version>' target/effective-pom.xml | head -10
            echo ""
            echo "Use exact versions instead of ranges like [1.0,2.0) or (,1.0]"
            exit 1
          fi
          
          # Check for LATEST/RELEASE in effective POM
          if grep -Eq '<version>.*(LATEST|RELEASE).*</version>' target/effective-pom.xml; then
            echo "::error::LATEST or RELEASE keywords detected in effective POM"
            echo ""
            echo "Dynamic keywords found:"
            grep -E '<version>.*(LATEST|RELEASE).*</version>' target/effective-pom.xml | head -10
            echo ""
            echo "Use exact version numbers instead of LATEST or RELEASE"
            exit 1
          fi
          
          echo "✓ Effective POM check passed"
          echo "✓ All dependency versions are pinned correctly"
      - name: Forbid risky Maven plugins via POM scan
        shell: bash
        run: |
          set -euo pipefail
          mapfile -t POMS < <(git ls-files '**/pom.xml')
          banned='org.codehaus.mojo:exec-maven-plugin|org.apache.maven.plugins:maven-antrun-plugin'
          hit=false
          for pom in "${POMS[@]}"; do
            cleaned="$(sed -e 's/<!--[^-]*-[^-]*->//g' "$pom")"
            if grep -Eq "$banned" <<< "$cleaned"; then
              echo "::error file=$pom::Forbidden Maven plugin referenced (exec-maven-plugin or maven-antrun-plugin)"
              hit=true
            fi
          done
          if [[ "$hit" == true ]]; then exit 1; fi
      - name: Verify dependencies (structure/conflicts)
        run: mvn -B dependency:tree -Dverbose -DfailOnWarning
  verify-gradle:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
      - name: Enforce checksum verification against committed metadata
        run: ./gradlew --dependency-verification=strict dependencies
      - name: Ensure verification metadata committed
        run: |
          if ! git ls-files --error-unmatch gradle/verification-metadata.xml >/dev/null 2>&1; then
            echo "::error::gradle/verification-metadata.xml is not committed"
            exit 1
          fi
      - name: Check if verification metadata is current
        run: |
          ./gradlew --write-verification-metadata sha256 dependencies
          if ! git diff --exit-code gradle/verification-metadata.xml; then
            echo "::error::gradle/verification-metadata.xml is out of date"
            echo "Run './gradlew --write-verification-metadata sha256 dependencies' locally and commit changes"
            exit 1
          fi
      - name: Ensure Gradle lockfile is committed
        run: |
          if ! git ls-files --error-unmatch gradle.lockfile >/dev/null 2>&1; then
            echo "::error::gradle.lockfile is not committed to git"
            echo "Run './gradlew dependencies --write-locks' and commit the lockfile"
            exit 1
          fi
Maven version validation note: This check uses two-phase validation:
- Direct POM check: Fast regex scan of POM XML (catches 90% of issues)
- Effective POM check: Resolves all properties and inheritance (catches hidden SNAPSHOTs)
 The effective POM check is more thorough but requires Maven execution. If Maven is not available or effective-pom generation fails, the workflow continues with only direct validation.
For multi-module projects: This check runs on all pom.xml files in the repository. Ensure your build doesn't rely on parent POMs outside the repository, or add them to the workspace first.
Gradle locking vs verification
- Locking: dependencyLocking { lockAllConfigurations() } and ./gradlew --write-locks produce gradle.lockfile, which pins versions.
- Verification: ./gradlew --write-verification-metadata sha256 help creates gradle/verification-metadata.xml to pin checksums.
- CI verify jobs should run normal tasks without --update-locks; resolution must match the committed lockfile or fail.
Gradle verification quick-start
- Generate verification metadata once (review and commit it):
./gradlew --write-verification-metadata sha256 help
- Enforce verification and fail on drift:
./gradlew --dependency-verification=strict dependencies
- When dependencies legitimately change (e.g., version bumps), regenerate and review the diff:
./gradlew --write-verification-metadata sha256 dependencies
git diff gradle/verification-metadata.xml
Maven equivalents
- Pin versions in <dependencyManagement>; avoid ranges (e.g., [1.0,2.0)), LATEST, and RELEASE.
- Enforce with Maven Enforcer:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <requireUpperBoundDeps/>
          <banDuplicatePomDependencyVersions/>
          <requireReleaseDeps/>
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
  </executions>
</plugin>
Prefer reproducible builds: configure checksums and repository mirrors with strict checksumPolicy=fail in your corporate Nexus/Artifactory proxy.
References:
Version Cooldown
Prefer update bots with stability windows rather than ad-hoc proxy scripts:
Renovate configuration for Java:
// renovate.json
{
  "extends": ["config:base"],
  "dependencyDashboard": true,
  "packageRules": [
    {
      "description": "Java default cooldown and PR gating",
      "matchManagers": ["maven", "gradle"],
      "stabilityDays": 7,
      // choose ONE:
      // "prCreation": "status-success",
      "prCreation": "not-pending"
    },
    {
      "description": "Java security updates bypass cooldown",
      "matchManagers": ["maven", "gradle"],
      "matchUpdateTypes": ["security"],
      "stabilityDays": 0
    }
  ]
}
Notes:
- See the npm section above for stabilityDays vs minimumReleaseAge. Same idea here: use stabilityDays to avoid churn from frequently updated Java libraries.
- Removing "matchManagers" on either rule makes it global across ecosystems. That’s fine if you want uniform behavior everywhere.
References:
Multi-Factor Authentication and Token Hygiene
Maven Central (Sonatype OSSRH):
<!-- ~/.m2/settings.xml -->
<settings>
  <servers>
    <server>
      <id>ossrh</id>
      <username>${env.OSSRH_USERNAME}</username>
      <password>${env.OSSRH_TOKEN}</password>
    </server>
  </servers>
</settings>
Enable 2FA on Sonatype:
- Navigate to https://oss.sonatype.org/
- Profile > User Token > Access User Token
- Enable 2FA in account settings
GPG signing for artifacts (strongly recommended):
<!-- pom.xml -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-gpg-plugin</artifactId>
      <version>3.1.0</version>
      <executions>
        <execution>
          <id>sign-artifacts</id>
          <phase>verify</phase>
          <goals>
            <goal>sign</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Generate and configure GPG key:
# Generate GPG key
gpg --gen-key
# List keys
gpg --list-secret-keys --keyid-format LONG
# Export public key to keyserver
gpg --keyserver keyserver.ubuntu.com --send-keys <KEY_ID>
# Configure Maven to use key
cat >> ~/.m2/settings.xml << EOF
<settings>
  <profiles>
    <profile>
      <id>gpg</id>
      <properties>
        <gpg.executable>gpg</gpg.executable>
        <gpg.keyname><KEY_ID></gpg.keyname>
        <gpg.passphrase>\${env.GPG_PASSPHRASE}</gpg.passphrase>
      </properties>
    </profile>
  </profiles>
</settings>
EOF
GitHub Actions with secrets:
# .github/workflows/publish.yml
name: Publish to Maven Central
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
          server-id: ossrh
          server-username: MAVEN_USERNAME
          server-password: MAVEN_PASSWORD
          gpg-private-key: ${{ secrets.GPG_PRIVATE_KEY }}
          gpg-passphrase: MAVEN_GPG_PASSPHRASE
      
      - name: Publish package
        run: mvn --batch-mode deploy
        env:
          MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
          MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
          MAVEN_GPG_PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}
Gradle signing configuration:
// build.gradle
plugins {
  id 'maven-publish'
  id 'signing'
}
publishing {
  publications {
    mavenJava(MavenPublication) {
      from components.java
    }
  }
}
signing {
  useGpgCmd() // or useInMemoryPgpKeys(signingKey, signingPassword)
  sign publishing.publications.mavenJava
}
References:
Defense in Depth (Level 2)
Sandbox Installation
Container-based Maven builds:
# Dockerfile for isolated Maven build
FROM maven:3.9-eclipse-temurin-17 AS builder
# Create non-root user
RUN groupadd -r builder && useradd -r -g builder builder
WORKDIR /build
# Copy dependency files
COPY pom.xml ./
COPY src ./src
# Switch to non-root
USER builder
# Build with dependencies
RUN mvn clean package -DskipTests
# Production image
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=builder /build/target/*.jar app.jar
RUN addgroup -S app && adduser -S app -G app
USER app
ENTRYPOINT ["java", "-jar", "app.jar"]
Gradle with Docker:
FROM gradle:8.5-jdk17 AS builder
WORKDIR /build
COPY . .
USER gradle
RUN gradle build --no-daemon
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=builder /build/build/libs/*.jar app.jar
RUN addgroup -S app && adduser -S app -G app
USER app
CMD ["java", "-jar", "app.jar"]
Network isolation during build:
# Step 1: Download all dependencies with network access
mvn dependency:go-offline
# Step 2: Build without network
docker run --rm \
  --network=none \
  --memory=3g \
  --cpus=4 \
  --pids-limit=200 \
  --security-opt=no-new-privileges \
  -v maven-repo:/root/.m2 \
  -v $(pwd):/build \
  -w /build \
  maven:3.9-eclipse-temurin-17 \
  mvn clean package --offline
Resource limit explanations:
- --memory: JVM typically needs 75% of container memory for heap; remaining for native memory
- --cpus: Maven/Gradle can spawn multiple compiler processes; more cores = faster builds
- --pids-limit=200: Java creates more processes than other ecosystems; higher limit needed
- --security-opt=no-new-privileges: Prevents privilege escalation
Java-specific considerations:
- JVM heap tuning: If container has 4GB, set max heap to 3GB using -e MAVEN_OPTS="-Xmx3g"
- Gradle daemon: Consumes ~500MB-1GB baseline memory before build starts
- Multi-module projects: Each module compilation spawns processes; can hit PID limits
- Parallel builds: Gradle's --parallel and Maven's -T 4 multiply resource needs
Example with JVM tuning:
# Explicit heap configuration for 4GB container
docker run --rm \
  --network=none \
  --memory=4g \
  --cpus=4 \
  --pids-limit=200 \
  -e MAVEN_OPTS="-Xmx3g -Xms1g" \
  -v maven-repo:/root/.m2 \
  -v $(pwd):/build \
  -w /build \
  maven:3.9-eclipse-temurin-17 \
  mvn clean package --offline
# Gradle with parallel execution
docker run --rm \
  --network=none \
  --memory=4g \
  --cpus=6 \
  --pids-limit=200 \
  -e GRADLE_OPTS="-Xmx3g" \
  -v gradle-cache:/root/.gradle \
  -v $(pwd):/build \
  -w /build \
  gradle:8-jdk17 \
  gradle build --offline --parallel
Monitoring:
docker stats
# If you see:
# - MEM USAGE near limit: Increase --memory by 50%
# - CPU % consistently at limit: Increase --cpus
# - Build fails with "Cannot fork": Increase --pids-limit
References:
Minimize Dependencies
Maven dependency analysis:
# Analyze used vs declared dependencies
mvn dependency:analyze
# Full dependency tree
mvn dependency:tree
# Find duplicate dependencies
mvn dependency:tree -Dverbose
# Unused declared dependencies
mvn dependency:analyze -DignoreNonCompile=true
# Used but undeclared dependencies
mvn dependency:analyze -DfailOnWarning=true
Gradle dependency analysis:
# View dependency tree
./gradlew dependencies
# View specific configuration
./gradlew dependencies --configuration runtimeClasspath
# Find insight into dependency resolution
./gradlew dependencyInsight --dependency <dependency-name>
# Use dependency analysis plugin
./gradlew buildHealth
Gradle dependency analysis plugin:
// build.gradle
plugins {
    id 'com.autonomousapps.dependency-analysis' version '1.28.0'
}
dependencyAnalysis {
    issues {
        all {
            onAny {
                severity('fail')
            }
        }
    }
}
Maven versions plugin:
# Check for dependency updates
mvn versions:display-dependency-updates
# Check for plugin updates
mvn versions:display-plugin-updates
# Update dependencies to latest versions (carefully!)
mvn versions:use-latest-versions
Security auditing:
# OWASP Dependency Check (Maven)
mvn org.owasp:dependency-check-maven:check
# OWASP Dependency Check (Gradle)
./gradlew dependencyCheckAnalyze
# Snyk
snyk test --file=pom.xml
snyk test --file=build.gradle
References:
Pre-Installation Vetting
Automated scanning:
# OWASP Dependency Check
mvn org.owasp:dependency-check-maven:check -DfailOnCVSS=7
# Snyk scanning
snyk test --file=pom.xml --severity-threshold=high
# OpenSSF Scorecard (for dependencies' repos)
scorecard --repo=github.com/<org>/<repo>
Maven enforcer rules:
<!-- pom.xml -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <version>3.4.1</version>
      <executions>
        <execution>
          <id>enforce-rules</id>
          <goals>
            <goal>enforce</goal>
          </goals>
          <configuration>
            <rules>
              <!-- Ban specific dependencies -->
              <bannedDependencies>
                <excludes>
                  <exclude>commons-logging:commons-logging</exclude>
                  <exclude>log4j:log4j:*:*:compile</exclude>
                </excludes>
                <message>Use slf4j instead</message>
              </bannedDependencies>
              
              <!-- Require specific versions -->
              <requireReleaseDeps>
                <message>No SNAPSHOT dependencies allowed</message>
                <onlyWhenRelease>true</onlyWhenRelease>
              </requireReleaseDeps>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Generate SBOM:
# CycloneDX Maven plugin
mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom
# CycloneDX Gradle plugin
./gradlew cyclonedxBom
# SPDX
mvn org.spdx:spdx-maven-plugin:createSPDX
Manual verification:
# Enforce checksum verification during resolution (fail on mismatch)
mvn -C dependency:get -Dartifact=<groupId>:<artifactId>:<version>
# If a detached signature exists, verify the JAR (not the POM)
# Assumes <artifactId>-<version>.jar and matching .asc are in current dir
gpg --verify <artifactId>-<version>.jar.asc <artifactId>-<version>.jar
References:
Advanced Hardening (Level 3)
Continuous Monitoring
Maven wrapper verification:
# Verify Maven wrapper hasn't been tampered with
mvn wrapper:wrapper -Dmaven=3.9.6
git diff mvnw mvnw.cmd .mvn/wrapper/maven-wrapper.properties
Dependency update monitoring:
# Regular audit schedule
mvn versions:display-dependency-updates > dependency-updates.txt
git diff HEAD~30 dependency-updates.txt  # Compare with the version 30 commits ago
Runtime hardening (modern Java 17/21):
- Disable debug/management interfaces in prod:
  - Do not set -agentlib:jdwp at all
  - JVM arg: -Dcom.sun.management.jmxremote=false
  - JVM arg: -XX:+DisableAttachMechanism
- Use container/AppArmor/SELinux profiles to sandbox the JVM process.
- Prefer the Java module system to limit deep reflection on internal packages.
- If using native-image, enable hardened linkers and strip symbols.
Rollback procedures:
# Maven - revert pom.xml
git show HEAD~1:pom.xml > pom.xml
mvn clean install
# Gradle - revert build files and lockfile
git show HEAD~1:build.gradle > build.gradle
git show HEAD~1:gradle.lockfile > gradle.lockfile
./gradlew clean build
References:
7.4. Go Modules Ecosystem
Implementation Guidance
Quick Wins (Level 1)
Disable Lifecycle Scripts
Not Applicable: Go modules do not support install-time scripts. All code execution happens at compile time, which is more visible and controllable. This is a significant security advantage of the Go ecosystem.
Why Go is safer:
- No postinstall hooks or similar mechanisms
- Dependencies are source code only (no install-time scripts). Note: CGO can link system libraries, and go generate may run tools if you invoke it.
- Compilation is explicit and visible
- Determinism is achievable when you lock deps (go.sum or vendor), pin the toolchain, set GOWORK=off, and disable CGO; otherwise env, workspaces, and system libs can introduce variance
Best practice: Verify this remains true by auditing your dependencies:
# Ensure required modules are present locally, then scan (requires: jq)
go mod download
GOWORK=off go list -m -json all \
| jq -r 'select(.Dir != null and .Dir != "") | .Dir' \
| while read -r dir; do
  find "$dir" -maxdepth 2 -type f \( -name "*.sh" -o -name "Makefile" -o -name "*.py" \) -print
done
# Verify no CGO usage anywhere in the dependency graph (module + transitives)
# Works in Linux/macOS; requires a populated module cache
GOWORK=off go list -deps -f '{{if .CgoFiles}}{{.ImportPath}}{{end}}' ./... | sort -u | sed '/^$/d'
Treat go generate as untrusted code execution:
While Go has no install-time scripts, go generate can execute arbitrary commands specified in source via //go:generate directives. Best practices:
- Do not run go generate automatically in CI for third-party code.
- If generation is required, restrict it to vetted, internal packages and run inside a locked-down container.
- Review //go:generate lines in PRs: git grep -n '^//go:generate' -- ':!vendor/*'
- List declared generators without executing them: go list -f '{{.Dir}}' ./... | while read -r d; do sed -n 's|^//go:generate |GO_GENERATE: |p' "$d"/*.go 2>/dev/null; done
References:
Lock Dependencies
go.mod with exact versions:
# go.mod specifies versions
# go.sum provides cryptographic checksums
# Initialize module
go mod init example.com/myproject
# Add dependency with specific version
go get github.com/gin-gonic/gin@v1.9.1
# Tidy dependencies (remove unused, add missing)
go mod tidy -v
# Verify dependencies match go.sum
go mod verify
# Download all dependencies
go mod download
go.mod with exact versions:
// go.mod
module example.com/myproject
go 1.21
require (
    github.com/gin-gonic/gin v1.9.1
    github.com/lib/pq v1.10.9
    go.uber.org/zap v1.26.0
)
// Avoid pseudo-versions or branches:
// github.com/some/package v0.0.0-20231015123456-abcdef123456  // ✗ Pseudo-version
// github.com/some/package master  // ✗ Branch reference
go.sum locks ALL modules (direct + transitive):
Go's go.sum file contains cryptographic checksums for your entire module graph, not just direct dependencies.
# go.sum contains checksums for ALL modules you depend on
# Example entries:
# Direct dependency
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
# Transitive dependencies (pulled in by gin)
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/go-playground/validator/v10 v10.14.0 h1:vgvQWe3XCz3gIeFDm/HnTIbj6UGmg/+t63MyGU2n5js=
github.com/go-playground/validator/v10 v10.14.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
# Each module has TWO checksums:
# - h1:... = checksum of the module's zip file
# - go.mod h1:... = checksum of the module's go.mod file
Understanding go.sum security guarantees and limitations:
What go.sum DOES protect against:
- Registry tampering (module content changed after publication)
- Man-in-the-middle attacks during download
- Corrupted or incomplete downloads
- Silent updates to dependency versions
- Different versions served to different users (via sum.golang.org transparency log)
What go.sum does NOT protect against:
- Compromised maintainer accounts publishing malicious code
- Legitimate maintainers intentionally publishing malicious versions
- Initial installation of a malicious package (go.sum locks whatever version you first download)
Critical limitation: go.sum verifies content integrity (you get the exact code that was published) but NOT authorization (whether the publisher was legitimate). If an attacker compromises a maintainer account and publishes malicious v1.2.3, go.sum will faithfully lock to that malicious version and verify its integrity on every subsequent build.
Defense in depth: While go.sum cannot prevent compromised-maintainer attacks, it does provide important protections:
- sum.golang.org transparency log makes targeted attacks harder
- Immutable checksums prevent post-publication tampering
- Reproducible builds ensure you get the same code as your teammates
- Combine with other controls: version cooldown, pre-installation vetting, monitoring
Why this matters: Even though gin-gonic/gin doesn't pin its own dependencies in its go.mod, YOUR go.sum ensures you always get the exact versions of ALL transitive dependencies that you tested. If gin's dependencies change, go mod verify will detect the mismatch.
Verify all checksums (including transitives):
# Verify ALL modules match go.sum (fails if anything doesn't match)
go mod verify
# Output on success:
# all modules verified
# Output on failure:
# github.com/some/package v1.2.3: checksum mismatch
#   downloaded: h1:abc...
#   go.sum:     h1:xyz...
If checksums don't match, go will refuse to build:
go build  # Automatically verifies before building
# Example error if go.sum is tampered with:
# verifying github.com/gin-gonic/gin@v1.9.1: checksum mismatch
#   downloaded: h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
#   go.sum:     h1:WRONGCHECKSUMabcdef123456789
How Go protects against transitive attacks:
// Your go.mod
module example.com/myapp
require github.com/gin-gonic/gin v1.9.1
// gin's go.mod (in their repository)
module github.com/gin-gonic/gin
require (
    github.com/gin-contrib/sse v0.1.0     // They declare minimum compatible versions (MVS), not exact locks
    github.com/go-playground/validator/v10 v10.14.0
)
// Your go.sum locks the EXACT versions resolved:
github.com/gin-contrib/sse v0.1.0 h1:...  // Locked to 0.1.0, even though gin allows >=0.1.0
Result: Even if a malicious v0.1.1 of gin-contrib/sse is published, your builds use v0.1.0 because that's what's in go.sum.
Vendor dependencies (optional, for extra control):
# Copy all dependencies to vendor/ directory
go mod vendor
# Build using vendor directory (never touches network)
go build -mod=vendor
# Commit vendor/ to version control for guaranteed reproducibility
git add vendor/
git commit -m "Vendor dependencies"
CI/CD enforcement:
# GitHub Actions
name: Verify Dependencies
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.21'
      - name: Set deterministic module env
        run: |
          echo "GOPROXY=https://proxy.golang.org,direct" >> $GITHUB_ENV
          echo "GOSUMDB=sum.golang.org" >> $GITHUB_ENV
          echo "GOWORK=off" >> $GITHUB_ENV   # avoid accidental go.work influence in monorepos
      - name: Verify go.mod and go.sum are tidy
        run: |
          go mod tidy
          git diff --exit-code go.mod go.sum
      - name: Verify checksums
        run: go mod verify
      - name: Enforce read-only module mode during build
        run: GOFLAGS="-mod=readonly" go build ./...
      - name: Run tests with read-only module mode
        run: GOFLAGS="-mod=readonly" go test ./...
      - name: Check for pseudo-versions (all forms, direct + indirect)
        run: |
          # Detect any version matching Go pseudo-version patterns:
          #  - vX.0.0-yyyymmddhhmmss-abcdef123456
          #  - vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdef123456
          # See: https://go.dev/ref/mod#pseudo-versions
          GOWORK=off go list -m -json all \
          | jq -r '.Version // empty' \
          | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+-([0-9]|[a-z][a-z0-9]*\.[0-9]+)\.[0-9]{14}-[0-9a-f]{12}$|^v[0-9]+\.0\.0-[0-9]{14}-[0-9a-f]{12}$' \
          && { echo "::error::Pseudo-versions detected, use tagged releases"; exit 1; } || true
      - name: Forbid unsafe replace directives (local paths)
        run: |
          # Disallow local filesystem replaces unless explicitly whitelisted.
          # Allowed example: internal forks mirrored under your VCS domain.
          if grep -E '^\s*replace\s' go.mod; then
            # Matches "=> ./", "=> ../", absolute paths, or tilde
            local_replaces=$(grep -E '^\s*replace\s' go.mod | grep -E '=>\s*(\./|\.\./|/|~)') || true
            if [ -n "$local_replaces" ]; then
              echo "::error::Local path replace directives detected:"
              echo "$local_replaces"
              echo "If intentional, gate them behind a CI allowlist."
              exit 1
            fi
          fi
        
      - name: Forbid replace directives to unapproved hosts
        run: |
          # Extract "replace A => B" and check the right-hand side for unapproved hosts.
          # Allowlist your org TLDs and common VCS hosts you trust.
          approved='(github\.com|gitlab\.com|bitbucket\.org|your-company\.com)'
          if grep -E '^\s*replace\s' go.mod; then
            remote_replaces=$(sed -n 's/^\s*replace\s\+\S\+\s\+=>\s\+\(\S\+\).*$/\1/p' go.mod \
                              | grep -Ev '^(\./|\.\./|/|~)' \
                              | grep -Ev "$approved") || true
            if [ -n "$remote_replaces" ]; then
              echo "::error::Replace to unapproved hosts detected:"
              echo "$remote_replaces"
              exit 1
            fi
          fi
      - name: Ensure go.sum is committed
        run: |
          if ! git ls-files --error-unmatch go.sum > /dev/null 2>&1; then
            echo "::error::go.sum is not committed to git"
            echo "go.sum must be committed to ensure reproducible builds"
            exit 1
          fi
      - name: Install govulncheck
        run: go install golang.org/x/vuln/cmd/govulncheck@latest
      - name: Run govulncheck
        run: govulncheck ./...
References:
Version Cooldown
Using Go module proxy with caching:
# Configure custom module proxy
export GOPROXY=https://proxy.company.com,https://proxy.golang.org,direct
# Or use Athens proxy with custom policies
# Athens can implement time-based filtering
Athens proxy with cooldown (self-hosted):
# athens-config.toml
[storage]
type = "disk"
[download]
mode = "sync"  # Can implement custom filtering here
# Custom filter plugin (pseudocode)
[filter]
min_age_hours = 168  # 7 days
Renovate configuration for Go:
// renovate.json
{
  "extends": ["config:base"],
  "dependencyDashboard": true,
  "packageRules": [
    {
      "description": "Go default cooldown and PR gating",
      "matchManagers": ["gomod"],
      "stabilityDays": 7,
      // "prCreation": "status-success",
      "prCreation": "not-pending"
    },
    {
      "description": "Go security updates bypass cooldown",
      "matchManagers": ["gomod"],
      "matchUpdateTypes": ["security"],
      "stabilityDays": 0
    }
  ]
}
Notes:
- See the npm section above for a detailed explanation of stabilityDays vs minimumReleaseAge. The same principles apply here: use stabilityDays to avoid whiplash from frequently updated Go modules (see the sketch after these notes).
- Removing "matchManagers" on either rule makes it global across ecosystems. That’s fine if you want uniform behavior everywhere.
Custom validation script:
#!/bin/bash
# scripts/check-go-module-age.sh
# NOTE: Requires GNU coreutils (Linux). On macOS, install coreutils or run in CI on Linux.
MODULE=$1
VERSION=$2
MIN_DAYS=${3:-7}
# Dependencies: curl, jq, GNU date
if ! command -v curl >/dev/null || ! command -v jq >/dev/null; then
  echo "ERROR: requires curl and jq"; exit 2
fi
# Resolve proxy-escaped path per https://go.dev/ref/mod#goproxy-protocol
_escape_goproxy_path() {
  local in="$1" out="" ch
  # First, escape '!' as '!!'
  in="${in//!/!!}"
  # Then map A-Z to !a-!z
  for ((i=0; i<${#in}; i++)); do
    ch="${in:$i:1}"
    if [[ "$ch" =~ [A-Z] ]]; then
      out+="!${ch,}"
    else
      out+="$ch"
    fi
  done
  printf '%s' "$out"
}
ESCAPED=$(_escape_goproxy_path "${MODULE}")
INFO=$(curl -sfL "https://proxy.golang.org/${ESCAPED}/@v/${VERSION}.info") || {
  echo "ERROR: failed to fetch version info for ${MODULE}@${VERSION}"; exit 3; }
TIME=$(echo "$INFO" | jq -r '.Time')
[ -n "$TIME" ] || { echo "ERROR: missing release time in proxy response"; exit 4; }
# Calculate age
RELEASE_TS=$(date -d "$TIME" +%s)
NOW_TS=$(date +%s)
AGE_DAYS=$(( ($NOW_TS - $RELEASE_TS) / 86400 ))
if [ $AGE_DAYS -lt $MIN_DAYS ]; then
    echo "ERROR: $MODULE@$VERSION is only $AGE_DAYS days old (minimum: $MIN_DAYS)"
    exit 1
fi
echo "OK: $MODULE@$VERSION is $AGE_DAYS days old"Pre-commit hook:
#!/bin/bash
# .git/hooks/pre-commit
# Check all dependencies for minimum age
# Avoid running the loop in a pipeline subshell, otherwise failures are silently lost
fail=0
while read -r dep; do
  ./scripts/check-go-module-age.sh "${dep%@*}" "${dep##*@}" 7 || fail=1
done < <(GOWORK=off go list -m -json all \
  | jq -r 'select(.Version != null and .Version != "") | .Path + "@" + .Version')
exit $fail
References:
Multi-Factor Authentication and Token Hygiene
Note: Go doesn't have a central package registry like npm or PyPI. Authentication depends on where modules are hosted (GitHub, GitLab, Bitbucket, etc.).
GitHub (most common):
# SSH with properly scoped deploy keys or org SSO
git config --global url."git@github.com:".insteadOf "https://github.com/"
# In CI, use the built-in token via actions/checkout (avoids storing PAT in git config)
# .github/workflows/go-build.yml (excerpt)
# - uses: actions/checkout@v4
#   with:
#     fetch-depth: 0
#     persist-credentials: true   # Uses ephemeral GITHUB_TOKEN
# Enable 2FA on GitHub account
# Navigate to Settings > Password and authentication > Two-factor authentication
Private module configuration:
# Configure private module prefix
go env -w GOPRIVATE=github.com/mycompany/*
# Use .netrc for authentication
cat > ~/.netrc << EOF
machine github.com
login <username>
password <personal_access_token>
EOF
chmod 600 ~/.netrc
GitHub Actions with Go:
# .github/workflows/go-build.yml
name: Build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          persist-credentials: true  # uses ephemeral GITHUB_TOKEN automatically
      - uses: actions/setup-go@v5
        with:
          go-version: '1.21'
      - name: Set deterministic module env
        run: |
          echo "GOPROXY=https://proxy.golang.org,direct" >> $GITHUB_ENV
          echo "GOSUMDB=sum.golang.org" >> $GITHUB_ENV
          echo "GOWORK=off" >> $GITHUB_ENV
      - name: Configure private modules
        run: |
          go env -w GOPRIVATE=github.com/mycompany/*
      
      - name: Build
        run: go build -v ./...
GitLab private modules:
# Configure GitLab token
git config --global url."https://oauth2:${GITLAB_TOKEN}@gitlab.com/".insteadOf "https://gitlab.com/"
# Or use .netrc
cat > ~/.netrc << EOF
machine gitlab.com
login oauth2
password <gitlab_token>
EOF
Module checksum verification:
# Go automatically verifies checksums against sum.golang.org
# For private modules, prefer disabling the public sumdb for your namespaces:
go env -w GONOSUMDB=github.com/mycompany/*
# Operating a private checksum DB requires a compatible server and verifier key configuration.
# Omit GOSUMDB unless you actually run one and distribute its public key.
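For completeness, a hypothetical sketch of what pointing the go command at a self-hosted checksum database looks like; sum.example.com and the verifier key are placeholders, not a recommendation of any particular server:
# Hypothetical example only: configure a self-hosted checksum database.
# GOSUMDB accepts "name", "name+publickey", or "name+publickey URL".
go env -w GOSUMDB="sum.example.com+<verifier-public-key> https://sum.example.com"
# Namespaces listed in GOPRIVATE / GONOSUMDB still skip checksum-database lookups.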
References:
Defense in Depth (Level 2)
Sandbox Installation
Container-based Go builds:
# Multi-stage build with minimal permissions
FROM golang:1.21-alpine AS builder
# Create non-root user
RUN addgroup -g 1001 -S builder && \
    adduser -u 1001 -S builder -G builder
WORKDIR /build
# Copy go mod files
COPY go.mod go.sum ./
# Download dependencies as non-root
USER builder
RUN go mod download && go mod verify
# Copy source
COPY . .
# Build
RUN CGO_ENABLED=0 go build -o app .
# Production image
FROM alpine:latest
RUN addgroup -g 1001 -S app && \
    adduser -u 1001 -S app -G app
WORKDIR /app
COPY --from=builder /build/app .
USER app
ENTRYPOINT ["./app"]Distroless for production (even more secure):
FROM golang:1.21 AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o app .
# Distroless has no shell, package manager, or unnecessary tools
FROM gcr.io/distroless/static-debian12:nonroot
WORKDIR /
COPY --from=builder /build/app /app
ENTRYPOINT ["/app"]Build with network isolation (after initial download):
# Download all dependencies first
go mod download
# Build without network access
docker run --rm \
  --network=none \
  --memory=1g \
  --cpus=2 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/build \
  -v go-mod-cache:/go/pkg/mod \
  -w /build \
  golang:1.21-alpine \
  go build -mod=readonly -o app .
# Optionally: also run tests under readonly mode
docker run --rm \
  --network=none \
  --memory=1g \
  --cpus=2 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/build \
  -v go-mod-cache:/go/pkg/mod \
  -w /build \
  golang:1.21-alpine \
  go test -mod=readonly ./...
Resource limit explanations:
- --memory: Go compiler is efficient; most projects need <1GB
- --cpus: Go compiles packages in parallel; more cores = faster builds
- --pids-limit: Go creates fewer processes than Java/Node; 100 is sufficient
- --security-opt=no-new-privileges: Prevents privilege escalation
Go-specific considerations:
- Compiled language advantage: No runtime interpretation means lower memory needs than Python/Node.js
- Module cache: Pre-downloading modules (go mod download) reduces build-time memory
- Parallel compilation: Use the -p N flag to control parallelism (default: GOMAXPROCS)
- CGO overhead: invoking the C toolchain can substantially increase build memory usage
- Alpine vs Debian images: Alpine images are smaller but may need more memory for musl libc
Examples for different scenarios:
# Large codebase with explicit parallelism
docker run --rm \
  --network=none \
  --memory=2g \
  --cpus=4 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/build \
  -v go-mod-cache:/go/pkg/mod \
  -w /build \
  golang:1.21-alpine \
  go build -mod=readonly -p 4 -o app .
# CGO build (needs Debian, not Alpine, and more memory)
docker run --rm \
  --network=none \
  --memory=2g \
  --cpus=4 \
  --pids-limit=100 \
  --security-opt=no-new-privileges \
  -v $(pwd):/build \
  -v go-mod-cache:/go/pkg/mod \
  -w /build \
  golang:1.21 \
  go build -mod=readonly -o app .
Monitoring:
docker stats
# Go builds are typically:
# - Memory: Steady, predictable
# - CPU: Spiky during compilation phases
# - Time: 2-4x faster with more CPUs
# If build fails with OOM:
# 1. Check for memory leaks in your code
# 2. Reduce -p value (less parallelism)
# 3. Only then increase --memory
Why Go builds need fewer resources than other ecosystems:
- No JVM overhead (Java)
- No V8 engine (Node.js)
- No interpreter (Python)
- No dynamic linking complexity
- Efficient garbage collector
Verify no CGO (keeps native C code out of the build and produces static binaries):
# Build without CGO
CGO_ENABLED=0 go build -o app .
# Verify static binary (no dynamic libs expected)
file app | grep -qi "statically linked"
# Check for CGO in the entire dependency graph
GOWORK=off go list -deps -f '{{if .CgoFiles}}{{.ImportPath}}{{end}}' ./... | sort -u | sed '/^$/d'
References:
Minimize Dependencies
Analyze dependency tree:
# List all dependencies (ignore go.work)
GOWORK=off go list -m all
# View dependency graph
go mod graph
# Pretty-print with go-mod-graph-chart
go install github.com/nikolaydubina/go-mod-graph-chart@latest
go mod graph | go-mod-graph-chart -stdout > graph.html
# Find why a dependency is included
go mod why -m github.com/some/package
# List direct vs indirect dependencies
GOWORK=off go list -m -json all | jq 'select(.Indirect != true) | .Path'
Remove unused dependencies:
# Automatically remove unused dependencies
go mod tidy
# Verify no unused dependencies remain
go mod tidy && git diff go.mod go.sum
# Identify direct requirements that aren't needed: rely on tidy + diff
before_mod=$(mktemp); before_sum=$(mktemp)
cp go.mod "$before_mod"; cp go.sum "$before_sum"
go mod tidy
if ! git diff --no-index --quiet "$before_mod" go.mod || ! git diff --no-index --quiet "$before_sum" go.sum; then
  echo "Some direct requirements appear unused (tidy changed files). Review diffs above."
fi
rm -f "$before_mod" "$before_sum"Audit for vulnerabilities:
# Using govulncheck (official Go vulnerability scanner)
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...
# Using Nancy (Sonatype)
GOWORK=off go list -json -m all | nancy sleuth
# Using Snyk
snyk test --file=go.mod
Check dependency licenses:
# Install go-licenses
go install github.com/google/go-licenses@latest
# List all licenses
# Human-readable report (no custom template required)
go-licenses report ./...
# Machine-readable CSV
go-licenses csv ./... > licenses.csv
# Find non-permissive licenses
go-licenses check ./... --disallowed_types=forbidden,restricted
Evaluate dependency quality:
# Check repository metadata
go list -m -json github.com/some/package | jq .
# View package documentation
go doc github.com/some/package
# Check for recent updates
go list -m -versions github.com/some/package
References:
Pre-Installation Vetting
Automated security scanning:
# govulncheck for known vulnerabilities
govulncheck ./...
# gosec for security issues in code
go install github.com/securego/gosec/v2/cmd/gosec@latest
gosec ./...
# staticcheck for code quality
go install honnef.co/go/tools/cmd/staticcheck@latest
staticcheck ./...
# OpenSSF Scorecard for dependency repos
scorecard --repo=https://github.com/<org>/<repo>
Verify module authenticity:
# Verify checksums via go.sum
go mod verify
# Check module provenance
go mod download -json github.com/some/package@v1.2.3 | jq .
# Verify module comes from the expected VCS host (private modules)
# Fetch directly from VCS by bypassing proxy and sumdb for the expected pattern.
GOPRIVATE=github.com/your-org/* \
GONOSUMDB=github.com/your-org/* \
GOPROXY=direct \
go mod download -json github.com/your-org/some/package@v1.2.3 | jq -r '.Path+" "+.Version+" "+.Info'
# Optional: assert the repo exists and is reachable over SSH
git ls-remote git@github.com:your-org/some/package.git >/dev/null
Inspect module contents before use:
# Download without installing
go mod download github.com/some/package@v1.2.3
# Find downloaded location
MODCACHE=$(go env GOMODCACHE)
PKG_PATH="$MODCACHE/github.com/some/package@v1.2.3"
# Inspect contents
ls -la "$PKG_PATH"
cat "$PKG_PATH/go.mod"
# Search for suspicious patterns
grep -r "exec\|syscall\|unsafe" "$PKG_PATH"Generate SBOM:
# Using syft
go install github.com/anchore/syft/cmd/syft@latest
syft packages . -o cyclonedx-json > sbom.json
# Using CycloneDX Go
go install github.com/CycloneDX/cyclonedx-gomod/cmd/cyclonedx-gomod@latest
cyclonedx-gomod app -json -output sbom.json
# SBOM covering only module dependencies (no compiled app metadata)
cyclonedx-gomod mod -json -output sbom-mod.json
Pre-commit validation:
#!/bin/bash
# .git/hooks/pre-commit
set -e
echo "Running Go security checks..."
# Verify checksums
go mod verify
# Check for vulnerabilities
govulncheck ./...
# Run security scanner
gosec -quiet ./...
# Ensure go.mod and go.sum are tidy
go mod tidy
if ! git diff --exit-code go.mod go.sum; then
    echo "Error: go.mod or go.sum not tidy"
    exit 1
fi
echo "All checks passed!"References:
Advanced Hardening (Level 3)
Continuous Monitoring
Monitor for dependency updates:
# Check for available updates
go list -m -u all
# Monitor specific package
go list -m -versions github.com/some/package
# Generate update report
go list -m -u -json all | jq -r 'select(.Update) | "\(.Path): \(.Version) -> \(.Update.Version)"'
Automated vulnerability scanning:
# .github/workflows/security-scan.yml
name: Security Scan
on:
  schedule:
    - cron: '0 0 * * *'  # Daily
  push:
    paths:
      - 'go.mod'
      - 'go.sum'
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.21'
      - name: Set deterministic module env
        run: |
          echo "GOPROXY=https://proxy.golang.org,direct" >> $GITHUB_ENV
          echo "GOSUMDB=sum.golang.org" >> $GITHUB_ENV
          echo "GOWORK=off" >> $GITHUB_ENV
      - name: Run govulncheck
        run: |
          go install golang.org/x/vuln/cmd/govulncheck@latest
          govulncheck ./...
      - name: Sanity tests (read-only modules)
        run: GOFLAGS="-mod=readonly" go test ./...
      
      - name: Run gosec
        run: |
          go install github.com/securego/gosec/v2/cmd/gosec@latest
          gosec -fmt=json -out=gosec-report.json ./...
      
      - name: Check for updates with vulnerabilities
        run: |
          go list -m -u -json all | \
          jq -r 'select(.Update) | "\(.Path) \(.Version) -> \(.Update.Version)"' \
          > updates.txt
          if [ -s updates.txt ]; then
            echo "Dependencies with updates available:"
            cat updates.txt
          fi
Runtime monitoring (if applicable):
// Example: Monitor module loading at runtime
package main
import (
    "runtime/debug"
    "log"
)
func init() {
    // Log all loaded modules
    if info, ok := debug.ReadBuildInfo(); ok {
        log.Printf("Main module: %s@%s", info.Main.Path, info.Main.Version)
        for _, dep := range info.Deps {
            log.Printf("Dependency: %s@%s", dep.Path, dep.Version)
        }
    }
}
Dependency diff:
# Compare go.mod between commits
git diff HEAD~1 go.mod
# Detailed comparison with explanations
git show HEAD~1:go.mod > go.mod.old
git show HEAD:go.mod > go.mod.new
diff -u go.mod.old go.mod.new
Rollback procedures:
# Revert go.mod and go.sum
git show HEAD~1:go.mod > go.mod
git show HEAD~1:go.sum > go.sum
# Verify and rebuild
go mod verify
go build -v ./...
# Or use specific version
go get github.com/some/package@v1.2.2  # Downgrade from v1.2.3
go mod tidy
References:
8. References and Resources
Official Documentation & Security Policies
- npm Security Best Practices - Official npm security guidance
- Python Packaging Security - Python packaging authority security guide
- Maven Security - Apache Maven security model and CVE tracker
- Go Security Policy - Official Go vulnerability reporting and disclosure process
Security Scanning & Analysis Tools
- Socket.dev - Multi-ecosystem supply chain security platform
- Snyk - Multi-ecosystem vulnerability scanning and remediation
- OWASP Dependency Check - Dependency vulnerability scanner (Java, JavaScript, Python, Ruby, Node.js)
- govulncheck - Official Go vulnerability scanner
- pip-audit - Python vulnerability scanner
- Bandit - Python SAST tool for security issues
- OpenSSF Scorecard - Repository security health assessment
Research Reports & Statistics
- arXiv: Pinning is Futile (Ghoshal et al., February 2025) - Academic research on transitive dependency attacks and lockfile necessity
- The Register: Socket npm Statistics (March 2023) - Average npm package has 79 transitive dependencies
- Phylum: Hidden Dependencies in Software Networks (April 2023) - Research showing median 751 total dependencies per package
- Verizon 2025 Data Breach Investigations Report - Third-party breaches now 30% of all incidents
- IBM Cost of a Data Breach Report 2025 - Average breach costs $4.44M globally, $10.22M in U.S.
- Ivanti 2025 State of Cybersecurity Report - 75% experienced supply chain attacks, only 33% feel prepared
- Sonatype 2024 State of the Software Supply Chain - Annual report covering Java, JavaScript, Python, and .NET ecosystems
- Red Hat: Software Supply Chain Security Maturity 2024 - Organizational maturity assessment
Incident Analysis & Threat Intelligence
- Cyble Supply Chain Attack Analysis (June 2025) - 25% increase in attack rate, trending analysis
- Kaspersky Supply Chain Attacks Review 2024 - Major 2024 incidents including XZ Utils
- CISA: npm Ecosystem Supply Chain Compromise Alert - Official U.S. government alert on Shai-Hulud campaign
- Sonatype Threat Research - Real-time malicious package tracking
- Phylum Q3 2023 Supply Chain Security Report - Cross-ecosystem monitoring (npm, PyPI, RubyGems, Maven, Crates.io, Go)
- npm Security Advisories - Official npm vulnerability database
- GitHub Advisory Database - Cross-ecosystem security advisories
- PyPI Security Incidents - Python package security reporting
Ecosystem-Specific Guidance
npm/Node.js:
- GitHub: Securing the npm Supply Chain - Official GitHub response to supply chain attacks
- OpenSSF: npm Supply Chain Best Practices - Community-vetted npm security practices
- Snyk: NPM Security Guide - Comprehensive npm attack prevention
- Endor Labs: Defending Against npm Attacks - Practical npm defense strategies
Go:
- CroCoder: Go Supply Chain Attack Analysis - Go-specific supply chain risks
Cross-Ecosystem:
- The Hacker News: Entry Point Vulnerabilities - Attack vectors across Python and npm
- Black Duck: npm Attack Lessons Learned - Security implications analysis
- Mend.io: Self-Spreading Malware Analysis - Technical analysis of 187 compromised packages
Frameworks & Standards
- SLSA (Supply-chain Levels for Software Artifacts) - Industry framework for supply chain security maturity levels
- FINOS: Open Source Supply Chain Security - Institutional landscape and emerging legislation
Community & Open Source Organizations
- OpenSSF (Open Source Security Foundation) - Linux Foundation project for open source security
- OWASP Dependency-Check Project - Community-driven dependency security
- Node.js Security Working Group - Official Node.js security team
- Python Security Response Team - Python vulnerability coordination
Educational Resources
- Checkmarx: Software Supply Chain Security Guide - Comprehensive educational resource
- Oligo Security: 2025 Supply Chain Security Guide - Modern threats and defenses
- Endor Labs Supply Chain Research - Technical deep-dives and case studies
- Chainguard Security Blog - Supply chain security insights and analysis
Enjoyed this article? Share it with your network!