Software Supply Chain Threat Modeling Checklist

December 15, 2022 | By IANS Faculty

This checklist is designed to help you threat model third-party software and services as they are deployed into your environment. It broadly follows the Microsoft Security Development Lifecycle (SDL) but applies a supply chain security focus to its recommendations. Normally, the Microsoft SDL guides an organization’s security practices during product development; this checklist instead uses the SDL to threat model third-party software cybersecurity risk.

We suggest using this checklist as a guide, because security teams will likely have additional organization-specific steps to consider in their threat modeling evaluations.

Provide Training Focusing on:

  • API security: Ensure stakeholders understand the risks of using third-party APIs, particularly around availability.
  • Secure procurement practices: The organization must only procure third-party components from vendors that have completed a risk assessment aligned with the organization’s risk tolerance.
  • Evaluating risk questionnaires: The organization should have a standard for evaluating questionnaires that includes minimum standards of acceptable security practices.

Define Security Requirements:

  • Identify regulatory frameworks: If the third-party component stores, processes or transmits data bound by regulatory frameworks, ensure relevant risk assessments and compliance attestations (as applicable) have been completed.
  • Determine SDL use:
    • Ensure the third party developed the component using a formal SDL.
    • If an SDL was not used, place additional focus on SAST, DAST and penetration testing results of the component.
  • Enumerate contractual requirements: Contractual requirements may impact the organization’s flexibility in choosing components. For instance, some organizations have contractual restrictions on procuring software developed in some foreign countries.


Define Metrics and Compliance Reporting:

  • Define internal metrics: Define meaningful security metrics to measure the security of third-party components both pre-acquisition and post-implementation.
  • Identify security metrics used by third parties: Identify the metrics used by the third party to measure the security of the components being acquired.
  • Review the third party’s bug bar definition: If the third party uses a formal SDL, there should be a bug bar that defines the maximum severity of bugs it will allow to ship. Ensure this aligns with your organization’s security standards.
  • Ensure SaaS offerings meet compliance requirements:
    • Determine whether SaaS offerings are used either directly or indirectly in the service offering (if applicable).
    • Ensure applicable compliance requirements identified in Phase 2 of the SDL (Define Security Requirements) are met.

Perform Threat Modeling:

  • Define how each third-party component will be implemented:
    • Define the use case for each third-party component that will be deployed to ensure trust boundaries can be properly enumerated.
    • Assess the fitness of the component for the role it is expected to fill in the system/application.
  • Define trust boundaries for third-party components: Each third-party component deployed on the network or as part of an application must have its trust boundaries defined so it can be evaluated against threat modeling frameworks such as STRIDE (spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege); a minimal worksheet sketch follows this list.
  • Build a formal data flow diagram: While data flow diagrams have fallen out of favor with agile development, they are an invaluable tool for threat modeling. It is impossible to comprehensively threat model an application or system without a complete picture of data flows.
  • Validate how data is passed: Evaluate whether components pass data between trust boundaries in a manner commensurate with the organization’s risk tolerance.
  • Identify compensating controls: If unacceptable risks introduced by a third-party component or system are discovered via threat modeling, identify possible compensating controls.
  • Assess whether the third party uses formal threat modeling: If so, request a copy of the threat model (at least, at a high level) to understand what risks the component introduces and to gain a better understanding of the third party’s security practices.
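
For teams that want to capture boundary enumeration as a lightweight artifact, the sketch below is a minimal, hypothetical worksheet: every component name, trust boundary and data element shown is an invented example, and only the STRIDE category list comes from the checklist above.

```python
# Minimal sketch: enumerate third-party components, the trust boundaries their
# data crosses, and the STRIDE categories to prioritize at each crossing.
# All component, boundary and data entries below are hypothetical examples.

STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
)

components = [
    {
        "name": "payments-sdk",               # hypothetical third-party library
        "boundary": "app tier -> vendor API over the internet",
        "data": ["cardholder data", "order totals"],
        "stride_focus": ["Spoofing", "Tampering", "Information disclosure"],
    },
    {
        "name": "log-shipper-agent",          # hypothetical third-party agent
        "boundary": "host -> SaaS logging backend",
        "data": ["application logs", "host metadata"],
        "stride_focus": ["Information disclosure", "Repudiation", "Denial of service"],
    },
]

def report(components):
    """Print a simple worksheet grouping STRIDE review items per boundary."""
    for c in components:
        remaining = [s for s in STRIDE if s not in c["stride_focus"]]
        print(f"Component: {c['name']}")
        print(f"  Trust boundary: {c['boundary']}")
        print(f"  Data crossing boundary: {', '.join(c['data'])}")
        print(f"  Prioritized STRIDE categories: {', '.join(c['stride_focus'])}")
        print(f"  Still to review: {', '.join(remaining)}")
        print()

if __name__ == "__main__":
    report(components)
```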

Establish Design Requirements:

  • Determine authentication methods:
    • Identify the authentication mechanisms used by the system or component (see the probe sketch after this list).
    • Determine whether they are sufficiently aligned with the organization’s baseline security posture.

  • Determine logging posture:
    • Identify what logging is supported by the system or component.
    • Determine whether it will be sufficient to investigate incidents or meet regulatory requirements.
    • Additionally, identify whether log generation increases overhead (e.g., debug-only logging) and whether logs are only available at certain licensing levels.

  • Assess design patterns used:
    • Evaluate the design patterns used to build the system or component.
    • Determine how these may impact the threat model.
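
To support the authentication review, below is a hedged sketch that probes an HTTP-fronted component for the authentication scheme it advertises. The URL is a placeholder, the check only covers HTTP challenge-based mechanisms (RFC 7235), and it assumes the requests library is available.

```python
# Sketch: probe an HTTP-based component for its advertised authentication scheme.
# Only covers HTTP challenge/response mechanisms; the URL below is a placeholder.
import requests

def probe_auth(url: str) -> None:
    resp = requests.get(url, timeout=10)
    if resp.status_code == 401:
        # RFC 7235: servers advertise supported schemes in WWW-Authenticate.
        challenge = resp.headers.get("WWW-Authenticate", "<none advertised>")
        print(f"{url} requires authentication; advertised scheme(s): {challenge}")
    elif resp.status_code == 200:
        print(f"{url} responded 200 without credentials -- confirm whether "
              "anonymous access is intended and acceptable.")
    else:
        print(f"{url} returned HTTP {resp.status_code}; review manually.")

if __name__ == "__main__":
    probe_auth("https://vendor-component.example.internal/api/health")  # placeholder URL
```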

Define and Use Cryptography Standards:

  • Ensure outdated cryptographic standards are not used: Outdated cryptographic standards and cipher suites remain common in systems and components; ensure they are either not used in third-party components or are formally accepted as a risk (a sketch for checking negotiated TLS parameters follows this list).
  • Review legal restrictions: Export control requirements may limit how and where third-party systems and components that use applicable cryptographic standards can be sold.
  • Audit cryptographic implementations: Audit them yourself, or determine whether the third party has already obtained such an audit from a qualified assessor. Cryptographic algorithms are only as secure as their implementation.
  • Ensure only established cryptographic standards and libraries are used: Even when established standards are used, flaws can be trivially introduced when developers build libraries.
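
As a quick check on transport cryptography, the sketch below uses Python’s standard ssl module to report what a component’s endpoint actually negotiates. The hostname is a placeholder, and this says nothing about cryptography the component uses internally.

```python
# Sketch: report the TLS version and cipher suite a component's endpoint negotiates.
# The hostname below is a placeholder; this only inspects transport cryptography.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    # The default context enforces certificate validation and, on recent Python
    # versions, a TLS 1.2+ minimum, so a successful handshake meets that baseline.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cipher, _protocol, bits = tls.cipher()
                print(f"{host}:{port} negotiated {tls.version()} "
                      f"with {cipher} ({bits}-bit)")
    except ssl.SSLError as exc:
        # A failed handshake can indicate the endpoint only offers protocol
        # versions or cipher suites below the modern baseline.
        print(f"{host}:{port} handshake failed: {exc}; flag for risk review")

if __name__ == "__main__":
    check_tls("vendor-component.example.com")  # placeholder host
```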

Ensure Usage of Approved Tools:

  • Ensure components are built using modern toolchains: Modern toolchains (compilers, linkers, etc.) and libraries support security features, such as address space layout randomization, stack canaries, etc. Ensure only modern toolchains are used in the development of the component or system.
  • Audit components to ensure compile-time security settings are enabled: Ensure all relevant security settings are enabled at compile time (see the audit sketch after this list). This step must be repeated with each new release because vendors often disable security settings in later releases, likely for troubleshooting purposes.
  • Prefer code built with integrated development environments (IDEs) that support SAST integration: IDEs that support SAST integration provide a better security return on investment than running SAST only at check-in or after development is complete.
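
One way to spot-check compile-time hardening on Linux ELF binaries is sketched below. It assumes readelf is installed, the indicators are rough heuristics, and purpose-built tools such as checksec give more complete coverage.

```python
# Rough sketch: spot-check common compile-time hardening indicators in a Linux
# ELF binary by parsing readelf output (readelf must be installed). These are
# heuristics only; dedicated tools such as checksec are more thorough.
import subprocess
import sys

def readelf(args, path):
    return subprocess.run(["readelf", *args, path],
                          capture_output=True, text=True, check=True).stdout

def audit(path: str) -> None:
    header = readelf(["-h"], path)
    dynamic = readelf(["-d"], path)
    program = readelf(["-l"], path)
    symbols = readelf(["--dyn-syms"], path)

    pie = "DYN" in header                     # PIE binaries report ELF type DYN
    canary = "__stack_chk_fail" in symbols    # stack-protector runtime symbol
    relro = "GNU_RELRO" in program and "BIND_NOW" in dynamic  # full RELRO

    print(f"{path}: PIE={pie} stack_canary={canary} full_RELRO={relro}")

if __name__ == "__main__":
    audit(sys.argv[1])  # e.g., python audit_hardening.py ./vendor_component
```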

Perform SAST:

  • Run SAST independently against components deployed with source code: If significant vulnerabilities (e.g., injection) are discovered, seriously consider replacing the component, because this is evidence the third party is not following an SDL (see the summary sketch after this list).
  • Obtain SAST reports for compiled third-party components: Evaluate them to ensure they align with your organization’s risk tolerance. Because you are evaluating residual risk, SAST reports from the third party need not include vulnerabilities that have already been mitigated.
  • Compare obtained SAST reports to observables in compiled components: This is primarily an audit function. Ensuring observed functionality aligns with SAST reports provides some confidence that SAST reports are accurate and complete. For instance, if injection flaws are observed in testing, but not listed in SAST reports, this discrepancy should be reported.
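
If the component ships with source, a run like the hedged sketch below can feed that comparison. It assumes Semgrep is installed and that the JSON fields match what recent Semgrep versions emit.

```python
# Sketch: run Semgrep against a vendor component's source tree and summarize
# findings by severity. Assumes Semgrep is installed; JSON field names reflect
# recent Semgrep versions and may differ in yours.
import json
import subprocess
import sys
from collections import Counter

def run_semgrep(src_dir: str) -> dict:
    proc = subprocess.run(
        ["semgrep", "--config", "auto", "--json", src_dir],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout)

def summarize(report: dict) -> None:
    severities = Counter(
        finding.get("extra", {}).get("severity", "UNKNOWN")
        for finding in report.get("results", [])
    )
    for severity, count in severities.most_common():
        print(f"{severity}: {count}")
    if severities.get("ERROR"):
        print("High-severity findings present -- weigh replacing the component.")

if __name__ == "__main__":
    summarize(run_semgrep(sys.argv[1]))  # e.g., python sast_summary.py ./vendor-src
```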

Perform DAST:

  • Run DAST against components and applications:
    • Run your organization’s own DAST tooling against components, building test harness applications as needed to exercise libraries (see the smoke-test sketch after this list).
    • If significant vulnerabilities (e.g., injection) are discovered by DAST, seriously consider replacing the component, because this is evidence the third party is not following an SDL. You should not be discovering high-severity vulnerabilities with DAST if a functioning SDL is in place.
  • Obtain DAST reports for compiled third-party components: Evaluate them to ensure they align with your organization’s risk tolerance. Because you are evaluating residual risk, DAST reports from the third party need not include vulnerabilities that have already been mitigated.
  • Compare obtained DAST reports to observables in components: This is primarily an audit function. Ensuring observed functionality is aligned with DAST reports provides some confidence DAST reports are accurate and complete. For instance, if injection flaws are observed in testing, but not listed in DAST reports, this discrepancy should be reported and evaluated as high risk.
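
The sketch below is only a toy set of injection-style probes for cross-checking vendor DAST reports, not a replacement for a full DAST platform such as OWASP ZAP or Burp Suite. The endpoint URL and parameter name are placeholders, and the requests library is assumed to be available.

```python
# Toy sketch: a few injection-style probes against a component's HTTP endpoint,
# used to sanity-check vendor DAST reports. The URL and parameter name below
# are placeholders; this is not a substitute for a full DAST platform.
import requests

PAYLOADS = [
    "' OR '1'='1",                 # classic SQL injection probe
    "<script>alert(1)</script>",   # reflected XSS probe
    "../../etc/passwd",            # path traversal probe
]

def probe(url: str, param: str) -> None:
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=10)
        reflected = payload in resp.text
        print(f"payload={payload!r} status={resp.status_code} reflected={reflected}")
        if resp.status_code >= 500 or reflected:
            print("  Potential issue -- confirm manually and compare against the vendor's DAST report.")

if __name__ == "__main__":
    probe("https://vendor-component.example.internal/search", "q")  # placeholders
```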

Perform Penetration Testing:

  • Ensure the third party has already performed a penetration test: The provider should not expect you to perform a penetration test on its software or components; penetration testing should already have been completed before its applications or components are considered for acquisition/integration. This requirement may be waived for open source components.
  • Review penetration testing reports: Identify what penetration testers have discovered, including issues that have been remediated. Because penetration tests should only be performed after SAST and DAST findings have been remediated (or risks accepted), there is value in examining even those penetration testing findings that have been remediated because they highlight issues in earlier stages of the SDL.
  • Perform an independent penetration test: It should go without saying, but the acquiring organization should perform its own penetration test. The sad reality is that there is little standardization on what passes for a penetration test. Acquiring organizations should always evaluate risk with a trusted testing firm (or use trusted in-house assets).


Evaluate the Vendor’s Incident Response Process:

  • Determine whether the third party has a public vulnerability reporting policy:
    • Identify how someone not connected to the supplier would disclose a discovered vulnerability.
    • Suppliers without a public disclosure policy should be avoided.
  • Discuss the vendor’s incident response process: Cover its timeline for addressing vulnerabilities, its process for communicating with impacted parties, etc.
  • Research previously disclosed vulnerabilities:
    • Understand the history of vulnerabilities in the component (see the query sketch after this list).
    • Pay particular attention to the same class of vulnerability recurring over time. A single batch of command injection vulnerabilities reported at once is less concerning than the same number reported and patched over a one-year period, which indicates developers are not addressing root causes.
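
As a starting point for this research, the sketch below queries the public NVD 2.0 API by keyword and groups CVEs by CWE to highlight recurring weakness classes. The keyword is a placeholder, and the response field names reflect the NVD 2.0 schema as documented at the time of writing.

```python
# Sketch: pull previously disclosed CVEs for a component from the NVD 2.0 API
# and group them by weakness class (CWE) to spot recurring root causes.
# The product keyword is a placeholder; unauthenticated requests are rate-limited.
import requests
from collections import defaultdict

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cves(keyword: str) -> list:
    resp = requests.get(NVD_URL, params={"keywordSearch": keyword}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

def group_by_cwe(vulns: list) -> None:
    by_cwe = defaultdict(list)
    for item in vulns:
        cve = item["cve"]
        cwes = [
            desc["value"]
            for weakness in cve.get("weaknesses", [])
            for desc in weakness.get("description", [])
        ] or ["CWE-unknown"]
        for cwe in cwes:
            by_cwe[cwe].append((cve["id"], cve.get("published", "")[:10]))
    for cwe, entries in sorted(by_cwe.items(), key=lambda kv: -len(kv[1])):
        print(f"{cwe}: {len(entries)} CVEs")
        for cve_id, published in entries:
            print(f"  {cve_id} (published {published})")

if __name__ == "__main__":
    group_by_cwe(fetch_cves("example-vendor-component"))  # placeholder keyword
```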


