Secure Software Development Fundamentals

These notes are from the free Linux Foundation courses Secure Software Development: Requirements, Design, and Reuse (LFD104x) and Secure Software Development: Verification and More Specialized Topics (LFD106x).

Security requirements are often divided into three broad objectives (plus one more):

  • Confidentiality: “No unauthorized read” - users are only allowed to read the information they are authorized to read.

  • Integrity: “No unauthorized modification (write or delete)” - users are only allowed to modify the information they are authorized to modify.

  • Availability: “Keeps working in the presence of attack” - the software keeps working while under attack.

  • Non-repudiation or accountability: If someone takes specific actions, the system should be able to prove it, even if the person involved denies it later.

The security objectives need some supporting mechanisms such as:

  • Identity & Authentication (I&A): Require users to identify themselves and prove (authenticate) their identity before doing anything that requires authorization.

  • Authorization: Determine what that user is allowed (authorized) to do before deciding to do it.

  • Auditing (aka logging): Record essential events to help detect and recover from attacks. Auditing is often critical for implementing non-repudiation/accountability requirements.
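
A minimal sketch (not from the course) of how these three mechanisms fit together in application code; the user store, roles, and function names are hypothetical:

```python
# Hypothetical sketch: identification & authentication, authorization, and auditing.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical user database: username -> set of roles (credential storage elided).
USER_ROLES = {"alice": {"reader", "editor"}, "bob": {"reader"}}

def authenticate(username: str, password: str) -> bool:
    """Identity & Authentication: verify the claimed identity.
    A real implementation must verify a credential (e.g., a password hash)."""
    return username in USER_ROLES

def authorize(username: str, action: str) -> bool:
    """Authorization: decide whether this authenticated user may perform the action."""
    required_role = {"read": "reader", "edit": "editor"}[action]
    return required_role in USER_ROLES.get(username, set())

def handle_request(username: str, password: str, action: str) -> str:
    if not authenticate(username, password):
        audit_log.warning("authentication failure for %s", username)  # Auditing
        return "denied"
    if not authorize(username, action):
        audit_log.warning("%s not authorized for %s", username, action)  # Auditing
        return "denied"
    audit_log.info("%s performed %s", username, action)  # Auditing
    return "ok"

print(handle_request("bob", "pw", "edit"))  # denied: bob lacks the editor role
```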

Privacy

The International Association of Privacy Professionals (IAPP) defines privacy as “the right to be let alone, or freedom from interference or intrusion”. More specifically, it says, “Information privacy is the right to have some control over how your personal information is collected and used… various cultures have widely differing views on what a person’s rights are regarding privacy and how it should be regulated.”

They also contrast privacy and security: “Data privacy is focused on the use and governance of personal data—things like putting policies in place to ensure that consumers’ personal information is being collected, shared and used in appropriate ways.”

The most straightforward approach to privacy is not to collect information about individuals unless you specifically need it. If you do not collect the information, you cannot divulge it later, and you do not have to determine how to prevent its misuse. Eliminating it entirely is best from a privacy point of view.

GDPR

The European General Data Protection Regulation (GDPR) protects the personal data of subjects in the European Union (EU). It applies whether or not the data processing occurs within the EU and whether or not the subjects are European citizens. As a result, the GDPR applies in many circumstances. The Linux Foundation has a summary of the GDPR that highlights issues important to software developers.

Telemetry

Software sometimes includes functionality to collect telemetry data about the software’s use or performance. Telemetry data is often collected through a “phone home” mechanism built into the software, which sends this data elsewhere.

Telemetry data is especially fraught with privacy and confidentiality issues. End users are typically presented with an option to opt in to share statistical data with the software developers, but that agreement may not be adequate. Ideally, end users should be fully aware of what data may be sent to the vendor or other third party when using the software, and should be able to control that data transfer.

The Linux Foundation’s Telemetry Data Collection and Usage Policy presents a brief discussion of some of the issues that should be considered before implementing telemetry data collection, as well as discussing the Foundation’s approach to managing the use of telemetry by its open-source project communities.

Risk Management

Risks are potential problems. The key to developing adequately secure software is to manage the risks of developing insecure software, before they become problems.

The Failure of Risk Management: Why It’s Broken and How to Fix It defines risk management as the “identification, evaluation, and prioritization of risks… followed by coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events”.

The US Department of Defense Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs divides risk management into the following activities:

  • Risk Planning: Determine your project’s risk management process.

  • Risk Identification: Identify what might go wrong. A good trick is to look for similar projects - what risks and problems did they have? Writing this list down is a good idea so it can be shared. For our purposes, we are concerned about security-related risks.

  • Risk Analysis: Determine the two key attributes of a risk: the likelihood of the undesirable event and the severity of its consequences. A risk becomes more important as its likelihood and/or severity increases (a tiny scoring sketch follows this list).

  • Risk Handling: Determine what we will do about the risk. There are several options for each risk:

    • Acceptance (& Monitoring): The risk is accepted but monitored and communicated to its stakeholders (including its users). This is reasonable if the likelihood or severity is low.

    • Avoidance: The risk is eliminated by making some change so that its likelihood is zero or its severity is irrelevant. For example, choose a programming language where certain vulnerabilities cannot happen (eliminating the risks from those vulnerabilities).

    • Transfer: The risk is transferred to someone else, e.g., by buying insurance or changing the system so that another component has that risk and its developers accept it.

    • Control: Actively reduce the risk to an acceptable level. Since the importance of risk depends on its likelihood and severity, this means changing things to make the likelihood and/or severity low (or at least lower). For example, we might:

      • Ensure all developers know about certain kinds of common mistakes that lead to a particular kind of vulnerability (so that they can avoid them),

      • Use approaches (such as secure design, specific programming languages, and APIs) that are designed to make those vulnerabilities less likely,

      • Use tools & reviews to catch mistakes (including vulnerabilities), and

      • Harden the system. Hardening a system means modifying a system so that defects are less likely to become security vulnerabilities.

  • Risk Monitoring: Determine how the risks have changed over time. Over time, you should “burn down” your risks - that is, the steps you are taking should be continuously reducing the risk likelihood or severity to acceptable levels.
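
As an illustration of the Risk Analysis and Risk Handling steps above, here is a tiny hypothetical risk-register sketch that ranks risks by likelihood times severity; the scales and entries are made up:

```python
# Hypothetical risk register: importance estimated as likelihood x severity,
# so the riskiest items can be handled (or burned down) first.
risks = [
    {"name": "SQL injection in search", "likelihood": 4, "severity": 5, "handling": "control"},
    {"name": "DoS via large uploads",   "likelihood": 2, "severity": 3, "handling": "accept"},
]

for risk in sorted(risks, key=lambda r: r["likelihood"] * r["severity"], reverse=True):
    score = risk["likelihood"] * risk["severity"]  # simple likelihood x severity estimate
    print(f'{risk["name"]}: score={score}, handling={risk["handling"]}')
```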

Bruce Schneier in The Process of Security, has said, “security is a process, not a product… there’s no such thing as perfect security. Interestingly enough, that’s not necessarily a problem. … security does not have to be perfect, but the risks have to be manageable…”

Checklists Are Not Security: Do not equate checklists, guidelines, and tips with security. They are often helpful because they can help you identify risks and reasonable ways to handle them. Good checklists, guidelines, and guidance can save you time and trouble. They are also excellent aids for helping others evaluate the security of some software.

Development Processes/Defense-in-Breadth: Individual Software Development & Deployment Processes

Whenever you develop software, there are certain processes that all developers have to perform. These include:

  • Determine requirements (what the software must do). Make sure we know what security requirements it needs to provide.

  • Determine architectural design (how to divide the problem into interacting components to solve it).

  • Select reusable components (decide reusable packages/libraries). We must evaluate the components used since any of their vulnerabilities may become vulnerabilities of the software we are developing. These reused components come from somewhere and depend transitively on other components. The supply chain is the set of all those dependencies, including where they come from and how they eventually get to the developed software.

  • Implement it (write the code). Most security vulnerabilities made during implementation are specific common kinds; once we know what they are, we can avoid them.

  • Verify it (write/implement tests and use analyzers to gain confidence that it does what it is supposed to). Test to ensure the system is secure, and use tools to find vulnerabilities before attackers find them.

  • Deploy it. Ensure that users can get the correct version, that it is secure by default, and that it can easily be operated securely.

Mistakes

  • A common mistake is to try to execute these software development processes in a strict sequence (figure out all the requirements, then work out the entire design, then implement the entire system, then verify it). Attempting to create software in this strict sequence is called the waterfall model.

  • Another common mistake is implementing software components independently and only integrating and testing them together once everything is completed independently. This is typically a mistake because this leads to severe problems getting the components to work together.

Recommendation

A highly recommended practice is to use Continuous Integration (CI): frequently merging working copies of development into a shared mainline (e.g., from once every few days up to many times a day). This routine merging reduces the risk that components will not work together, a risk that grows when integration is delayed. However, successful CI requires a way to determine whether the components are working together. This is resolved by using a CI pipeline - a process that runs whenever something is merged to ensure that it builds and passes a set of automated tests and other checks.

  • Continuous Integration, Delivery and Deployment: A Systematic Review on Approaches, Tools, Challenges and Practices defines Continuous Delivery (CDE) aims to ensure “an application is always at production-ready state after successfully passing automated tests and quality checks [by employing practices] to deliver software automatically to a production-like environment.”

  • Continuous Deployment (CD) “goes a step further [than continuous delivery] and automatically and continuously deploy the application to production or customer environments.”

  • Revisiting “What Is DevOps” says DevOps focuses on coordination and cooperation between the software development (Dev) and IT operations (Ops) teams, e.g., to shorten development and deployment time.

  • What is DevSecOps? says DevSecOps (also called SecDevOps) is DevOps, but specifically integrating security concerns into the development and operations process.

All these depend on automated tests and quality checks, and from a security perspective, what is critical is that tools to check for security vulnerabilities and potential security issues need to be integrated into those automated tests and quality checks. For example, ensure that tools in the CI pipeline check for various security issues to detect any security problems early.
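
As a hedged illustration, a CI pipeline could invoke a script like the following on every merge; the specific tools (bandit, pip-audit) are just examples for a Python project and would be swapped for whatever scanners fit the stack:

```python
# Sketch of a CI security-check step (assumptions, not from the course).
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src"],   # static analysis for common Python security issues
    ["pip-audit"],             # report known vulnerabilities in dependencies
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            failed = True      # keep running the remaining checks, but fail the pipeline
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```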

Simply inserting some “security tools” into an automated test suite, by itself, tends to be ineffective. Security tools will not generally know what the software is supposed to do (the requirements). For example, security tools will not know what information is confidential. Security tools usually cannot detect fundamental problems in software design. Even if they could, fixing design problems differs from what detection tools do. Security tools often miss vulnerabilities, especially if the software is poorly designed. Most importantly, information from security tools generally does not make sense to developers if they do not have a basic understanding of security. An old phrase remains accurate: “A fool with a tool is still a fool”.

Protect, Detect, Respond

US NIST Cybersecurity Framework identifies five concurrent and continuous functions organizations should apply in their operations to manage cybersecurity risk:

  • Identify: “Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities”.

  • Protect: “Develop and implement appropriate safeguards to ensure delivery of critical services”.

  • Detect: “Develop and implement appropriate activities to identify the occurrence of a cybersecurity event”.

  • Respond: “Develop and implement appropriate activities to take action regarding a detected cybersecurity incident”.

  • Recover: “Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident”.

Vulnerabilities

A vulnerability is simply a failure to meet some security requirements. Typically, vulnerabilities are unintentional, but vulnerabilities can be intentional.

Reporting and Handling Vulnerabilities - A Brief Summary

Security researchers make finding vulnerabilities a part of their career.

Usually, these vulnerability finders report the vulnerability to the software supplier(s) through a “timed coordinated disclosure” process. The finders privately report the vulnerability to the supplier(s), giving the supplier(s) some limited time (called the “embargo time”) to fix the vulnerability. After this embargo time (typically 14-90 days), or once the vulnerability has been fixed and users have had an opportunity to install the upgraded software version, the vulnerability is publicly disclosed. Sometimes this process is just called “coordinated disclosure”. Either way, the vulnerability will eventually be publicly disclosed even if the supplier fails to fix it promptly.

In practice, things are more complicated. Often there are multiple suppliers and other stakeholders involved. It is critically important that a developer/supplier prepare ahead of time so that people can easily report vulnerabilities, privately discuss the issue with trusted parties, and rapidly fix any issues. In addition, there is so much software and so many vulnerabilities that there is a need to track vulnerabilities. This need for tracking led to the creation of Common Vulnerabilities and Exposures (CVE).

Common Vulnerabilities and Exposures (CVEs)

Common Vulnerabilities and Exposures (CVE) is a global dictionary of (some) publicly disclosed cybersecurity vulnerabilities. The goal of CVE is to make it easier to share data about vulnerabilities. A CVE entry has an identification number (ID), description, and at least one public reference. CVE IDs have the form CVE-year-number, where the year is the year it was reported. The number is an arbitrary positive integer to ensure that CVE IDs are unique. There are databases, such as the US National Vulnerability Database (NVD), that track the current public set of CVE entries.

CVEs are assigned by a CVE Numbering Authority (CNA). A CNA is simply an organization authorized to assign CVE IDs to vulnerabilities affecting products within some scope defined in advance. The primary CNA (aka “CNA of last resort”) can assign a CVE even if no one else can (MITRE currently fills this role). Many CNAs are software product developers (such as Microsoft and Red Hat) who assign CVE numbers for their products. There are also third-party coordinators for vulnerabilities, such as the CERT Coordination Center, who are CNAs. Each CNA is given a block of integers to use in CVEs. This means that CVE-2025-50000 is not necessarily vulnerability number 50,000 of 2025; the number merely comes from the block of IDs given to the CNA that assigned it.

Many publicly-known vulnerabilities do not have CVE assignments. First of all, CVEs are only assigned if someone requests an assignment from a CNA; if no request is made, there will be no CVE. In addition, CVEs are intentionally limited in scope. CVEs are only granted for publicly released software (including pre-releases if they are widely used). CVEs are generally not assigned to custom-built software that is not distributed.

Top Kinds of Vulnerabilities

The vast majority of vulnerabilities can be grouped into categories. That turns out to be very useful; once we identify categories, we can determine which ones are common and what steps we can take to prevent those vulnerabilities from reoccurring.

The Common Weakness Enumeration (CWE) is a long list of common weaknesses. In the CWE community’s terminology, a “weakness” is a category (type) of vulnerability. Note the difference between CVE and CWE: a CWE identifies a type of vulnerability, while a CVE identifies a specific vulnerability in a particular (family of) products. Each CWE has an identifier with a number, e.g., CWE-20.

People have identified the most critical or top kinds of vulnerabilities based on their likelihood and severity. Two of the most popular lists of vulnerabilities are:

  • OWASP Top 10 Web Application Security Risks: This list, developed by the Open Web Application Security Project (OWASP), represents a “broad consensus about the most critical security risks to web applications.”

  • CWE Top 25 List: lists the most widespread and critical vulnerabilities. The Common Weakness Enumeration (CWE) Team created it by analyzing data about publicly-known vulnerabilities over many years. This list can be applied to any software; it is common to apply it to software that is not a web application (since the OWASP list focuses on web applications). One interesting quirk: they identify significant weaknesses beyond the first 25, so you can see numbers larger than 25 associated with this list.

OWASP has other top 10 lists for different kinds of software. For example:

  • OWASP Mobile Top 10 - the mobile applications top 10

  • OWASP Internet of Things Project - the Internet of Things (IoT) top 10.

Secure Design Principles

What Are Security Design Principles?

When you write non-trivial software, you have to break the problem into smaller components that work together. This process of deciding how to break a problem into components and how they will work together is called design or architectural design.

Design principles are broadly accurate guides based on experience and practice. Put another way, design principles are rules of thumb that help us avoid a bad design and guide us toward a good design instead. Secure design principles do not guarantee security, though; they are an aid to thinking, not a replacement for thinking.

When thinking about our design, we must consider which components we can trust (and how much) and which we cannot necessarily trust. Some design principles talk about a trust boundary, which is simply the boundary between the trusted components and the non-trusted ones. Where the trust boundary lies depends on what software we are developing:

  • If we are writing a server-side application, we presumably trust what we are running on (e.g., the computer, operating system, and container runtime if there), but not the external client systems (some of which might be controlled by an attacker). The trust boundary is between the server and the clients.

  • If we are writing a mobile (smartphone) application that talks to a server we control, we presumably trust that remote server. We should not trust the communication path between the mobile application and the server (so we will want to use TLS to encrypt it). We certainly should not trust other applications on the smartphone, unless we have a particular reason to trust one. So there is clearly a boundary between our mobile application and (1) the general Internet and (2) other mobile applications. Trust is often not absolute; we probably trust the smartphone operating system to work on behalf of its user, but that user might be an attacker, so we should probably ensure that some secrets never get into the mobile application.

The Protection of Information in Computer Systems identifies the following secure design principles:

  • Least privilege: Each (human) user and program should operate using the fewest privileges possible. There are several ways to implement least privilege (a minimal privilege-dropping sketch follows this list):

    • Don’t give a program any special privileges (where practical)

    • Minimize the special privileges a program gets, including minimizing whatever data is accessible to it

    • Permanently give up privileges as soon as possible

    • If you cannot permanently give up privileges, try to minimize the time the privilege is active

    • Break the program into different modules, and give special privileges to only one or a few modules (portions of the program)

    • Minimize (limit) the attack surface

    • Don’t just accept data from a potential attacker; check it thoroughly before accepting it.

    • Sandbox your program

    • Minimize privileges for files & other resources

  • Complete mediation (aka non-bypassability): Every access attempt must be checked; position the mechanism so it cannot be subverted.

  • Economy of mechanism (aka simplicity): The system, particularly the part that security depends on, should be as simple and small as possible.

  • Open design: The protection mechanism must not depend on attacker ignorance. Instead, we should act as if the mechanism is publicly known and depend on the secrecy of relatively few easily changeable items like passwords or private keys. An attacker should not be able to break into a system just because the attacker knows how it works. “Security through obscurity” generally does not work.

  • Fail-safe defaults: The default installation should be the secure installation. If it is not certain that something should be allowed, don’t allow it.

  • Separation of privilege (e.g., use two-factor authentication): Access to objects should depend on multiple conditions (such as having a password). That way, if an attacker manages to break one condition (e.g., by stealing a key), the system remains secure. Note: Sometimes programs are broken into parts, each with a different privilege. That approach is sometimes confusingly called “privilege separation”, but breaking a program into parts with different privileges is something else - in this terminology, it is an example of least privilege.

  • Least common mechanism (aka minimize sharing): Minimize the amount and use of shared mechanisms. Avoid sharing files, directories, operating system kernel execution, or computers with something you do not trust because attackers might exploit them.

  • Psychological acceptability (aka easy to use): The human interface must be designed for ease of use, so users will routinely and automatically use the protection mechanisms correctly.
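
A minimal privilege-dropping sketch for the least-privilege principle above, assuming a POSIX system and hypothetical appuser/appgroup accounts; it performs the privileged step early and then permanently gives up root:

```python
# Sketch (assumptions, not from the text): a service that starts as root only to bind
# a low port, then permanently drops privileges before touching untrusted input.
import os
import pwd
import grp
import socket

def drop_privileges(username: str = "appuser", groupname: str = "appgroup") -> None:
    if os.getuid() != 0:
        return                 # already unprivileged
    gid = grp.getgrnam(groupname).gr_gid
    uid = pwd.getpwnam(username).pw_uid
    os.setgroups([])           # drop supplementary groups
    os.setgid(gid)             # drop group first, while we still have permission to do so
    os.setuid(uid)             # permanently give up root

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 80))   # privileged operation done early
server.listen()
drop_privileges()              # least privilege from here on
```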

Reused software

“Reused software” includes all the software we depend on when the software runs, also known as its dependencies. There are many important things to consider when selecting reusable software.

  • Is it easy to use securely? If something is hard to use securely, the result is far more likely to be insecure.

  • Is there evidence that its developers work to make it secure?

  • Is it maintained? Unmaintained software is a risk.

    • If the software is OSS, we can look at its repository and see its commit history.

    • Are there recent releases or announcements from its developer?

  • Does it have significant use?

  • What is the software’s license? Licenses are technically not security, but licenses can significantly impact security.

  • If it is essential, what is our evaluation of it? If the software is important to us, especially if it is OSS, we can download and examine it ourselves. If we decide that we want to do just a brief review, here are things to consider:

    • When we review the more detailed artefacts (e.g., the source code), is there evidence that the developers were trying to develop secure software (such as rigorous input validation of untrusted input and the use of prepared statements)?

    • Is there evidence of insecure or woefully incomplete software (such as a forest of TODO statements)?

    • What are the “top” problems reported when running the software through static analysis tools (that examine the code to look for problems)?

    • Is there evidence that the software is malicious? The authors of the Backstabber’s Knife Collection: A Review of Open Source Software Supply Chain Attacks article note traits that are especially common in malicious packages: most malicious packages perform malicious actions during installation (so check the installation routines), most aim at data exfiltration (so check for extraction and sending of data like ~/.ssh or environment variables), and about half use some sort of obfuscation (so look for encoded values that end up being executed).

  • Most software depends on other software, which often depends on other software with many tiers. A software bill of materials (SBOM) is a nested inventory that identifies the components that comprise a larger piece of software. Many ecosystems have ecosystem-specific SBOM formats. Some SBOM formats support arbitrary ecosystems: Software Package Data Exchange (SPDX), Software ID (SWID), and CycloneDX. When an SBOM is available for a component we are considering using, it is often easier to use that data to help answer some of the questions listed above. It is also good to provide an SBOM to potential users of our software for the same reasons.
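
For example, a short hedged sketch that lists components from a CycloneDX JSON SBOM (the file name is hypothetical), which helps answer the evaluation questions above from data rather than guesswork:

```python
# Sketch: enumerate components recorded in a CycloneDX JSON SBOM.
import json

with open("sbom.cyclonedx.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    # Each CycloneDX component typically carries at least a name and version.
    print(component.get("name"), component.get("version"), component.get("purl", ""))
```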

Downloading and Installing Reusable Software

  • Make sure we have exactly the correct name. A common attack is called “typosquatting”. In typosquatting, an attacker will create a domain name or package name that is intentionally and maliciously similar to a widely-used software component, and use that misleading name to spread a malicious version of that software.

    • Check for common misleading name changes. It is easy to switch between a dash (-) and an underscore (_). One (1) and lower-case L (l) look similar, as do zero (0) and capital O (O). (See the name-normalization sketch after this list.)

    • Check how popular the package is. Generally, the more popular version is the correct version. If we are using a package manager, compare the download counts of similarly-named packages; the ones with lower counts may be typosquatting attacks.

  • Make sure to download and install the software in a trustworthy way:

    • Directly download the software from its main site or from a redistribution site that we have good reason to trust (such as your Linux distribution’s repository or programming language package manager’s standard repository).

    • Typically, this means that we should use https: (TLS) to download the software, not http:, since this generally ensures that we are contacting the site we requested and prevents attackers from modifying the software en route to us.

    • Try to avoid using pipe-to-shell (such as curl … | sh) to download and install the software.

    • Where important and practical, try to verify that the package is digitally signed by its expected creators (or at least its re-distributors).
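
The name-normalization sketch referenced above; it only illustrates the confusable-character idea and is no substitute for checking popularity and provenance:

```python
# Sketch: collapse easily-confused characters before comparing a candidate package
# name against the name you intended to install.
CONFUSABLES = str.maketrans({"-": "_", "1": "l", "0": "o"})

def normalize(name: str) -> str:
    return name.lower().translate(CONFUSABLES)

def looks_like(candidate: str, intended: str) -> bool:
    """True if the candidate collapses to the same normalized form as the intended name."""
    return normalize(candidate) == normalize(intended)

print(looks_like("python_dateuti1", "python-dateutil"))  # True: dash/underscore and 1/l confusion
print(looks_like("nunpy", "numpy"))                      # False: normalization alone misses swapped letters
```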

In practice, we will have many reused software components that need to be updated occasionally. Sometimes a vulnerability will be found in one, in which case we need to be notified quickly and be prepared to update rapidly. As a result, we need to manage reused components:

  • Use package managers, version control systems (such as git), build tools, and automated tests so that we can quickly determine exactly what versions you have of every reused component and can rapidly update any of them.

  • Only depend on documented interfaces and behaviour, and avoid obsolete interfaces, to maximize the likelihood of being able to update reused software when necessary. Expect to update the software we use, including your underlying platform.

  • It is foolish to assume that software will never need to be rapidly updated. Do not modify OSS and create your own “local fork”. Suppose a vulnerability is fixed in a later version of that OSS. In that case, it will become increasingly difficult to incorporate that fix. Instead, if you need to modify some OSS to fit your needs, work with the original upstream OSS project to incorporate your improvements into the official version. Then newer versions of that OSS, including ones that fix vulnerabilities, will also include the necessary capabilities.

  • Keep your reused software relatively up-to-date. Suppose your reused components go very far out-of-date. In that case, replacing a vulnerable version with a fixed version may be very difficult. Monitor to determine if any of the software versions has had a publicly-known vulnerability discovered.

Secure software

By secure software, we mean software:

  • that is much harder for attackers to exploit,

  • that limits damage if exploitation is successful, and

  • where vulnerabilities can be fixed, and the damage from any exploitation at least partially recovered from, relatively quickly.


Verification

Verification can be defined as determining whether or not something complies with its requirements (including regulations, specifications, and so on).

There are two main technical categories of verification:

  • Static analysis is an approach for verifying software (including finding defects) without executing software. This includes tools that examine source code, looking for vulnerabilities (e.g., source code vulnerability scanning tools). It also includes humans reading code and looking for problems.

  • Dynamic analysis is an approach for verifying software (including finding defects) by executing software on specific inputs and checking the results. Traditional testing is a kind of dynamic analysis. Fuzz testing, where many random inputs are sent to a program to see if it does something it should not, is also an example of dynamic analysis.

Analysis and tool reports can be classified by whether a defect was reported and whether that report was correct:

  • Reported (a defect):

    • True positive (TP): Correctly reported (a defect).

    • False positive (FP): Incorrectly reported (a “defect” that is not a defect) (“Type I error”).

  • Did not report (a defect):

    • True negative (TN): Correctly did not report (there is no defect to report).

    • False negative (FN): Incorrectly did not report (a defect that is there) (“Type II error”).

Refer to SATE V Report: Ten Years of Static Analysis Tool Expositions

Generic Bug-Finding Tools: Quality Tools, Compiler Warnings, and Type-Checking Tools

If we are starting a new project, it is essential to turn on as many of these tools as possible (including compiler warnings).


The idea behind these tools is that many vulnerabilities have specific patterns. A tool designed to look for those patterns can report similar vulnerabilities.

Static analysis

The kind of analysis done by tools that report known vulnerabilities in reused components has a variety of names, including software composition analysis (SCA), dependency analysis, and origin analysis.

Software Composition Analysis (SCA)/Dependency Analysis

A key part of preparation is to use a tool that can determine what software we reuse and report on any publicly-known vulnerabilities in those reused components.

It is far better to apply some good practices. First, when reusing software, use a package manager to manage it, one that records the specific version numbers in a standard format that can be recorded in the version control system.

Speed is essential when a component we depend on has a publicly-known vulnerability; we know this will happen sometimes. So, trying to handle this entirely manually is a mistake. We should instead make sure that:

  • We have at least one SCA tool that automatically reports when there is a known vulnerability in one of the system’s components.

  • We can quickly update a component using a simple command by telling a package manager to switch to a different component version and check that change.

  • We can automatically test the modified configuration to ensure that updating the component does not break anything important.

  • We can quickly deploy it (if you deploy directly) and/or distribute it (if we distribute the software to others).

There are many SCA tools available.

  • If you use GitHub or GitLab, they provide some basic SCA reporting of known vulnerabilities in many components for free (assuming we use a standard package management format they can process). Linux Foundation projects can use LFx which provides this service.

  • There are a variety of suppliers that provide or sell such tools. This includes OWASP Dependency Check (OSS), Sonatype’s Nexus products, Synopsys’ Black Duck, Ion Channel Solutions, and Snyk. Some package managers include this capability or have a plug-in (e.g. Ruby’s bundler has bundle-audit).
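
As a hedged sketch of what SCA tools automate, the following queries a public vulnerability database (assuming the OSV.dev v1 query API) for a specific dependency version; real projects would normally rely on an SCA tool rather than hand-rolled queries:

```python
# Sketch: ask OSV.dev whether a specific dependency version has known vulnerabilities.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    query = {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for vuln in known_vulns("urllib3", "1.26.0"):
    print(vuln.get("id"), vuln.get("summary", ""))
```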

Dynamic Analysis

Dynamic analysis is an approach for verifying software (including finding defects) by executing software on specific inputs and checking the results.

Traditional testing

The best-known dynamic analysis approach is traditional testing. We select specific inputs to send to a program and check whether the result is correct. We can test specific program parts, such as a method or function (unit testing). You can also send sequences of inputs to the system integrated as a whole (called integration testing). Most people combine unit and integration testing. Unit testing is fast, and it can be easy to test many special cases, but unit testing often misses whole-system problems that integration testing is much more likely to detect.

If your software needs to work correctly, it is critically important that we have a good test suite of automated tests and apply that test suite in your continuous integration pipeline. By good, we mean “relatively likely to detect serious problems in the software”. While this does not guarantee no errors, a good test suite dramatically increases the probability of detection. It is essential for detecting problems when upgrading a reused component.

If we deliver software and a defect is later found and fixed, for each fix, we should think about adding another test for that situation. Often, defects that escape to the field indicate a subtle mistake that might reoccur in a future version of the system. In that case, add test(s) so that if that problem recurs, it will be detected before releasing another version.

If we are contracting someone else to write (some of) our software, and we do not want to be controlled by them later, we need to make sure that we not only get the application source code (and the rights to modify it further), but also all the build instructions and tests necessary to change the software confidently. After all, if we cannot easily build or test a software modification, there is no safe way to make modifications and ship them.

In theory, we can create manual tests, that is, write a detailed step-by-step manual procedure and have a human follow those test steps.

In practice, manual tests are almost always “tests that won’t be done” because of their high costs and delay. Another problem with manual testing is that it discourages continuous testing since it costs time and money to do those manual tests. So, avoid manual testing in favour of automated testing where practical. In some cases, we may need to do manual testing, but remember that every manual test is a test that will rarely (if ever) be done, making that test far less useful. Note that what we are describing as manual tests are different from undirected manual analysis (where humans use the software without a step-by-step process). Undirected manual analysis can be quite effective but is entirely different from manual tests.

A tricky problem in testing is when a resource is not available. If the test requires some software, hardware, or data that we don’t have, we cannot directly test it. Typically, the best you can do in those cases is simulate it (e.g., with mocked software, simulated hardware, or a stand-in dataset). If that is the best you can do, it is usually worthwhile. But don’t confuse the simulation with reality; the test results may be misleading due to differences between the actual resource and its stand-in.

From a security perspective, including tests for security requirements is essential. In particular, test “what should happen” and “what should not happen”. Often, people forget to test what should not happen (aka negative testing).
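
A small pytest sketch of negative testing (all names hypothetical): one test for what should happen and one for what should not:

```python
# Sketch: test the happy path AND that unauthorized access is rejected.
import pytest

def read_record(user_roles: set, record: str) -> str:
    if "reader" not in user_roles:
        raise PermissionError("not authorized")
    return f"contents of {record}"

def test_authorized_read_succeeds():          # what should happen
    assert read_record({"reader"}, "r1") == "contents of r1"

def test_unauthorized_read_is_rejected():     # what should NOT happen
    with pytest.raises(PermissionError):
        read_record(set(), "r1")
```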

One approach to developing software is called test-driven development (TDD). To over-summarize, in TDD, the tests for a new capability are written before the software implements the capability. This has some advantages; in particular, it encourages writing practical tests that check what they are supposed to check and developing testable software.

Test coverage

We can always write another test; how do we know when we have written enough tests? It takes time to create and maintain tests, and tests should only be added if they add value. This turns out to be a complex question, and much depends on how critical our software is.

Two simple measurements that can help answer this question are statement coverage and branch coverage:

  • Statement coverage is the percentage of program statements that have been run by at least one test.

  • Branch coverage is the percentage of branches that have been taken by at least one test.

    • In an if-then-else construct, the then part is one branch, and the else part is the other branch.

    • In a loop, running the body is one branch and not running the body is the other branch.

    • In a switch (case) statement, each possibility is a branch.

  • Statement coverage and branch coverage combine dynamic analysis (test results) with static analysis (information about the code), so this is sometimes considered a hybrid approach.

  • As a rule of thumb, we believe that an automated test suite with less than 90% statement coverage or less than 80% branch coverage (over all automated tests) is a poor test suite.

  • These test coverage measures warn about statements and branches that are not being tested, and that information can be really valuable. From a security standpoint, coverage measures warn about statements or branches that are not being run in tests, which suggests that some essential tests are missing or the software is not working correctly.
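
A hypothetical illustration of why branch coverage matters: without the second test below, every statement on the happy path runs, but the security-relevant rejection branch is never exercised:

```python
# Sketch: a test suite that only exercises valid input never takes the rejection branch.
def parse_age(value: str) -> int:
    age = int(value)
    if age < 0 or age > 150:          # rejection branch: easy to leave untested
        raise ValueError("age out of range")
    return age

def test_valid_age():
    assert parse_age("42") == 42

def test_rejects_out_of_range_age():  # this test is what lifts branch coverage
    try:
        parse_age("-1")
    except ValueError:
        return
    assert False, "out-of-range age was accepted"

# A coverage tool run with branch measurement (e.g., coverage.py) reports which of
# these branches the test suite actually exercised.
```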

Fuzz testing

Fuzz testing is a different kind of dynamic analysis.

In fuzz testing, we generate many random inputs, run the program, and see if the program misbehaves (e.g., crashes or hangs). A key aspect of fuzzing is that it does not generally check if the program produces the correct answer; it just checks that certain reasonable behaviour (like “does not crash”) occurs.

Fuzzers can be useful for finding vulnerabilities.

  • If we use one, it is often wise to add and enable program assertions. This turns internal state problems - which might not be detected by the fuzzer - into a crash, which a fuzzer can easily detect.

  • If we are running a C/C++ program, you should consider running a fuzzer in combination with address sanitizer (ASAN) - ASAN will turn some memory access problems that would typically quietly occur into a crash, and again, this transformation improves the fuzzer’s ability to detect problems.

  • If we manage an OSS project, you might consider participating in Google’s OSS-Fuzz project. OSS-Fuzz applies fuzzing in combination with various sanitizers to try to detect vulnerabilities. The Fuzzing Project encourages/coordinates applying fuzz testing to open-source software.
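
A minimal fuzzing sketch (not a real coverage-guided fuzzer) showing the core idea: feed many random inputs and check only for reasonable behaviour, not correctness; the parser is hypothetical:

```python
# Sketch: random-input fuzz loop. Real projects would use a coverage-guided fuzzer
# (e.g., via OSS-Fuzz) with sanitizers enabled instead.
import os
import random

def parse_header(data: bytes) -> int:
    """Hypothetical function under test: returns a declared length from a tiny header."""
    if len(data) < 4 or not data.startswith(b"HD"):
        raise ValueError("bad header")         # expected, documented rejection
    return int.from_bytes(data[2:4], "big")

random.seed(0)
for _ in range(100_000):
    blob = os.urandom(random.randint(0, 16))
    try:
        parse_header(blob)
    except ValueError:
        pass                                    # documented rejection is fine
    # Any other exception (or a hang/crash) would indicate a bug worth investigating.
```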

A web application scanner (WAS), also called a web application vulnerability scanner, essentially pretends to be a user or web browser and tries to do many things to detect problems.

Dynamic Application Security Testing

Dynamic Application Security Testing (DAST) has many variations:

Penetration Testing

A penetration test (aka pen test) simulates an attack on a system to try to break into (penetrate) the system. The people doing a penetration test are called penetration testers or a red team; they may be actively countered by a defensive team (also called a blue team). The point of a penetration test is to learn about weaknesses so they can be strengthened before an actual attacker tries to attack the system.

Security Audit

A security audit reviews a system to look for vulnerabilities. Often, the phrase is used to imply a more methodical approach, where designs and code are examined to look for problems. But that is not always true; the terms security audit and penetration test are sometimes used synonymously.

The Core Infrastructure Initiative (CII) Best Practices badge identifies a set of best practices for open-source software (OSS) projects. There are three badge levels: passing, silver, and gold. Each level requires meeting the previous level; gold is especially difficult and requires multiple developers.

Threat Modelling

Threat modeling is the process of examining your requirements and design to consider how an attacker might exploit or break into your system, so that you can try to prevent those problems in the first place. Threat modeling generally focuses on larger systems, where there are clear trust boundaries.

There are many different ways to do threat modeling. For example, where do you start? Different approaches might emphasize starting with:

  • The attacker (what are the attacker’s goals? capabilities? way of doing things?)

  • The assets to be protected

  • The system design.

A related problem is how to do this kind of analysis. Some people create a set of attack trees. Each tree identifies an event an attacker tries to cause, working backwards to show how the event could happen (hopefully, you will show that it cannot happen or is exceedingly unlikely).

Microsoft Threat Modelling

Refer to Microsoft’s threat modeling approach (using STRIDE), which involves five steps:

  • Define security requirements.

  • Create an application diagram.

  • Identify threats.

  • Mitigate threats.

  • Validate that threats have been mitigated.

S2: Create an application diagram

When applying STRIDE in step 2, you need to create a simple representation of your design. Typically, this is done by creating a simple data flow diagram (DFD) (Refer Threat Modeling: 12 Available Methods):

  • Data processes are represented with circles.

  • Data stores are represented with lines above and below their names (you may also see them as cylinders).

  • Data flows are represented with directed lines; these include data flows over a network.

  • Interactors (items that are outside your system and interact with it) typically have simple icons, such as a stick figure for a human.

  • Trust boundaries are represented with a dashed line; these represent the border between trusted and untrusted portions.

  • Elements are everything except the trust boundaries. That is, processes, data stores, data flows, and interactors are all elements.

The idea is to have a simple model of the design that shows the essential features. Here are some quick rules of thumb for a good representation:

  • Every data store should have at least one input and at least one output (“no data coming out of thin air”).

  • Only processes read or write data in data stores (“no psychokinesis”).

  • Similar elements in a single trust boundary can be collapsed into one element (“make the model simple”).

S3: Identify threats

Then, when applying STRIDE in step 3, you examine each of the elements (processes, data stores, data flows, and interactors) to determine what threats it is susceptible to.

Cryptography

For normal software development, there are three key rules for cryptography:

  • Never develop your own cryptographic algorithm or protocol. Creating these is highly specialized. To do a good job, you need a PhD in cryptography, which, in turn, requires advanced college mathematics. Instead, find out what has been publicly vetted by reputable cryptographers and use that.

  • Never implement your own cryptographic algorithms or protocols (if you have an alternative). There are a large number of specialized rules for implementing cryptographic algorithms that do not apply to normal software and are thus not known to most software developers. Tiny implementation errors in cryptographic algorithms often become massive vulnerabilities. Instead, reuse good implementations where practical.

  • Cryptographic systems (such as algorithms and protocols) are occasionally broken. Make sure the ones you choose are still strong enough, and make sure you are prepared to replace them.

Symmetric/Shared Key Encryption Algorithms

A symmetric key or shared key encryption algorithm takes data (called cleartext) and a key as input, and produces encrypted data (called ciphertext). It can also go the other way: using the ciphertext and the same key, it can produce the corresponding cleartext.

Many symmetric key algorithms, including AES, are what’s called block algorithms. With block algorithms, you must also choose a mode to use. Here is the most important rule about modes: Never use Electronic Code Book (ECB) mode!
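
A short sketch, assuming the third-party Python cryptography package, of authenticated symmetric encryption with AES-256-GCM (a non-ECB mode); the nonce must be unique per key and message:

```python
# Sketch using the "cryptography" package: AES-256-GCM authenticated encryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # keep this key secret
aesgcm = AESGCM(key)

nonce = os.urandom(12)                       # 96-bit nonce, unique per message
ciphertext = aesgcm.encrypt(nonce, b"cleartext data", b"optional associated data")
cleartext = aesgcm.decrypt(nonce, ciphertext, b"optional associated data")
assert cleartext == b"cleartext data"
```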

Cryptographic Hashes (Digital Fingerprints)

Some programs need a one-way cryptographic hash algorithm, that is, a function that takes an arbitrary amount of data and generates a fixed-length number with special properties. The special properties are that it must be infeasible for an attacker to create:

  • Another message with a given hash value (preimage resistance)

  • Another (modified) message with the same hash as the first message (second preimage resistance)

  • Any two messages with the same hash (collision resistance)
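
A quick standard-library illustration: a modern cryptographic hash produces a fixed-length digest, and any change to the input yields a completely different digest:

```python
# Sketch: SHA-256 digests via Python's standard library.
import hashlib

digest1 = hashlib.sha256(b"release-1.2.3.tar.gz contents").hexdigest()
digest2 = hashlib.sha256(b"release-1.2.3.tar.gz contents!").hexdigest()
print(digest1)
print(digest2)            # completely different digest for a one-byte change
assert digest1 != digest2
```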

Public-Key (Asymmetric) Cryptography

A public key or asymmetric cryptographic system uses pairs of keys. One key is a private key (known only to its owner) and the other is a public key (which can be publicly distributed).

These algorithms can be used in one or more ways (depending on the algorithm), including:

Encryption

Anyone can encrypt a message using a public key, but only someone with the corresponding private key can decrypt it. Public key encryption algorithms are generally relatively slow, so, in many situations, the public key algorithm is used only to encrypt a key for a shared-key algorithm, and the rest of the message is encrypted with that shared key.

Digital signatures (authentication)

A sender can use a public key algorithm and their private key to provide additional data called a digital signature; anyone with the public key can verify that the sender holds the corresponding private key.

Key exchange

There are public key algorithms that enable two parties to end up with a shared key without outside passive observers being able to determine the key.
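
A sketch assuming the third-party Python cryptography package: an Ed25519 digital signature and an X25519 (elliptic-curve Diffie-Hellman) key exchange:

```python
# Sketch using the "cryptography" package: signing/verification and key agreement.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Digital signature: sign with the private key, verify with the public key.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(b"important message")
signing_key.public_key().verify(signature, b"important message")  # raises if invalid

# Key exchange: each party combines its private key with the other's public key
# and arrives at the same shared secret without ever transmitting it.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())
```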

Refer Stop using RSA

A whole family of algorithms are called elliptic curve cryptography; these are algorithms that are based on complex math involving elliptic curves. These algorithms require far shorter key lengths for equivalent cryptographic strength, and that is a significant advantage.

The Digital Signature Standard (DSS) is a standard for creating cryptographic digital signatures. It supports several underlying algorithms: Digital Signature Algorithm (DSA), the RSA digital signature algorithm, and the elliptic curve digital signature algorithm (ECDSA).

There are also a variety of key exchange algorithms. The oldest is the Diffie-Hellman key exchange algorithm. There is a newer key exchange algorithm based on elliptic curves, called Elliptic Curve Diffie-Hellman (ECDH).

Cryptographically Secure Pseudo-Random Number Generator (CSPRNG)

Many algorithms depend on secret values that cannot be practically guessed by an attacker. This includes values used by cryptography algorithms (such as nonces), session ids, and many other values. If an attacker can guess a value, including past or future values, many systems become insecure.

There are many pseudo-random number generator (PRNG) algorithms and implementations, but, for security, you should only use PRNGs that are cryptographically secure PRNGs (CSPRNGs). CSPRNGs are also called cryptographic PRNGs (CPRNGs). A good CSPRNG prevents practically predicting the next output given past outputs (at greater than random chance) and it also prevents revealing past outputs if its internal state is compromised.

Software developers for IoT devices should not access the hardware registers directly, but should instead call well-crafted CSPRNG generators that correctly use hardware sources (preferably multiple sources) as inputs into their internal entropy pool. In most cases, IoT developers should use an IoT operating system that includes a CSPRNG implementation that is correctly seeded from multiple hardware sources, and simply check whether it appears to be carefully written for security. Where that is not practical, use a well-crafted and analyzed CSPRNG library that includes correct software to extract random values from your hardware; do not implement your own crypto unless you are an expert in cryptography. IoT software developers should also run statistical tests on their random number generation mechanism to ensure that its outputs are random, because this is an especially common problem in IoT devices. Refer: You’re Doing IoT RNG.

Make sure you use a strong, properly-implemented cryptographically secure pseudo-random number generator (CSPRNG), seeded with multiple hardware values, every time you need a value that an adversary cannot predict.
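
In Python, for example, the standard-library secrets module provides a CSPRNG interface seeded by the operating system; a minimal sketch:

```python
# Sketch: use "secrets" (never "random") for keys, tokens, nonces, and session ids.
import secrets

session_id = secrets.token_urlsafe(32)   # unpredictable URL-safe token
nonce = secrets.token_bytes(12)          # unpredictable raw bytes
pin = secrets.randbelow(1_000_000)       # unpredictable integer in [0, 1000000)
print(session_id, nonce.hex(), f"{pin:06d}")
```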

Storing Passwords

A common need is that you are implementing a service and/or server application, and you need the user to authenticate and/or prove that they are authorized to make a request. This is called inbound authentication. Here are three common approaches for doing this:

  • Delegate this determination to some other service. You need to trust that other service, and you need a specification for communicating this. OAUTH and OpenID are two common specifications for making the request to the other service. Generally, you would call on a routine to implement this; make sure you apply its security guidance. This can be convenient to users, but remember that this reveals every login to that external service (a privacy concern), and make sure you can trust that service.

  • Require the requestor to have a private key that proves their identity. SSH and HTTPS both support this. A great advantage of this approach is that, at the server end, only a public key needs to be recorded, so, while integrity is important, the confidentiality of the keys is not as critical. However, this requires that the user set up this private key.

  • Support a password-based login (at least in part).

  • If you are using passwords for inbound authentication, for security, you must use a special kind of algorithm for this purpose called an iterated per-user salted cryptographic hash algorithm; such algorithms are also called key derivation functions. Three algorithms are commonly used as an iterated per-user salted cryptographic hash algorithm (a minimal usage sketch follows this list):

    • Argon2id: Unless you have a strong reason to use something else, this is the algorithm to use today. It is relatively strong against both software and hardware-based attacks.

    • Bcrypt: This is a decent algorithm against software-based attacks. It is harder to attack with hardware than PBKDF2 (because bcrypt requires more RAM), but it is weaker against hardware-based attacks than Argon2id.

    • PBKDF2: This is a decent algorithm against software-based attacks, but it is the most vulnerable of these widely-used algorithms to hardware-based attacks from specialized circuits or GPUs. That is because it can be implemented with a small circuit and little RAM. You may not need to replace it (depending on the kinds of attackers that concern you), but it is probably best to avoid this for new systems today.

    • Another algorithm that is in use is scrypt. This should also be strong against hardware attacks, but it has not gotten as much review compared to Argon2id.

  • You should allow users to require the use of two-factor authentication (2FA), either directly or by delegating to a service that does.
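
The minimal usage sketch referenced above, assuming the third-party argon2-cffi package (which implements Argon2id); only the resulting hash string is stored, never the password:

```python
# Sketch using the "argon2-cffi" package: store an Argon2id hash (which embeds a
# per-user salt and the cost parameters) and verify login attempts against it.
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()                                    # Argon2id with library defaults

stored_hash = ph.hash("correct horse battery staple")    # store this string per user

def login_ok(stored: str, attempt: str) -> bool:
    try:
        ph.verify(stored, attempt)
        return True
    except VerifyMismatchError:
        return False

print(login_ok(stored_hash, "correct horse battery staple"))  # True
print(login_ok(stored_hash, "guess"))                          # False
```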

Transport Layer Security

Transport Layer Security (TLS) is a widely-used cryptographic protocol to provide security over a network between two parties. It provides privacy and integrity between those parties.

  • To use TLS properly, the server side at least needs a certificate (so it can prove to potential clients that it is the system it claims to be). You can create a certificate yourself and install its public key on each client (e.g., web browser) who will connect to that server. That is fine for testing, but in most other situations, that is too complicated. In most cases (other than testing), you should get a certificate assigned by a certificate authority. You can get free certificates from Let’s Encrypt.

  • When clients connect to a server using TLS, the client normally needs to check that the certificate is valid. Web browsers have long worked this out; web browsers come with a configurable set of certificate authority public keys (directly or via the operating system) and automatically verify each new TLS connection.

  • Beware: If you are using your own client, instead of using a web browser, double-check that you are using the TLS library API correctly. Many TLS library APIs do not fully verify the server’s TLS certificate automatically. For example, they may allow connections to a server when there is no server certificate, they may allow any certificate (instead of a certificate for the site you are trying to connect to), or allow expired certificates. This is an extremely common mistake (The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software, by Martin Georgiev, 2012). If this is the case, you may be using a low-level TLS API instead of the API you should be using.
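
A standard-library sketch of doing this correctly in Python: ssl.create_default_context() verifies the certificate chain and hostname, which is exactly what many low-level TLS APIs skip by default:

```python
# Sketch: a TLS client that actually verifies the server's certificate and hostname.
import socket
import ssl

context = ssl.create_default_context()           # loads trusted CAs, enables verification

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())                # e.g., "TLSv1.3"
        print(tls_sock.getpeercert()["subject"]) # details of the verified certificate
```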

Ciphersuites

TLS, as a protocol, combines many of the pieces we have discussed. At the beginning of communication, the two sides must negotiate to determine the set of algorithms (including key lengths) that will be used for its connection. This set of algorithms is called the ciphersuite. That means that, for security, it is important to have good default configurations and to have the software configured correctly when deploying it.

Perhaps most important, however, are the key pieces of advice: do not create your own cryptographic algorithms or protocols, and do not create your own implementations. Instead, reuse well-respected algorithms, protocols, and implementations. When configuring cryptography, look for current well-respected advice. Examples of such sources include Mozilla’s Security/Server Side TLS site, NIST (especially NIST’s Recommendation for Key Management: Part 1 - General), and CISCO’s Next Generation Cryptography.

Constant Time Algorithms

Cryptographic code often needs constant-time algorithms, especially constant-time comparisons. Many algorithms take a variable amount of time depending on their data. For example, if you want to determine whether two arrays are equal, that comparison would usually stop at the first unequal value.

The normal comparison operations (such as is-equal) try to minimize execution time, and this can sometimes leak timing information about the values to attackers. If an attacker can repeatedly send in data and notice that a comparison of a value beginning with “0” takes longer than one that does not, then the first character of the secret value being compared against must be “0”. The attacker can then repeatedly guess the second character, then the third, and so on.

Constant-time comparisons are comparisons (usually equality) that take the same time no matter what data is provided to them. These are not the same as O(1) operations in computer science. Examples of these constant-time comparison functions are:

  • Node.js: crypto.timingSafeEqual

  • Ruby on Rails: ActiveSupport::SecurityUtils secure_compare and fixed_length_secure_compare

  • Java: MessageDigest.isEqual (assuming you are not using an ancient version of Java).

Whenever you compare secret values or cryptographic values (such as session keys), use a constant-time comparison instead of a normal comparison, unless an attacker cannot exploit the normal comparison timing.
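
A standard-library sketch in Python, which provides hmac.compare_digest for this purpose:

```python
# Sketch: constant-time comparison of secret values.
import hmac

def token_matches(expected: bytes, presented: bytes) -> bool:
    # hmac.compare_digest compares in constant time, so timing does not leak
    # how many leading bytes of the secret were guessed correctly.
    return hmac.compare_digest(expected, presented)

print(token_matches(b"s3ss10n-key", b"s3ss10n-key"))  # True
print(token_matches(b"s3ss10n-key", b"guess"))         # False
```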

Minimizing the Time Keys/Decrypted Data Exists

As per least privilege, we want to minimize the time a privilege is active. In cryptography, you often want to minimize the time a private key or password is available, or at least minimize the time that the decrypted data is available. This can be harder than you might think. At the operating system level, you can probably lock the data into memory with mlock() or VirtualLock(); this will at least prevent it from being copied into storage. Ideally, you would erase it from memory after use, though that is often surprisingly difficult: compilers may turn overwriting code into a no-op because they detect that nothing reads the overwritten values, and languages with built-in garbage collection often quietly make extra copies and/or do not provide a mechanism for erasure. That said, some languages or infrastructure do make this easy. For example, those using the .NET framework (e.g., C#) can use SecureString.

Incident Response and Vulnerability Disclosure

The nonprofit Forum of Incident Response and Security Teams (FIRST) defines a PSIRT as “an entity within an organization which focuses on the identification, assessment and disposition of the risks associated with security vulnerabilities within the products, including offerings, solutions, components and/or services which an organization produces and/or sells” (FIRST: Product Security Incident Response Team (PSIRT) Services Framework and Computer Security Incident Response Team (CSIRT) Services Framework). FIRST recommends that PSIRTs be formed while requirements are still being developed, but they should at least be created before the initial release of the software. A properly-running PSIRT can identify and rapidly respond to an extremely serious vulnerability report.

PSIRTs often work with computer security incident response teams (CSIRTs); a CSIRT focuses on the security of the computer systems and/or networks that make up an entire organization’s infrastructure, while a PSIRT focuses on specific products/services. Should you have one (or want to establish one), FIRST provides useful frameworks describing what PSIRTs and CSIRTs should do within an organization (FIRST).

A simple, short guide is the OWASP Vulnerability Disclosure Cheat Sheet, which provides helpful guidance for security researchers (who find security vulnerabilities) and organizations (who receive vulnerability reports).

Many other valuable documents discuss vulnerability disclosure. In particular:

  • The CERT Guide to Coordinated Vulnerability Disclosure (in its terminology, the vendor is the organization that releases the software and needs to learn about the security vulnerability).

  • FIRST’s Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure.

  • There is an Open Source Security Foundation (OpenSSF) working group on vulnerability disclosures, which may in the future provide additional guidance: Vulnerability Disclosures Working Group.

In one sense, accepting vulnerability reports is easy: decide on a reporting convention and make that information easy to find. Here are some standard conventions:

  • Many companies and projects support an email address of the form security@example.com or abuse@example.com.

  • A standard convention in OSS projects is to provide this information in a file named SECURITY.md in the repository’s root or docs/ directory.

    • Sites like GitHub highlight this file when it is present and encourage its creation. Add a link from your README.md file to this SECURITY.md file.

    • If the project has a website, a standard recommendation is to add a security.txt file on the website at /security.txt or /.well-known/security.txt. To learn more, visit securitytxt.org.
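
A hypothetical security.txt might look like the following (the addresses and dates are placeholders; see securitytxt.org / RFC 9116 for the full field list):

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:00.000Z
Preferred-Languages: en
Policy: https://example.com/security-policy
```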

Monitor for Vulnerabilities, Including Vulnerable Dependencies

Monitor for vulnerabilities in the software and in all libraries embedded in it. Google Alerts can be used to watch various news sources for reports about the software. Use a software composition analysis (SCA) / origin analysis tool to alert you about newly-found, publicly-known vulnerabilities in your dependencies.

A software bill of materials (SBOM) is a nested inventory that identifies the components that make up a larger piece of software. When an SBOM is available for a component you use, it is often easier to use that data to help detect known vulnerabilities. Many ecosystems have ecosystem-specific SBOM formats. Some SBOM formats support arbitrary ecosystems: Software Package Data Exchange (SPDX), Software Identification (SWID) tags, and CycloneDX.
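
For instance, a minimal CycloneDX SBOM in JSON might look roughly like this (a hedged sketch; the component shown is a placeholder, and real SBOMs usually carry much more detail such as package URLs, hashes, and licenses):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-parsing-library",
      "version": "2.3.1"
    }
  ]
}
```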

Bug Bounty Program

A widely-used technique to encourage vulnerability reporting is a bug bounty program, in which companies pay reporters for reports of significant defects. This can be a cost-effective way to encourage people to report vulnerabilities once the relatively “easy-to-find” vulnerabilities have been found and fixed. If you do not want to manage such a program yourself, various companies will run one for a fee.

Be sure to clearly establish the scope and terms of any bug bounty program (OWASP Vulnerability Disclosure). Specify what the company will pay for, including a minimum and maximum range, e.g., “X–Y for a vulnerability that directly leads to remote code execution without requiring login credentials.” If there is a maximum that the company can spend in a year, say so, and indicate the total amount, the calendar used, and what will happen to reports after the annual funding is used up. Also, make it clear who is ineligible, e.g., the software’s developers and/or employees of the companies that develop the software.

However, beware: a bug bounty program can be an incredible waste of money unless the easy-to-find vulnerabilities are found and fixed first. As Katie Moussouris has noted, “Not all bugs are created equal”; many defects (such as most XSS defects) are easy to detect and fix, and “you should be finding those bugs easily yourselves too.” Using a bug bounty program to find easy-to-find vulnerabilities is highly costly and “is not appropriate risk management.” She even noted a case where a company paid a security researcher $29,000/hour to find well-known simple defects. Find and fix the simple bugs first; then a bug bounty program may make sense (‘Relying on bug bounties not appropriate risk management’: Katie Moussouris, by Stilgherrian, 2019).

Of course, once a vulnerability report is received, it must be responded to and fixed promptly. OWASP recommends the following (OWASP Vulnerability Disclosure):

  • Respond to reports in a reasonable timeline.

  • Communicate openly with researchers.

  • [Do] not threaten legal action against researchers.

  • You need to be able to triage vulnerability reports quickly; some reports won’t apply to your software or are not vulnerabilities.

  • It is pretty common to need to ask further questions to understand the vulnerability.

Limiting Disclosure and the FIRST Traffic Light Protocol (TLP)

When discussing a vulnerability, it is often necessary to discuss detailed information, yet simultaneously tell people to limit disclosure of some information for a period of time.

FIRST developed a simple marking system, the Traffic Light Protocol (TLP), that is often used to indicate with whom information may be shared. Here is a summary. The TLP has four color values to indicate sharing boundaries, which are placed as follows:

  • In email: the TLP color is placed in the subject line and in the body, before the designated information.

  • In documents: the TLP color is placed in the header and footer of each page, typically right-justified.

The TLP color is shown in all-caps after “TLP:”, so you will see TLP:RED, TLP:AMBER, TLP:GREEN, or TLP:WHITE. These colors have the following meanings:

    • TLP:RED = Not for disclosure, restricted to participants only.

    • TLP:AMBER = Limited disclosure, restricted to participants’ organizations.

    • TLP:GREEN = Limited disclosure, restricted to the community.

    • TLP:WHITE = Disclosure is not limited.
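
As a hypothetical illustration of the email convention (the product name and details are placeholders):

```
Subject: TLP:AMBER  Heap overflow in ExampleWidget 2.3 configuration parser

TLP:AMBER

Details of the vulnerability, affected versions, and proposed embargo date ...
```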

Get a CVE and Compute CVSS

Where appropriate, request a CVE identifier if one has not already been requested (OWASP Vulnerability Disclosure). Typically, you would start this process once you have verified that the report really describes a vulnerability, and do it in parallel with fixing it. If you request a CVE, you should also calculate the vulnerability’s Common Vulnerability Scoring System (CVSS) score; CVSS is a rough estimate of a vulnerability’s severity.
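
As a purely illustrative example (not from the source): a vulnerability that is exploitable over the network, has low attack complexity, requires no privileges or user interaction, and has high confidentiality, integrity, and availability impact would have the CVSS v3.1 vector below, which corresponds to a base score of 9.8 (Critical):

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
```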

Release the Update and Tell the World

Once the fix is ready, release it. You will need to tell the world the software is fixed, and do all you can to encourage rapid uptake of the fixed version. OWASP recommends that suppliers publish clear security advisories and changelogs, and also that suppliers offer credit to the vulnerability finder (OWASP Vulnerability Disclosure).

If there are workarounds that can be applied without updating the software, be sure to note those. This is particularly important if:

  • there are likely to be many users who will be unable to update their software promptly, or

  • the vulnerability is publicly known but the patch will not be available for some time.

Ensure that it is easy to update to the fixed version of the software automatically. If your software platform does not provide automated patch releases or installation, consider implementing one yourself. Users need to be able to receive fixes quickly and automatically unless they have expressly opted out of updates.

Be sure to always credit and thank vulnerability reporters unless they request otherwise. It is rude not to provide credit, and many vulnerability reporters provide reports primarily to receive it. Worse, reporters may not cooperate in the future if they do not receive appropriate credit.

Sending Vulnerability Reports to Others

The OWASP Vulnerability Disclosure Cheat Sheet recommends that security researchers (who find security vulnerabilities) should:

  • Ensure that any testing is legal and authorized.

  • Respect the privacy of others.

  • Make reasonable efforts to contact the security team of the organization.

  • Provide sufficient details to allow the vulnerabilities to be verified and reproduced.

  • Not demand payment or rewards for reporting vulnerabilities outside of an established bug bounty program.

Reporting a vulnerability that you have found can be surprisingly complicated. If there is a single supplier, you can report to just that supplier, but sometimes there are multiple suppliers and other stakeholders involved. There are also various ways you can choose to report a vulnerability.

Reporting Models

There are several different kinds of disclosure models:

  • Private Disclosure: “In the private disclosure model, the vulnerability is reported privately to the organization. The organization may publish the details of the vulnerabilities, but this is done at the discretion of the organization, not the researcher, meaning that many vulnerabilities may never be made public. The majority of bug bounty programs require that the researcher follows this model. The main problem with this model is that if the vendor is unresponsive or decides not to fix the vulnerability, the details may never be made public. Historically this has led to researchers getting fed up with companies ignoring and trying to hide vulnerabilities, leading them to the full disclosure approach.” (OWASP Vulnerability Disclosure).

  • Full Disclosure: “With the full disclosure approach, the full details of the vulnerability are made public as soon as they are identified. This means that the full details (sometimes including exploit code) are available to attackers, often before a patch is available. The full disclosure approach is primarily used in response to organizations ignoring reported vulnerabilities, to pressure them to develop and publish a fix. This makes the full disclosure approach very controversial, and many people see it as irresponsible. Generally, it should only be considered a last resort, when all other methods have failed, or when exploit code is already publicly available.” (OWASP Vulnerability Disclosure). Another reason to consider full disclosure is if there is reason to believe that the supplier is intentionally malicious; reporting a vulnerability only to a malicious supplier gives that supplier more time to exploit the vulnerability.

  • Coordinated Disclosure (historically called Responsible Disclosure): Coordinated disclosure “attempts to find a reasonable middle ground between these two approaches. … the initial report is made privately, but with the full details being published once a patch has been made available (sometimes with a delay to allow more time for the patches to be installed).” (OWASP Vulnerability Disclosure). Historically, this has been called responsible disclosure, but that is a biased term; its original coiner now recommends calling it coordinated disclosure instead. There should be a time limit after which the vulnerability will be disclosed unilaterally; without such a time limit, this is essentially private disclosure, and the supplier may have little incentive to fix the vulnerability.

  • Disclosure to Attackers: Some researchers work for organizations that attack others’ systems. Other researchers sell vulnerabilities to such organizations, or to brokers who then sell the vulnerabilities on. Doing this is controversial, particularly when the vulnerabilities are sold to brokers who do not disclose exactly who is buying them. The impact of doing this varies, because a great variety of organizations pay for vulnerabilities, including law enforcement in various countries, militaries in multiple countries, organized crime, and/or terrorist groups. Anyone who provides vulnerabilities to attackers should consider the ethical implications. In particular, you should consider what the attackers will likely do with these vulnerabilities. Do you have confidence that the attackers will not use the vulnerabilities in contravention of human rights? Will they harm certain people or groups, such as ethnic minorities, political dissidents, or journalists? If you disclose vulnerabilities to attackers, you are supporting the use of those vulnerabilities to attack others; you should be confident that they will be used for good.

A good source for more information is FIRST’s Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure. Historically, many documents have focused on simple bilateral coordination between a security researcher and a software supplier, but today there are often complexities due to the need for multi-party coordination.

Assurance

A practical approach is to create an assurance case. An assurance case “includes a top-level claim for a property of a system or product (or set of claims), systematic argumentation regarding this claim, and the evidence and explicit assumptions that underlie this argumentation” (ISO/IEC 15026-2:2011). Put another way, an assurance case includes:

  • Claim(s): Top-level claim(s) for a property of a system or product. That is, something that you want to be true.

  • Arguments: A systematic argumentation justifying this claim.

  • Evidence/assumptions: Evidence and explicit assumptions underlying the argument.
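
A tiny, purely hypothetical fragment of an assurance case might look like this (the claim, argument, evidence, and assumption shown are illustrative only):

```
Claim:      User passwords are protected from disclosure.
Argument:   Passwords are never stored in recoverable form, and all password
            handling paths use an approved iterated salted hash.
Evidence:   Code review of the authentication module; unit tests verifying the
            hashing routine; configuration showing the approved algorithm.
Assumption: The underlying cryptographic library correctly implements the hash.
```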

OWASP Security Knowledge Framework (OWASP-SKF)

The OWASP Security Knowledge Framework (OWASP-SKF) is an open source web application that explains secure coding principles in multiple programming languages. The goal of OWASP-SKF is to help developers learn and integrate security by design in software development and build applications that are secure by design. OWASP-SKF does this through manageable software development projects with checklists (using OWASP-ASVS/OWASP-MASVS or custom security checklists) and labs to practice security verification (using SKF-Labs, OWASP Juice Shop, and best-practice code examples from SKF and the OWASP-Cheatsheets).