State of Software Security: Volume 11
Veracode Report
Contents

SECTION ONE
Executive Summary
The State of Software Security at a glance
Nature vs. nurture

SECTION TWO
Current State of Software Security
How flawed are the applications?
Which flaws are more common?
How are applications scanned?

SECTION THREE
The Tale of Open Source Flaws

SECTION FOUR
Fixing Software Security

SECTION FIVE
Conclusion
Appendix: Methodology
SECTION ONE
Executive Summary
"Every company is a software company." 1

1 We had a tough time attributing the first appearance of this statement, but a likely candidate is: Kirkpatrick, David. "Now Every Company Is a Software Company." FORBES 188.11 (2011): 98+.
The vast majority of applications (76 percent) have some sort of security flaw, but only a minority (24 percent) have high-severity flaws.

THE COMBINATION CLOSES FLAWS FASTER
While many teams focus on static analysis, dynamic scanning can uncover types of flaws that might be hard for static analysis to find. And even though adding dynamic application security testing (DAST) will cause more flaws to be discovered, teams that combine dynamic scans with static scans end up closing more flaws faster.
Two-thirds of applications (67 percent) are either maintaining or reducing the total amount of observed security flaws between their first and last scan.

FLAW TYPE
2 CRLF Injection
3 Cryptographic Issues
4 Code Quality
5 Credentials Management
Nature vs. nurture
We found there are some factors that teams have a lot of control over, and those they
have very little control over — we’re thinking of them as “nature vs. nurture.” On one
side, the “nature” side, we looked at factors developers have very little control over —
size of the application and organization, security debt,2 and others. On the other side,
the “nurture” side, we looked at factors that developers have direct control over, such
as scanning frequency and cadence and scanning via API.
EXPECTED CHANGE IN HALF-LIFE (Remediate More/Faster vs. Remediate Less/Slower)
The goal of software security isn't to write applications perfectly the first time, but to remediate the flaws in a comprehensive and timely manner. We know that it is easier to find and fix issues in applications that have less coding baggage — small application size, using modern languages and frameworks — but even with the "baggage," development teams that use secure coding practices, such as frequently scanning for flaws, integrating and automating security checks, and taking a broader look at the application's health, are more likely to have better success with their secure software development efforts.

2 Security debt actually straddles both nature and nurture. Developers may inherit debt (nature), but it is a choice whether to accumulate it or pay it down (nurture).
Current State of Software Security

"This one goes up to 11." NIGEL TUFNEL, SPINAL TAP
Last year, Veracode celebrated 10 editions of the State of Software Security report. This year, we figured we might as well turn it up to 11. Over that decade++, the State of Software Security report has grown as software security has grown. Veracode has seen exponential growth in applications scanned this year compared to the first edition in 2009 (over 130,000 applications this year). But the number of applications isn't the only thing that's grown in 11 years. New languages and frameworks have appeared, and old standbys have risen, fallen, and risen again. Development practices have evolved. New threats and pitfalls rear their ugly heads. This report has always kept pace with the shifting sands of secure application development, and this year is no different.

This year, we are also expanding the scope of the data we are analyzing. In previous volumes, we looked at the active development of applications in a one-year time frame. This year, we are going back in time a bit further, and looking at the complete history of applications that were actively developed in the past year. So we'll get a fuller view of the origin story of an application, along with all its flaws.

With Volume 10, we spent some time looking at how much things had changed in the decade spanning Volume 1 to Volume 10. With Volume 11, we are going to look forward and consider the direction software development is headed. We are not trying to decide if we are doing better or worse than before, but looking at what kind of impact the decisions developers make have on software security.

We asked some of the same questions: how common are application flaws? Which flaws are more common? But we also dug deeper in some areas than we have in the past, such as examining third-party libraries in applications. It may not make sense to directly compare this report with previous volumes because of the underlying differences in the data and findings, but there is plenty of insight that developers and application security teams can use to make decisions on how to improve their applications.
Applications with at least one flaw on the OWASP Top 10: 65.8%. Applications with at least one flaw on the SANS 25: 58.8%.
THE GOOD NEWS
It appears we are moving in the right direction when we consider the severity of the flaw. There are fewer applications with severe flaws than ordinary run-of-the-mill flaws.

The revelation that most applications have some form of flaw should not be earth-shattering to anyone reading this report. Even so, we want to be clear that having a flaw in the application is just part of the story. We know that developers treat different types of flaws differently. Some flaws are fixed quickly, while some are considered less severe and can be moved to the back burner. It's instructive to compare applications based on how many have severe flaws.3

The good news is that it appears we are moving in the right direction when we consider the severity of the flaw: There are fewer applications with severe flaws than ordinary run-of-the-mill flaws. Sixty-six percent of applications have at least one flaw that appears on the OWASP Top 10, and 59 percent of applications have at least one flaw that appears on the SANS 25. After the most recent scan, 24 percent contain high-severity flaws (those rated by Veracode as level 4 or 5), which is a slight increase from the previous report's 20 percent, but still within range of past years' results.

A message that we've previously shared, but it bears repeating: This is a good sign. Most applications have flaws, but not all flaws are catastrophic, and the more severe the flaws are, the more likely it is that any particular application will be free of them. A little over three-quarters of the applications may have at least one flaw, but most of them aren't the critical issues that pose serious risks to the application.
3 We take several different approaches to viewing the severity of a flaw. The OWASP Top 10 (2017) lists the most common critical flaws in web applications, and the SANS 25 (recently renamed to CWE/SANS Top 25) lists common critical flaws found in modern software development. Lastly, we assign our own severity rating (a scale of 1 to 5) based on the flaw type and language. Developers can adjust that severity rating manually, since they are the ones with the context on how a flaw would impact their application.
With flaw density, we observe a trend similar to what we saw in applications with
flaws. Flaw density is lower when we focus on high-severity flaws. Figure 2 tells us
that applications have problems that need to be fixed, but most of them are not
riddled with catastrophic issues.
Figure 2: Flaw density by category (Any Flaws, OWASP Flaws, SANS Flaws, Critical Flaws).
4 We understand that this is not a perfect apples-to-apples comparison for all applications. For example, different languages are more or less verbose when producing semantically identical code.
Developers and security teams rely on these lists to figure out which flaws are considered the highest risk and to prioritize getting them fixed. Injection flaws make up the first item in the OWASP Top 10 Web Application Security Risks, and with good reason, as our chart shows. CRLF injection was found in more than 65 percent of applications with a flaw, and SQL injection was among the 10 most common flaws found.
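To make the flaw class concrete, here is a small, hedged sketch (ours, not code from the report's dataset) of CRLF injection in the form of log forging, along with a simple mitigation:

    # Illustrative sketch only: CRLF injection via log forging, and one mitigation.
    # The function and field names are hypothetical.
    import logging

    logging.basicConfig(filename="app.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    def record_login(username: str) -> None:
        # FLAW: if username contains "\r\n", an attacker can forge extra,
        # legitimate-looking log lines (e.g. a fake "admin logged in" entry).
        logging.info("login attempt for user %s", username)

    def record_login_safe(username: str) -> None:
        # Mitigation: neutralize CR/LF before untrusted data reaches the log.
        sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
        logging.info("login attempt for user %s", sanitized)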
CRYPTOGRAPHIC ISSUES
As developers are increasingly tasked with protecting data in transit and in storage, there are opportunities to make mistakes in how they handle cryptography. Cryptographic issues include a variety of weak password mechanisms, weak pseudorandom number generators, and generally bad cryptography implementations — many of which are the result of using outdated cryptographic libraries, or of trying to roll their own.5 Implementing cryptography incorrectly can be just as problematic for the application — if not more so — than not having any cryptography at all.
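As a hedged illustration of one weakness in this category (our example, not one drawn from the report's data), a token generated with a general-purpose pseudorandom number generator is predictable, while the standard library's CSPRNG-backed helpers are not:

    # Illustrative sketch only: weak vs. suitable randomness for a security token.
    import random
    import secrets

    def reset_token_weak() -> str:
        # FLAW: random uses a Mersenne Twister, which is not cryptographically
        # secure; its output can be reconstructed from enough observed values.
        return "".join(random.choice("0123456789abcdef") for _ in range(32))

    def reset_token_strong() -> str:
        # Uses the operating system's CSPRNG via the secrets module.
        return secrets.token_hex(16)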
CODE QUALITY
Code quality is a tricky category, since it refers to weaknesses that indicate the application has not been carefully developed or maintained but that do not directly introduce a vulnerability in the application. Code quality is an issue because it causes the application to behave unpredictably, and that erratic behavior can be abused.
UNCOMMON FLAWS
What is heartening is that flaws that we might think of as particularly damaging are also
relatively uncommon. Less than 5 percent of applications have the types of flaws (buffer
management, buffer overflow, code injection, etc.) we could expect to be abused and
lead to remote code execution or other problematic results. Part of that is because many
modern languages and frameworks have built-in capabilities to address whole classes
of flaws. The shift away from C++ in newer applications means fewer buffer management
flaws, broadly speaking. Using higher-level languages (or language frameworks) and
standardized libraries makes it easier for developers to avoid certain types of flaws.
5 In these crazy times, it may seem nigh impossible to come together as a community and agree on a single, universal truth. But for the good of our society (and application security), let's all agree that nobody should be rolling their own crypto.
Figure: Percentage of applications with each flaw category (0% to 60%) vs. median flaw density when observed (log scale, 0.1 to 10.0). Information leakage, CRLF injection, and cryptographic issues are all in the rightmost part of the figure because they exist in many applications, and hoo-boy, when they do, they appear a lot.
Figure 5: Percentage of applications with various CWE types in static vs. dynamic scanning
DYNAMIC SCANNING
Some flaws become more prevalent when dynamic scanning is used in conjunction with static analysis.

These are complementary methods and should not be considered subsets of or replacements for each other, as each brings its own strengths to application security. Think about annual health checkups. You get bloodwork done and have a physical because they look for different things. You don't assume you are healthy on the basis of one test; you wait for all your test results.

Figure 5 highlights how much deeper the scanning goes when dynamic scanning is added to the static scanning that is already being done. Some flaws become more prevalent when dynamic scanning is used in conjunction with static analysis. Both static scanning and dynamic scanning can find issues such as using sensitive cookies in HTTPS sessions without the secure attribute, but dynamic scanning is likely to find them much more frequently than static analysis.
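As a hedged sketch of the cookie issue mentioned above (the framework and handler are our choice, not the report's), marking a sensitive cookie with the Secure and HttpOnly attributes keeps it off plain HTTP and away from script:

    # Illustrative sketch only; route and cookie names are hypothetical.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login")
    def login():
        resp = make_response("logged in")
        # FLAW described above: setting the cookie with no flags lets it travel
        # over plain HTTP and be read by JavaScript.
        # Safer: restrict the cookie to HTTPS and hide it from script.
        resp.set_cookie("session", "opaque-session-id",
                        secure=True, httponly=True, samesite="Lax")
        return resp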
Dynamic scanning will also uncover issues that are not part of your code, but rather in how the environment is set up. An application exposing sensitive information through environment variables is a problem that exists in 60 percent of applications. It is a flaw that will not be uncovered if the developers are relying only on static analysis.
The Tale of Open Source Flaws
Earlier this year, we published a “spinoff” version
of the SOSS focused on open source flaws. We’ll
likely do that again but felt compelled to include
some statistics related to open source code in this
report as well. Even if developers wave a magic
wand and voila! all the flaws we’ve discussed so far
disappear from their own code, that doesn’t mean
applications would become flaw-free. It’s never
that easy in software security.
For example, Java applications (shown at the top of Figure 6) cluster to the right,
indicating that they tend to be almost all third-party code — and indeed, the typical
Java application is 97 percent third-party code! However, that pattern does not emerge
with other languages. JavaScript and Python applications cluster at both ends, much
like a barbell — so from a pure code-volume perspective, applications tend to be mostly
homegrown or composed mostly of third-party libraries. C++ and PHP cluster completely
in the opposite direction of Java, indicating the codebase is mostly homegrown. Only
.NET applications seem to be fairly spread out, suggesting developers tend to be a bit
more flexible in how libraries are used.
Figure 6: Proportion of third-party code in applications, by language (Java, .NET, JavaScript, Python, C++, PHP).
The ubiquity of open source libraries was evident when we released the State of Software Security: Open Source Edition report. However, there was something else we learned there: about seven in every 10 applications were found to have flaws in their open source libraries (on initial scan). This alone should warrant adding software composition analysis to any software security program. But we can take this one step further this time around. We looked at how many flaws were found in open source libraries and compared that to how many flaws were found in the primary application (code written in-house), and we found about three in every 10 applications have more flaws in their open source libraries than in the primary code base.

One last little insight we found here: There is almost no correlation between the flaw density in open source libraries and the flaw density in the primary application. This means that it's possible to have a very well buttoned-up application, yet vulnerabilities may still be exposed through its third-party libraries.
Fixing Software Security
It is inevitable that software will have flaws, so
until now we’ve focused on understanding what it
means when we say that applications have flaws.
However, accepting there will be flaws does not
mean there is nothing that can be done. Indeed,
many companies (including Veracode!) make it
their business to help developers write more
secure code. Software security depends on how
development and application security teams
address the issues that exist in the applications.
We look at the question of how applications are
fixed from multiple perspectives.
Figure 8: Fix rate for various severity types (SANS 25: 77.9%; OWASP Top 10: 76.0%).
One thing to remember about the report is that we are comparing the application’s
first scan results with the latest one within a 12-month period (April 2019 to March 2020).
While it’s heartening to know that nearly three out of four flaws are being closed, bug
hunting becomes a game of whack-a-mole if new bugs are being introduced at the
same pace as the fixes being made. The chart comparing flaw density between the
first and last scans illustrates whether the fixes are being made faster than new
ones are introduced.
6 The increase can be explained partly by the fact that we changed how we looked at the data in this year's report. In previous years, we looked only at flaws that were active during the report period (including everything still open from before the year). However, for this report, we analyzed the full history of active applications, so the number of closed flaws reflects all the flaws, including those that were closed before the report period.
While the number of flaws in an application ebbs and flows over time, for the majority of applications the overall flaw density is decreasing over the course of development. Generally speaking, more applications reduced their flaw density, as half of the applications had fewer flaws on the latest scan than on the first scan. Flaw density was higher for 34 percent of the applications, suggesting the development teams were not prioritizing fixing flaws as they went along, but were perhaps saving them for later. That picture gets sharper when considering the seriousness of the flaws. When looking at only high-severity flaws, roughly twice as many applications reduced the overall flaw density (23 percent) as increased it (12 percent).
PRIORITIZATION
Figure 9: Flaw reduction for various flaw types (percent of applications; all flaws and high-severity flaws).
Regional differences

We wondered if there were differences in how quickly flaws were being fixed across different geographies.

While there are some variations, for the most part, developers don't change their behavior based on where they are located. Roughly three out of four flaws are being closed in the EMEA (Europe, Middle East, and Africa) region, as well as in the Americas (North America, Central America, and South America). Closer to three out of five flaws are being fixed in APAC (Asia-Pacific), although that picture is reversed when we focus on only the high-severity flaws. For high-severity flaws, EMEA and the Americas continue to keep pace with each other, at 85 percent and 82 percent, respectively, but 91 percent of high-severity flaws are being closed in APAC.
Figure 10: Flaw prevalence across regions (all flaws and high severity).
Figure 11: Fix history for various regions (reduced vs. increased amount of flaws; all flaws and high severity).
FIXING FLAWS
Figure: Age of findings in days. Median age of closed findings: 86 days. Median age of open findings: 216 days.
The median time-to-close focuses on only part of the data (the remediated flaws), and so it tells only part of the story. It tells us something about the 76 percent of flaws that were actually closed, but when a new flaw is discovered, we don't know if it will be like the 76 percent of closed flaws or like the 24 percent of flaws that remain open. Luckily, we aren't the first people to run into this, and there are better techniques7 we can apply. When we account for both the closed and open flaws, we find it takes about 180 days (6 months) to close half of the flaws discovered. That's a far cry from the 86 days, but it paints a much more realistic picture since it leverages all the information at our disposal.
7 We apply statistical methods collectively referred to as "survival analysis," and the label given to flaws that are still open (or events that haven't occurred yet) is "censored data," if you'd like a fun search term.
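To make the technique concrete, here is a minimal sketch (ours, not the report's analysis code) of a Kaplan-Meier estimate over a toy set of findings using the lifelines library, with still-open findings treated as right-censored observations:

    # Minimal survival-analysis sketch; the data and column names are illustrative.
    import pandas as pd
    from lifelines import KaplanMeierFitter

    findings = pd.DataFrame({
        "age_days": [30, 86, 120, 200, 400, 550],  # days each finding has been (or was) open
        "closed":   [1,  1,   1,   0,   1,   0],   # 1 = closed, 0 = still open (censored)
    })

    kmf = KaplanMeierFitter()
    kmf.fit(durations=findings["age_days"], event_observed=findings["closed"])

    # Median time-to-remediation that accounts for still-open findings,
    # unlike a median computed over closed findings alone.
    print(kmf.median_survival_time_)
    print(kmf.survival_function_)  # share of findings still open over time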
Figure 13: Survival curve of flaw closure. One in four flaws remain open after a year and a half.
Figure 13 shows the full picture of the expected remediation timeline and has a few
annotations calling out milestones along the remediation path.
Simply put, while many flaws are being addressed promptly, older flaws tend to linger over time. There are several possible explanations. The development team may be rationalizing against fixing the flaw because it hasn't caused any problems yet, or thinking that there is no need to spend additional time on a legacy application. Another reason for the lingering flaws could be logistics. The team may not have the capacity to devote the time to fixing flaws, especially if there is more emphasis placed on developing new features than on reducing security debt.
Scanning frequency and cadence are but two aspects of software development in a sea of possibilities. For example, Figure 14 depicts how long applications took, on average, to close 50 percent of their open flaws, split out by their scan frequency. Applications that scanned infrequently (fewer than 12 times in a year) took about 7 months to close half their open findings, while applications that scanned at least daily (on average) closed 50 percent of their flaws in about 2 months, less than a third of that time.
Scan frequency and the cadence of scanning are two things the developer directly
controls, but there are many others. Software security depends on a combination of
the applications’ environment and developer practices: nature and nurture. A developer
dropped into an application has little control over the maturity of the codebase, its
history, or size: the application’s “nature.” However, how the developer chooses to
“nurture” that application is well within his or her control — how often the application
is scanned, the cadence with which it’s scanned, what types of scanning are done,
and how third-party code is managed.
Figure 14: Average time to close 50 percent of open findings, by scan frequency.
13–52 scans (monthly to weekly average): 124 days
53–260 scans (weekly to daily average): 77 days
260+ scans (daily or more, on average): 62 days
We recognize that a developer inheriting a large, mature codebase that is just being
maintained faces a very different set of challenges than a team that is starting out with
a smaller, more focused application, but we can see that developers have some control
over the security of their application.
We examine the effects of nature vs. nurture on the remediation rates of flaws in an application. Because this is somewhat of a big undertaking, let's be precise about what we are examining. First, we look at the type of things that are, for the most part, out of developers' hands.

NATURE (largely out of developers' hands)

ORGANIZATION SIZE
Larger Organization: Size of the organization, measured by revenue.

APPLICATION AGE
Older Applications: How long an application has been using Veracode (days since the first recorded scan).

APPLICATION SIZE
Larger Applications: Size of the application, measured in MB.

FLAW DENSITY
High Flaw Density: Calculated as flaws per 1 MB of code, a way to think about and capture "security debt" in applications (see the short example after this list).

NURTURE (under developers' direct control)

SCAN FREQUENCY
How many times in a year the application was scanned (with SAST).

SCAN CADENCE
Steady Scan Cadence: Measures the variation in how frequently the applications are being scanned, ranging from regular, steady scanning (typically because scanning is part of continuous integration) to bursty and sporadic scanning (followed by long periods of no scanning).

DYNAMIC ANALYSIS
DAST with SAST: The application is being scanned using dynamic analysis.

SOFTWARE COMPOSITION ANALYSIS
SCA with SAST: The application's open source libraries are being scanned.

API INTEGRATION
SAST through API: The application uses the API to run the scanner, which suggests the developers are following continuous integration practices for pipeline automation.
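As a quick, hedged illustration of the flaw density measure (the names below are ours), it is simply the count of flaws divided by the application's size in megabytes:

    def flaw_density(flaw_count: int, app_size_mb: float) -> float:
        # Flaw density as defined above: flaws per 1 MB of code.
        return flaw_count / app_size_mb

    # Example: 42 flaws in a 10 MB application -> 4.2 flaws per MB.
    print(flaw_density(42, 10.0))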
The only factor we haven't yet discussed thus far in the report is API integration.
In the previous section, we discussed that it isn't enough to see whether a flaw gets closed; we also want to know how quickly it gets closed. To that end, we built a model that accounts for both open and closed flaws, can handle multiple factors at once, and can quantify the effect of various "nature" and "nurture" factors on how quickly flaws are closed.

First, we extract from the model how each factor changes the median time to flaw remediation. We want to see which factors are likely to lead to flaws getting fixed faster, and which factors lead to slower fixes. The results are shown in Figure 15.
Figure 15: Expected change in median time to flaw remediation by factor, from Remediate More/Faster to Remediate Less/Slower. Among the factors on the slower side: Older Application (3), Larger Organization (14), Larger Application (57).
Factors pointing to the left are correlated with flaws being remediated more/faster,
while those pointing to the right are associated with less/slower remediation. Some
of the factors are binary, such as whether dynamic scanning is turned on or off for
the application, and others are continuous, such as how frequently an application is
being scanned. For continuous variables, the effect represents a shift of one standard
deviation in the variable. Encoding the continuous variables this way allows a relatively
easy comparison across the disparate scales for each variable.
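As a hedged sketch of that encoding (our illustration; the report's actual model may differ), continuous factors can be standardized to z-scores before fitting a survival regression, so each coefficient reflects a one-standard-deviation shift. Here a Cox proportional hazards model from lifelines stands in for the report's model, and the data and column names are synthetic:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(11)
    n = 200
    df = pd.DataFrame({
        "scans_per_year": rng.integers(1, 365, n),   # continuous "nurture" factor
        "app_size_mb":    rng.integers(1, 500, n),   # continuous "nature" factor
        "uses_dast":      rng.integers(0, 2, n),     # binary factor stays as 0/1
    })
    # Synthetic closure times loosely tied to scan frequency; findings still
    # open after ~18 months are treated as censored.
    df["days_open"] = rng.exponential(200 / (1 + df["scans_per_year"] / 100))
    df["closed"] = (df["days_open"] <= 540).astype(int)
    df["days_open"] = df["days_open"].clip(upper=540)

    # Standardize continuous factors so each coefficient corresponds to a
    # one-standard-deviation shift, comparable across disparate scales.
    for col in ["scans_per_year", "app_size_mb"]:
        df[col] = (df[col] - df[col].mean()) / df[col].std()

    cph = CoxPHFitter()
    cph.fit(df, duration_col="days_open", event_col="closed")
    print(cph.summary)  # per-factor effect on the rate of flaw closure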
THE "NATURE" OF APPLICATIONS
One thing that is clear is that one of the biggest obstacles for developers is a ponderous application with a dodgy security history. Large applications with high flaw density slow down the remediation rate of flaws by about 2 months each. Now with our nature versus nurture analogy, it's not typically possible to change any of the factors on the nature side. But we are talking about applications that we created, so we should have some influence over the nature we create, or even the nature we are handed.
THE "NURTURING" OF APPLICATIONS
But there is hope, as there are several things that a developer can have more direct control over, and those are the things we generally associate with good development practices and faster flaw remediation.
API INTEGRATION
There are positive, though smaller, effects for API integration (again building security
scanning into the developer pipeline), software composition analysis, and setting up
a steady scanning cadence. We noted earlier that the API integration can be linked to
scan frequency, and we see that relationship in this chart. Developers should ensure
that they reap the benefits of frequent application scanning by making sure the
API is part of their development workflow.
We should pause for a moment and consider these results in a larger context. The results above echo what people in application security have long assumed. But suspecting something is very different from having it confirmed empirically in the data, and to our knowledge, this is the first time someone has taken these assumptions and measured them. When we tell developers that the performance of teams with specific behaviors differs from that of teams without those behaviors, we can now show and talk about just how much they differ. We can clearly see the impact of security debt on older applications here, as it slows down the pace of fixing flaws by months and hampers future development.
Figure 16 shows how quickly teams with positive and negative behaviors close the flaws in applications with positive and negative attributes. The slowest line here (the top line, closest to the upper right) represents an application challenged with negative attributes and a team with negative behaviors. They tend to close flaws quite a bit more slowly, and close fewer of them, than anyone else. The quickest line here (the bottom line, closest to the lower left) represents the best case: positive application attributes combined with a proactive development team with positive behaviors. In reality, most applications will fall between those two, with a mix of attributes and actions.
What's interesting here is the impact good practices can make. In our idealized "good" application, having good practices means 50 percent of flaws are closed in just under 2 weeks (13 days), while bad practices on that same application can mean it will take almost twice that time (25 days) to close 50 percent of flaws. The differences are even more stark when looking at "bad" applications. A team with bad practices working on a less-than-ideal application may take nearly a year (314 days) to close 50 percent of flaws. A team with good practices on that same unideal application would cut that time to about 6 months (184 days).
Figure 16: Survival curves of flaw closure for combinations of application attributes and team actions. Good actions in a good environment cut the time to close half of the flaws from 25 to 13 days; good actions in a poor environment can reduce the half-life by more than 4 months. The slowest line combines poor attributes with poor actions; the quickest combines good attributes with good actions.
Conclusion
Even if the developer has inherited an old, gargantuan application with heaps of security debt, and there is no one left who remembers why some things were coded that way, fixing flaws and adding new features don't have to continue being difficult. What the data tells us is that even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application. Several of the developer best practices we highlight in this SOSS align closely with behaviors we typically associate with DevSecOps.

Scanning applications frequently and on a regular cadence, and fixing the flaws as they are found (not waiting for major releases), is common practice among DevSecOps teams. We see the effects of using different types of scanning technologies in order to get a more comprehensive view of the application. We see how fixing flaws in smaller and newer applications tends to be quicker, which may encourage decisions such as re-architecting parts of the application into smaller components.

Embedding security testing into the pipeline (through an API) is another sign of the team's approach to continuous integration. That automation can tighten up the cycle of feedback developers receive and make security testing more effective, and indeed we see improved remediation times with that integration.

We've looked at the effect of nature and nurture on the security of our applications. We found that nurture — our decisions and actions — can overcome and improve the nature of the application and environment. There are many solutions available for developers to help them discover and manage the flaws that creep into applications.

But know this: you are able to take action and make decisions that will improve the security of your application!
Appendix: Methodology
Veracode's methodology for data analysis uses a sample of applications that were under active development during a 12-month sample window. The data represents the full history of applications that had assessments submitted from April 1, 2019 through March 31, 2020. This differs from past volumes of the State of Software Security, in which we looked only at the assessments that occurred in a 12-month window and not at the entire history of applications. This accounts for a total of 132,465 applications, 1,049,742 scans, and 10,712,156 flaws. The data represents large and small companies, commercial software suppliers, software outsourcers, and open source projects.9 In most analyses, an application was counted only once, even if it was submitted multiple times as vulnerabilities were remediated and new versions uploaded. For these snapshots, we examine the most recent scan.

For the software component analysis, each application is examined for third-party library information and dependencies. These are generally collected through the application's build system. Any library dependencies are checked against a database of known flaws.

The report contains findings about applications that were subjected to static analysis, dynamic analysis, software composition analysis, and/or manual penetration testing through Veracode's cloud-based platform. The report considers data that was provided by Veracode's customers (application portfolio information such as assurance level, industry, application origin) and information that was calculated or derived in the course of Veracode's analysis (application size, application compiler and platform, types of vulnerabilities, and Veracode Level — predefined security policies which are based on the NIST definitions of assurance levels).

A NOTE ON MASS CLOSURES
While preparing the data for our analysis, we noticed several large single-day closure events. While it's not strange for a scan to discover that dozens or even hundreds of findings have been fixed (50% of scans closed three or fewer findings, and 75% closed fewer than eight), we did find it strange to see some applications closing thousands of findings in a single scan. Upon further exploration, we found many of these to be invalid: developers would scan entire filesystems, invalid branches, or previous branches, and when they would rescan the valid code, every finding not found again would be marked as "fixed." These mistakes had a large effect: the top one-tenth of one percent of scans (0.1%) accounted for almost a quarter of all the closed findings. These "mass closure" events have significant effects on exploring flaw persistence and time-to-remediation and were ultimately excluded from the analysis.
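As a rough sketch of that exclusion step (illustrative only; the column names and the exact cutoff rule are ours, not Veracode's published procedure), one could flag scans whose closed-finding counts fall in the top 0.1% and set them aside before computing time-to-remediation:

    # Illustrative sketch of filtering suspected mass-closure scans.
    import pandas as pd

    # Hypothetical per-scan data: number of findings each scan marked as closed.
    scans = pd.DataFrame({
        "scan_id":         [1, 2, 3, 4, 5],
        "findings_closed": [2, 0, 7, 3, 4500],  # the last row looks like a mass closure
    })

    # Flag scans in the top 0.1% of closed-finding counts and exclude them.
    threshold = scans["findings_closed"].quantile(0.999)
    suspected_mass_closures = scans[scans["findings_closed"] > threshold]
    clean_scans = scans[scans["findings_closed"] <= threshold]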
Any reported differences (between languages, scan types, flaw types, etc.) are statistically significant at the p < 0.001 level. Because of the large data size, we are able to discern even incredibly small effect sizes.

A NOTE ON "SANDBOX" SCANS
Developers will sometimes create a "sandbox" for the purpose of a one-time evaluation of a piece of code. Unfortunately, these scans are divorced from any information about the application and its history. In the future we may examine how the use of these sandbox scans might affect the mainline analysis of applications. For now, these scans are excluded from the analysis.
9 Here we mean open source developers who use Veracode tools on applications in the same way closed source developers do. This is distinct from the software component analysis presented in the report.
Copyright © 2020 Veracode, Inc. All rights reserved. All other brand names,
product names, or trademarks belong to their respective holders.