FS-ISAC Third Party Security Controls
White Paper
Contents
Working Group Model
Executive Summary
Mandate
Controls
Summary of Controls
Conclusion
Working Group Model
According to PwC, detected security incidents have increased 25% this year, while the average financial costs of incidents are up 18%.1 It is increasingly critical that organizations understand the risk associated with sharing data with third parties; however, few organizations take this step. In fact, only 20% of organizations evaluate the security of third parties with which they share data or network access more than once a year.2 This trend of ignoring the risk posed by third parties cannot continue.
For this reason, the FS-ISAC Product & Services Committee asked several member firms to form the Third Party
Software Security Working Group to determine what additional software security control types would be appropri-
ate to add to vendor governance programs. The Third Party Software Security Working Group was established with
a mandate to analyze control options and develop specific recommendations on control types for member firms
to consider adding to their vendor governance programs.
2. PwC, Global State of Information Security Survey 2013, October 2013.
Executive Summary
Third party software is the new perimeter for every financial institution. According to Gartner, “since enterprises
are getting better at defending perimeters, attackers are targeting IT supply chains.”3 Further, recent breach
reports such as Verizon’s Data Breach Investigations Report underscore the vulnerability of the application layer,
including third party software. This new perimeter of third party software must be addressed.
Fortunately, the majority of financial services firms and many technology vendors are investing in improving
software security control practices within the lifecycle of software development to provide products and capabili-
ties that are more resilient to attack. Pushing innovation in the marketplace while protecting information assets
exposed in emerging technologies (like mobile computing or cloud services) is a continual challenge and dilemma
for financial services firms. The financial services industry has historically provided leadership in the development
of effective vendor management practices to reduce the risk of exposure of customer and employee information.
Financial institutions have led the implementation of effective governance models for third parties providing IT
products and services for over a decade. Many IT vendors have incorporated prudent risk management controls
into their product development processes as a result.
Evolving vendor governance practices have helped to improve information risk management for firms applying
these standards and for vendors that incorporate these standards into their product development. Codified practices
like the Shared Assessments Program have had a positive effect in encouraging vendors and industry firms to work
collaboratively with the goal of improving information risk management practices and better serving customers and
employees. However, as the financial services industry increases its reliance on third party software services and
products, we are seeing an increase in the number of breaches stemming from software vulnerabilities. This trend
necessitates improved controls operating in concert with vendor management practices to advance the relationship
between security and third party software service providers and commercial off-the-shelf software (COTS) vendors.
Mandate
The mandate of the Third Party Software Security Working Group is to identify control types to incorporate
with vendor governance programs in order to improve information protection capabilities when using third party
services and products in the supply chain for financial institutions’ customers and employees.
Selecting controls to add to a vendor governance program requires collaboration between third party suppliers
and financial institutions. Many software service and product vendors have adopted industry leading practices for
software security embedded in their product development processes such as BSIMM (www.bsimm.com). Several
leading commercial software providers have codified software security practices in an effort to encourage other
providers to adopt leading software security practices through the SAFECode organization (www.safecode.org).
3. Gartner, "Maverick* Research: Living in a World Without Trust: When IT's Supply Chain Integrity and Online Infrastructure Get Pwned," October 2012.
Controls
Each control type was evaluated based on practices that were adopted and implemented by one or more financial
institutions and the institutions' experiences with the control type. For all three controls, several of the Working Group
members had practical experience with the implementation of the control type for third party vendors. For the
second and third control types there are currently two preferred vendors that offer solutions satisfying the control
requirements defined by the Working Group. The specific vendor solutions were discussed in depth by Working
Group members. During these conversations, members shared implementation experiences and evaluated the
effectiveness of the control type. The Working Group chose not to identify a single vendor solution for each control
type but agreed to share experiences with vendor products with other financial institution members.
1. vBSIMM process maturity assessment: A derivative of BSIMM that applies selected BSIMM practice areas to vendor-supplied software and uses the vBSIMM activities to determine the process maturity of the vendor's product development function.
2. Binary static analysis: A determination of software vulnerability density for a specific version of software at a point in time, provided through a third party administered process. This analysis is done against the software's binaries, not the source code.
3. Policy management and enforcement for consumption of open source libraries and components: This control type identifies consumable open source libraries for a given financial institution, identifies the security vulnerabilities by open source component, and enables the financial institution to apply controls or governance over the acquisition and use of open source libraries.
Summary of Controls
The Working Group recommends adoption of the three control types by each financial services member firm. The
financial institution will determine how best to implement the control type into its vendor governance and software
acquisition process and which vendor services or products to use.
The first control type, vBSIMM process maturity assessment, does not require member organizations to
implement any vendor product or solution; however, the Working Group recommends that financial institutions
participate in education and certification of software security assessment professionals by using education,
training and certification services provided through FS-ISAC. The second control, binary static analysis, and third
control, policy management and enforcement for consumption of open source libraries and components, both
require the use of vendor services or products.
The Working Group does not endorse any of the vendors or products within the control types. The Working
Group believes that each of the three control types is required for financial institution member firms to achieve
third party software security, and will share information on implementation experience to help other financial
institutions with adoption and implementation.
Each of the control types is covered in more depth in the subsequent pages of this paper based on the evaluation and
assessment work completed by the Working Group members. In some cases, vendor management professionals will
need help from software security assessment professionals to effectively introduce the control types to the vendor
management program in their respective financial institutions. In the third control type (open source supply chain
governance) the implementation of the control type will not likely require the involvement of vendor management
professionals. The implementation is more likely to be administered and enforced by application development leads, software quality assurance professionals or architects within the application development function.
For this reason, BSIMM was considered as a potential basis for measuring the software security maturity of vendors providing products and services to financial institution member firms. Vendor BSIMM (vBSIMM) was derived by several financial firms represented on the Working Group from the BSIMM activities and practices that apply to third party software development. vBSIMM provides a method for measuring software security maturity across vendors that
use different technical tools, methods and techniques to improve software security maturity. The key is that multiple
controls or “touchpoints” are required in the software development process to achieve maturity.
[Figure 1: Software security process maturity assessment. Controls ("touchpoints") across the development and Q/A phases, including agile development: security architecture review, open source security validation, binary static scan, dynamic scan, penetration testing, and web application firewall.]
This information can be used to consider adjustments to the application/product design to make the application
more resilient to common attack vectors or methods. Threat modeling is difficult to teach to application designers
and often requires an apprenticeship where one professional learns techniques and methods from another more
experienced professional over time. For vendor products, this type of analysis during design may end up providing
information that may drive changes to product functionality (adding security features to the product).
If a development team has fewer than six developers, then a manual code review by a security expert is a fundamentally sound approach to addressing security defects in the code. If there are 60 developers, then approximately 10 reviewers with the right level of security skill are required to spot security defects. Finding the required number of software security professionals to do manual code review gets more difficult as the development team grows. In practice, development teams of more than 50 developers need to consider other alternatives to improve security in the coding process. The use of commercial products to complete automated code reviews (code scanning) for both quality and security is increasing, but it is more prevalent among financial institutions than among COTS software vendors. The Working Group developed a "rule of thumb" for product development teams of more than 50 developers: the vendor needs to acquire and use a commercial product to do code scanning in development to achieve a high level of maturity for this practice. Using different commercial products to do code scanning across development teams does not influence control maturity as long as a method for enforcement is in place in each case.
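As a rough illustration, the scaling described above (about one skilled security reviewer per six developers, with automated scanning expected above roughly 50 developers) can be sketched as follows. The function, constants and return strings here are illustrative only; they are not part of any standard or of the Working Group's guidance.

```python
import math

# Illustrative sketch of the Working Group's "rule of thumb" described above.
# The ~1:6 reviewer ratio and the 50-developer threshold come from the text;
# the function itself is hypothetical.

REVIEWER_RATIO = 6        # roughly 1 skilled security reviewer per 6 developers
SCANNING_THRESHOLD = 50   # above this, automated code scanning is expected

def review_strategy(team_size: int) -> str:
    """Suggest a code-review control for a development team of a given size."""
    if team_size <= SCANNING_THRESHOLD:
        reviewers = math.ceil(team_size / REVIEWER_RATIO)
        return f"manual review with ~{reviewers} security reviewer(s)"
    return "commercial automated code scanning (manual review does not scale)"

print(review_strategy(6))    # small team: manual review is sound
print(review_strategy(60))   # large team: automated scanning required
```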
Dynamic testing tools for security are not a requirement for system testing but they do make it easier to implement
testing procedures for larger development teams. The Working Group found that many of the vendor firms in the
example of applied vBSIMM would capture security vulnerabilities from the testing processes and prioritize the
results based on risk and then assign responsibility for remediation in the bug tracking process.
Many COTS vendors have developed sophisticated processes for defect tracking and management, and some even feed security testing results into this process. Some vendors do not allow applications with security vulnerabilities to be promoted into production without approval from a senior officer (e.g. the CTO). A few software vendors include manual penetration testing as part of their security testing process; however, most software vendors only use manual penetration testing services upon client request.
Several vendors had their own ethical hacking teams internally that conducted pen testing of their products and
these teams were always separate from the development teams. Results were often shared with the CTO or head
of development to assign responsibility for remediation. The remediated products were then retested before
release. Often vendors applied pen testing to an entire release prior to production as a standard process.
4, 5. To read the full example, see Appendix 1.
One of the most significant lessons learned in the initial vBSIMM implementations6 was how differently the
contract terms and conditions were interpreted and implemented by various software service providers. For
example, each provider that participated in a sample program (Appendix 1) spent many weeks negotiating a contract that included a notification clause with the bank; however, the interpretation of how to enforce the clause
varied greatly. To understand these differences, the enterprise running this sample program asked participating
vendors many questions about interpreting the requirement to notify customers within a reasonable time frame
of any security incident.
6. To read the full example, see Appendix 1.
vBSIMM Results
The results of vBSIMM are delivered as three documents:
• A spreadsheet used to document the assessment results
• Assessment artifacts provided by the vendor
[Figure 2: Sample vBSIMM Result. An example of a vBSIMM assessment.]
The spreadsheet documenting the assessment results has a column for recording the maturity score of each
practice area. Practice area observations are recorded in another column. The action items are also captured in
the spreadsheet and they are tracked in the vendor management system for follow up in the next review or sooner
when specified.
The highest maturity score is 15 out of 15 (5 practice areas × 3). However, one or more practice areas may not apply
to some types of vendors. To normalize the scores across all vendors, the maturity score is divided by the number
of practice areas. The result is a percentage used to measure results across all vendors in different categories.
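One reading of the normalization described above (dividing the raw maturity score by the maximum achievable for the practice areas that actually apply, to yield a percentage) can be sketched as follows. The practice-area names and the function are hypothetical; the paper does not prescribe an implementation.

```python
# Sketch of the vBSIMM score normalization described above. Each applicable
# practice area is scored 1-3; the raw score is the sum, and the normalized
# score divides by the maximum possible for the areas that apply, yielding
# a percentage comparable across vendors with different applicable areas.

MAX_PER_AREA = 3  # highest maturity level per practice area

def normalized_score(area_scores: dict) -> float:
    """Return the vendor's maturity as a percentage of the maximum possible."""
    if not area_scores:
        raise ValueError("at least one practice area must apply")
    raw = sum(area_scores.values())
    maximum = MAX_PER_AREA * len(area_scores)
    return 100.0 * raw / maximum

# A vendor assessed on all five practice areas, scoring 15 of 15:
full = {"design": 3, "code_review": 3, "testing": 3, "pen_test": 3, "training": 3}
print(normalized_score(full))     # 100.0

# A vendor for which only four practice areas apply, scoring 8 of 12:
partial = {"design": 2, "code_review": 1, "testing": 3, "pen_test": 2}
print(normalized_score(partial))  # roughly 66.7
```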
The Working Group recommends that the completed assessment results spreadsheets7 be shared across all
financial institutions through an information sharing process established through the FS-ISAC portal. This recom-
mendation was made to the FS-ISAC Product & Services Committee and is under consideration. The Product &
Services Committee is preparing a preliminary design of the information sharing process. One design goal is to
include an explicit approval by vendors to share assessment artifacts with all FS-ISAC members. Both the Product
& Services Committee and the Working Group recognize that the ability to share one vBSIMM assessment with
multiple financial institution clients is advantageous for the vendor since the vendor does not have to complete
multiple vBSIMM assessments for multiple financial institutions.
The Working Group also recommends that an education and certification program be administered by the FS-ISAC to certify vBSIMM assessors for consistency and quality of the assessment process. The FS-ISAC is considering accepting bids from third party vendors to conduct the education and certification process for financial institution members. These vendors must have previous knowledge and experience related to conducting vBSIMM assessments. The education would be offered several times a year and would be applicable for vendor governance as well as information security staff from member firms. The Product & Services Committee will review the need for certification along with the potential information sharing capability and make a final recommendation to the FS-ISAC Board for approval of this control type and support requirements.
The vBSIMM approach satisfies the need for assessing the process maturity of third party software vendors and
services providers. The challenges with this control are ensuring consistent quality of assessment results and
training vendor governance professionals to understand software development practices well enough to complete
assessments. Each financial institution will manage its own unique vendor assessment practices, and must
determine the appropriate integration of vBSIMM assessments with its existing vendor governance practices.
The Working Group considered two leading providers of binary static scanning services—Veracode and HP
Fortify on Demand. The majority of Working Group members are using one or both services today. One vendor
is not recommended over the other. However, a summary of the differences based on Working Group member
experience is provided at the end of this section.
Similar to the detective control of using static analysis in the development process, binary static analysis uses a set of tests to identify software vulnerabilities, optimizes the results to reduce false positives, and uses a set of rules to represent the policy that is enforced.
Artifacts
The Working Group recommends that a number of artifacts be prepared in advance of engaging in static binary
analysis to help the third party software vendors understand and comply with the vendor governance requirement
for binary static scanning, including:
1. Letter from the head of the vendor governance function (consider asking the CIO to sign the letter as well).
3. A defined set of responsibilities for the third party software supplier, the security analysis provider (the
Working Group reviewed Veracode and HP Fortify On Demand) and the enterprise throughout the process.
4. An artifact describing the risk classification definition and the recommended remediation time based
on the severity of the vulnerability.
5. A sample artifact that the third party software supplier receives from the security analysis provider
and a sample summary of what the enterprise receives from the security analysis provider.
• That this level of product information (i.e. software vulnerabilities) was not shared in the past.
• A recognition that the vendor’s remediation prioritization must balance the level of risk posed by
discovered vulnerabilities and the resources required to fix defects and create new functionality.
• The enterprise and the vendor need to come to consensus on the right level of urgency for
remediation priorities.
Exposing security vulnerabilities for a specific version of software at a specific point in time provides an opportunity
for a meaningful dialog with the third party software provider regarding remediation priorities. The security analysis
provider managing the static binary analysis will deliver detailed information about defects and vulnerabilities to the
third party software supplier. Together, the security analysis provider and third party software supplier will determine
if there are any additional mitigating controls that may change the risk profile of the vulnerability.
The third party software provider chooses when, within a few weeks, to release the summary results to the enterprise. Some third parties choose to remediate vulnerabilities and submit a new release of software after the original scan. The summary results from the latest scan are then shared with the enterprise.
The vendors of binary static analysis make it easy to score the vulnerability results, either with a letter grade based on the selected policy or with a score based on five categories of risk. The most important aspect of this
process is to discuss the findings with the third party software supplier in order to agree on remediation priorities.
This discussion often provides insight into how the third party software vendor deals with software security risk
and remediation prioritization.
[Figure: Binary static scanning process flow.
1. Financial institution introduces the security analysis provider to the third party software vendor.
2. Financial institution submits a new third party software vendor request form.
3. Third party software vendor accepts the scanning requirement.
4. Security analysis provider creates the application profile.
5. Third party software vendor uploads software for testing.
6. Security analysis provider publishes results to the third party software vendor.
7. Third party software vendor publishes summary results to the financial institution.
Throughout, the security analysis provider facilitates the process and provides on-going support for third party software vendor scanning.]
Differences observed by Working Group members included: historical strength in static binary analysis; a program management service dedicated to building and executing a vendor application security testing program; the level of support for systems integrators; and scalability to handle large third parties.
8. From a Sonatype press release dated Sept 9, 2013: "The composition of today's applications is often as high as 90% open source components and 10% custom source code (based on an analysis of the Central Repository and 1000+ Repository and Application Healthcheck Risk Assessments)."
Aspect Security, a software security consulting vendor, estimates that about 26% of the most common open source
components have high risk vulnerabilities in them.9 The more these open source components are shared, the more
widespread the vulnerabilities become. Therefore, it is essential to have a control to protect the flow of open source
components into the development process.
[Figure: 2012 open source component requests. Requests from the Central Repository grew steadily from 2001 to 2012, reaching 8 billion requests in 2012.]
When application developers seek to build new functionality to meet business needs, they turn to open source
libraries for access to components that dramatically improve the time to market of their delivery. The most
appropriate type of control for addressing the security vulnerabilities in open source, including older versions of
the open source, is one that addresses vulnerabilities before the code is deployed—i.e. by applying policy controls
in the acquisition and use of open source libraries by developers. Therefore a combination of using controlled
internal repositories to provision open source components and blocking the ability to download components
directly from the internet is necessary for managing risk. In fact, Gartner recommends that “if open source is
used, ensure that the frameworks and libraries used are legitimate and up-to-date, and that the compiler used
hasn’t been compromised.”10
There are several technology solutions that address part or most of the needed features to apply lifecycle
management controls for open source components. The Working Group has experience with three solutions that
offer partial functionality. Two of these vendor solutions have been available on the market for more than 5 years
(Palamida and Black Duck) and provide for legal liability as well as security of open source libraries once acquired.
They both have an ability to tag code components and libraries used within an application portfolio. Thus when
new vulnerabilities are discovered, the financial institution can more easily identify the impact of remediation by
understanding where all of the components exist within the application portfolio. This also applies to determining
any legal liability for the use of open source libraries.
9. Aspect Security, "The Unfortunate Reality of Insecure Libraries," March 2012.
10. Gartner, "Maverick* Research: Living in a World Without Trust: When IT's Supply Chain Integrity and Online Infrastructure Get Pwned," October 2012.
A new approach in the market is Component Lifecycle Management (CLM) which offers the ability to enforce
policies in the development process. For example, if a development team inadvertently downloads obsolete soft-
ware versions, CLM can apply a method of breaking the build when that library is submitted, enforcing the use of a
more current version. CLM informs the developers and security staff which components have risky vulnerabilities
and which ones do not. The benefits of this approach include:
• Accelerating the development process by encouraging the consumption of open source libraries that are resilient.
• Reducing operating costs, since the cost of ripping out obsolete components from existing applications is high (assuming the older versions can be identified in the first place).
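A minimal sketch of the build-breaking policy gate described above follows. The component metadata, thresholds and function names are all hypothetical; real CLM products enforce this inside the build toolchain rather than in standalone code like this.

```python
# Illustrative sketch of a CLM-style policy gate: flag the build for failure
# when a declared open source component is obsolete or carries known
# high-risk vulnerabilities. All component data here is made up.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str
    latest_version: str
    high_risk_cves: int  # known high-severity vulnerabilities in this version

def policy_violations(components):
    """Return human-readable violations; an empty list means the build may proceed."""
    violations = []
    for c in components:
        if c.high_risk_cves > 0:
            violations.append(f"{c.name} {c.version}: {c.high_risk_cves} high-risk CVE(s)")
        if c.version != c.latest_version:
            violations.append(f"{c.name} {c.version}: obsolete (latest is {c.latest_version})")
    return violations

build = [
    Component("commons-http", "3.1", "5.2", high_risk_cves=2),  # hypothetical
    Component("json-lib", "2.4", "2.4", high_risk_cves=0),      # hypothetical
]
problems = policy_violations(build)
if problems:
    # In a real CLM integration this is where the build would be broken.
    print("BUILD FAILED:\n" + "\n".join(problems))
```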
Financial institutions should consider options in this control type to apply policies to the consumption of open
source components and to specify methods for creating and managing an inventory of open source libraries in
use within the application portfolio. There are manual options and automated options that should be considered
to improve the resiliency of the most commonly used open source components. The controls applied to the consumption of open source are less expensive to implement than fixing defects after they have been deployed in production throughout the financial institution's application portfolio. An apt analogy is the delivery of pure water through our water systems: regardless of geography, it is easier to achieve when purification is applied at the reservoir rather than in the downstream canals, pipes and distribution network.
Firms should also encourage use of mature versions of software that are patched and not yet obsolete by applying policies and enforcing them using the best methods available. The large consumption rate of open source libraries for web and mobile applications offers compelling evidence of the time-to-market benefits being realized.
It is time to apply resiliency controls to the consumption process that will reduce the need to fix old versions with vulnerabilities after they have been deployed. Controls should encourage deployment of current versions that have been determined to be resilient. Providing more information to architects and developers is the responsibility of the information security staff. That information should improve the understanding that policy management applied early in the lifecycle both costs less effort and speeds up time to market in the long run.
Conclusion
Financial institutions must determine their own path for addressing third party software security. While imple-
menting all three recommended controls or even just one will significantly improve the resiliency of the application
portfolio, these controls must be incorporated within existing vendor governance programs in order to achieve
the maximum level of efficacy. When executed correctly, these recommended approaches will increase the
effectiveness of the risk management practices and enable avoidance of expensive remediation post-production.
As member firms better understand the risks associated with sharing critical data and systems with third parties,
the FS-ISAC Product & Services Committee will continue to refine third party software security control types either
through the Third Party Software Security Working Group or another effort.
Appendix 1
One member bank implemented the vBSIMM for its five largest providers of off-shore development services as part of an initial project. The five firms were asked to volunteer to participate in vBSIMM assessments. Four of the firms agreed to allow the bank to conduct a vBSIMM assessment. The fifth decided that the services it provided to the bank did not qualify for a vBSIMM assessment, since it was not responsible for determining the software development process and instead followed the bank's software development lifecycle (SDLC). The four
vendors acknowledged that they were aware of software security practices but the practices were not included in
their current projects. Their perception was that the bank would view the practices as additional work and would be
unwilling to pay for the increase in labor to implement the security controls. The vendors agreed to review each of
the five practices and consider methods for implementing controls within each practice area within a few months
and then ask the bank to begin the vBSIMM assessment. The bank approved of this approach and conducted the
vBSIMM assessments by introducing candidate activities for each practice area with each vendor so they under-
stood the activities available based on the BSIMM framework. The vendors then worked on implementation of the
activities they selected.
The sessions themselves were often conducted as presentations of the activities in each respective practice area
with the assessor asking questions of the vendor. The vendor selected product development or development leaders
and architects to participate in the vBSIMM assessment. In a few cases the vendor assigned an information security
officer to the project to oversee the implementation of the controls and/or review the assessment results. Each
vendor scored at least the minimum score of level 1 for each practice area that applied. A few scored level 2 and
level 3 for select practice areas. All of the vendors identified the appropriate controls by practice area and implemented the controls on current development projects for the bank by the time the initial vBSIMM project was completed.
The vendors found they could implement the controls without changing their billing or rates of the current projects
and they made a commitment to the bank to incorporate these practices in all future assignments for the bank.
The bank added additional types of vendors to the initial project work to evaluate the effectiveness of the vBSIMM and used the lessons learned to determine whether modifications to the approach or techniques were necessary.
The bank selected vendors from two more categories of vendors, commercial off the shelf software (COTS)
vendors and providers that host Software as a Service (SaaS) offerings. They requested volunteers from both
COTS and SaaS vendors and included approximately ten vendors from each category.
The results from the COTS category of vendors were quite diverse. This category included vendors that were famil-
iar with and had adopted the BSIMM model so their knowledge of software security was significantly more mature.
However, several larger software vendors chose not to provide assessment artifacts based on advice from their
legal departments. In addition, several smaller COTS vendors were introduced to the BSIMM model through this
initial project and the assessment results indicated very low maturity.
The SaaS providers also demonstrated diverse results with several measuring at very high maturity in the vBSIMM
assessment while others had little or no security controls in their development process and limited understanding
of the activities in each of the respective practice areas. Several of these providers found that participation in this
process provided additional value as they learned a great deal about techniques, tools and practices related to
developing secure software.
The next most significant lesson was the need to gather more information from the vendor prior to the assessment session and to make it easier for vendor governance teams to both collect and interpret the information
provided by the vendors. The bank decided to add specific vBSIMM questions by practice area into the vendor
questionnaire11 (often referred to as a Standard Information Gathering tool or SIG) and to score the results from
the responses making it easier for vendor governance professionals to understand how to assess maturity. The
bank anticipated having to include information security professionals that understood software security in the
process but wanted to rely on the vendor governance staff for most of the support work required.
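The scoring approach described above can be sketched as a simple mapping from questionnaire responses to a per-practice maturity score. This is an illustrative sketch only: the practice names, the yes/partial/no response scale, and the score thresholds are hypothetical, not the bank's actual SIG scoring model.

```python
# Illustrative sketch: score vBSIMM questionnaire (SIG) responses by practice
# area. Response scale, weights and thresholds are hypothetical assumptions.

RESPONSE_POINTS = {"yes": 2, "partial": 1, "no": 0}

def score_practice(responses):
    """Average the points for a practice area's responses, normalized to 0.0-1.0."""
    if not responses:
        return 0.0
    total = sum(RESPONSE_POINTS[r.lower()] for r in responses)
    return total / (2 * len(responses))

def summarize(questionnaire):
    """Map each practice area to a coarse maturity label that a vendor
    governance professional can read at a glance."""
    summary = {}
    for practice, responses in questionnaire.items():
        score = score_practice(responses)
        if score >= 0.75:
            label = "high"
        elif score >= 0.4:
            label = "medium"
        else:
            label = "low"
        summary[practice] = (round(score, 2), label)
    return summary

answers = {
    "Architecture": ["yes", "partial"],
    "Development": ["yes", "yes", "no"],
    "QA/UAT": ["no", "no"],
}
print(summarize(answers))
```

A scheme like this keeps the interpretation step mechanical, so the governance staff can do most of the support work and escalate only low-scoring practice areas to software security specialists.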
A sample of the questionnaire used by the bank is available in Appendix 2; an Excel version of the questionnaire is also available.
The bank has implemented the vBSIMM process for selected vendors based on the types of services they offer and
the application risk. For example, it applies the vBSIMM to all hosting vendors, to selected COTS products based
on an application risk classification, and to service providers that manage the development process following their
own SDLC. The bank provides specific guidance to vendor governance professionals on how to determine the most
appropriate way to apply the vBSIMM when a vendor has different SDLCs by product or where there are different
development teams.
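The applicability rules above can be illustrated with a small decision function. The vendor categories and the risk threshold below are assumptions for illustration, not the bank's actual policy.

```python
# Illustrative sketch of the applicability rules described above: vBSIMM is
# applied to all hosting vendors, to COTS products above a risk threshold,
# and to service providers that follow their own SDLC. The category names
# and the "high" threshold are assumptions for illustration.

def vbsimm_required(vendor_type, app_risk=None, uses_own_sdlc=False):
    if vendor_type == "hosting":
        return True                # applied to all hosting vendors
    if vendor_type == "cots":
        return app_risk == "high"  # selected COTS products by risk classification
    if vendor_type == "service":
        return uses_own_sdlc       # only when the provider manages its own SDLC
    return False

assert vbsimm_required("hosting")
assert vbsimm_required("cots", app_risk="high")
assert not vbsimm_required("cots", app_risk="low")
assert vbsimm_required("service", uses_own_sdlc=True)
```

Encoding the policy this way makes the triage decision repeatable across vendor governance staff rather than a judgment call made per vendor.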
[Figure: a vendor with a single assessment contrasted with a vendor whose products come from a recent acquisition, a U.S.-based development group, an overseas development group, and acquired development, each of which may require its own vBSIMM assessment.]
Some COTS vendors will acquire other products and, over time, migrate the development processes of the
acquired company to the acquirer's core development process. Often the original product development process
remains in place for a year or longer post-acquisition, creating a need for a vBSIMM assessment for each product
unless the product development processes are identical (same phases, techniques, controls, etc.). Also, a services
vendor may use different development centers around the world to take advantage of the market for technical
talent (the same way a bank would) and therefore follow different development practices. In this case a vBSIMM
assessment would be done for each geographic development center, since the development practices vary by center.
The lessons learned from the bank's initial projects were significant in influencing adjustments to the implementation
approach going forward. This included an acknowledgement that a gulf existed between the software vendors'
interpretation of their responsibilities and the bank's. The bank has continued the iterative process of refining the
vBSIMM assessment process within its organization and as it applies to its software suppliers.
TPRM Summary: Application Development Overview (vBSIMM)
- Name of the application
- What does the application do for our firm?
- Application inventory ID
- Risk classification (H, M, L)
- Type of application (Web, Mobile, Client/Server, etc.)
- Application development language(s) (Java, .NET, iOS, etc.)
Architecture
Question: How do you identify the most critical applications/products for identifying risk?
IRM Comment: A formalized process is more consistent than an arbitrary approach. Validate the approach to ensure that high-risk apps are identified using sound methodology (are there high-risk apps not being identified?).

Architecture
Question: Do you perform a secure architecture design review for high-risk applications?
IRM Comment: Security evaluations for every major release demonstrate a high level of maturity. Combined with additional security monitoring, they may be effective at mitigating risk.

Development
Question: Do you have a list of the most common vulnerabilities/bugs that need to be eliminated?
IRM Comment: An affirmative response is indicative of some level of maturity. Ask questions to understand what they do and assign a level of maturity.

Development
Question: How many applications do you perform secure code review for annually?
IRM Comment: Commercially available code review analyzers or a third-party evaluation service should be used as part of a comprehensive software security practice. A dedicated software security group should be considered to drive/manage the process. Understand the process for re-evaluation once initially identified issues are remediated. Manual code reviews are not sustainable for a portfolio this size. Low developer counts (fewer than 100) could indicate outsourced development.

QA/UAT
Question: Does your QA function execute edge/boundary value condition testing?
IRM Comment: A negative response is indicative of a control gap. Ask questions to understand what the vendor does do in this space and create an RP if necessary.

QA/UAT
Question: Do you use dynamic scanning against web apps while in the QA phase?
IRM Comment: Security evaluations for every major release demonstrate a high level of maturity. Combined with additional security monitoring, they may be effective at mitigating risk.

Penetration Testing
Question: How often do you perform pen testing of applications (not perimeter pen testing)?
IRM Comment: Security evaluations for every major release demonstrate a high level of maturity. Combined with additional security monitoring, they may be effective at mitigating risk.

Penetration Testing
Question: If internal, are the pen testers part of the development group?
IRM Comment: A negative response is indicative of a control gap. Ask questions to understand what the vendor does do in this space and create an RP if necessary.

Production
Question: Is vulnerability/security information found in operations or production shared with developers? Describe how.
IRM Comment: An affirmative response is indicative of some level of maturity. Ask questions to understand what they do and assign a level of maturity.
Earlier in the lifecycle, preventative controls are most effective. As an application migrates to production, detective
controls become more important. Some vendors may rely entirely on pen testing as an SDLC control. While this can be
effective in detecting vulnerabilities, it does nothing to prevent issues from being reintroduced (unless findings are
shared with the developers who introduced them). Understanding who performs the pen test is important (are they
qualified?). A higher number of releases amplifies the need for detective controls. Mature programs will contain a
mixture of preventative and detective controls for every release, ensuring that developer education is addressed.
Vendor attestation is never enough; always verify artifacts that support vendor responses. A lack of artifacts or
practices may require a "point in time assessment" to measure the security posture of a given application.
Architecture focuses primarily on preventative controls. Applications from vendors that score less than the minimum
maturity score in Development, QA/UAT, and Pen Testing may require RPs and/or point-in-time assessments (such as
binary analysis, dynamic analysis or a pen test).
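The threshold rule above can be operationalized by comparing each gated practice-area score against a minimum maturity score and flagging the follow-up action. The 0-3 scale, the threshold value, and the sample scores here are hypothetical, and the sketch assumes a shortfall in any one gated area triggers follow-up.

```python
# Illustrative sketch: flag applications for RPs and/or point-in-time
# assessments when a vendor scores below the minimum maturity score in
# Development, QA/UAT or Pen Testing. Scale and threshold are hypothetical.

MIN_MATURE_SCORE = 2  # assumed minimum on a 0-3 maturity scale

def follow_up_areas(scores):
    """Return the gated practice areas scoring below the minimum; each may
    warrant an RP or a point-in-time assessment (binary, dynamic, pen test)."""
    gated = ("Development", "QA/UAT", "Pen Testing")
    return [p for p in gated if scores.get(p, 0) < MIN_MATURE_SCORE]

vendor_scores = {"Architecture": 3, "Development": 1, "QA/UAT": 2, "Pen Testing": 0}
print(follow_up_areas(vendor_scores))  # flags Development and Pen Testing
```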
Architecture (AA1.1)
Architecture Analysis Activity: Perform security design/architecture/feature review.
IRM Comment: Threat modeling allows you to systematically identify and rate the threats that are most likely to affect your system. By identifying and rating threats based on a solid understanding of the architecture and implementation of your application, you can address threats with appropriate countermeasures in a logical order, starting with the threats that present the greatest risk. Threat modeling has a structured approach that is far more cost-efficient and effective than applying security features in a haphazard manner without knowing precisely what threats each feature is supposed to address.
Question: How do you identify the most critical applications/products for identifying risk?
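As a concrete illustration of rating threats so countermeasures can be applied in a logical order, a minimal sketch might score each identified threat by likelihood and impact and sort by the resulting risk. The 1-5 rating scale and the sample threats are generic assumptions, not a scheme prescribed by the vBSIMM.

```python
# Minimal threat-rating sketch: rank identified threats so countermeasures
# can start with the threats that present the greatest risk. The 1-5
# likelihood/impact scale and sample threats are generic assumptions.

def rank_threats(threats):
    """Sort threats by risk = likelihood * impact, highest risk first."""
    return sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True)

threats = [
    {"name": "SQL injection in login form", "likelihood": 4, "impact": 5},
    {"name": "Verbose error pages leak stack traces", "likelihood": 5, "impact": 2},
    {"name": "Weak TLS configuration", "likelihood": 2, "impact": 4},
]
for t in rank_threats(threats):
    print(t["name"], t["likelihood"] * t["impact"])
```

Even a crude ranking like this gives the review a defensible order of work, which is the structure the IRM comment contrasts with applying security features haphazardly.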
Development (CR1.4)
Code Review Activity: Use automated tools along with manual review.
IRM Comment: Source code review is one of the critical controls. Security code reviews focus on identifying insecure coding techniques and vulnerabilities that could lead to security issues. The cost and effort of fixing security flaws at development time is far less than fixing them later in the product deployment cycle. The use of an automated tool demonstrates maturity in the practice, since the tools are much more mature today and make the review process more consistent. Managing false positives from a source code tool is necessary for large-scale development work and requires expertise and effective practices. For example, using a process or function to interpret vulnerability information, or reducing the number of rules in the baseline rule set, are both techniques for managing false positives. Using a manual code review process for a small team may be effective as long as there is an experienced software security professional conducting the review. Manual code review is required for platforms not covered by source code static analysis tools.
Questions:
- Do you have a list of the most common vulnerabilities/bugs that need to be eliminated?
- How many applications do you perform secure code review for annually?
- Do you outsource any development? (Provide the name of the company and geographic location.)
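The rule-set reduction technique mentioned above can be sketched as a simple filter over static-analysis findings: only findings whose rule is in the agreed baseline are surfaced to developers immediately, and the rest are deferred for later review. The rule IDs and finding format below are hypothetical, not tied to any particular analyzer.

```python
# Illustrative sketch of "reducing the number of rules in the baseline rule
# set" to manage false positives: only findings matching the baseline are
# surfaced now. Rule IDs and the finding format are hypothetical.

BASELINE_RULES = {"sql-injection", "xss", "hardcoded-credential"}

def triage(findings, baseline=BASELINE_RULES):
    """Split raw analyzer output into findings to act on now (in the
    baseline rule set) and findings deferred for later review."""
    actionable = [f for f in findings if f["rule"] in baseline]
    deferred = [f for f in findings if f["rule"] not in baseline]
    return actionable, deferred

raw = [
    {"rule": "sql-injection", "file": "orders.java", "line": 42},
    {"rule": "unused-import", "file": "util.java", "line": 3},
    {"rule": "xss", "file": "search.jsp", "line": 17},
]
actionable, deferred = triage(raw)
print(len(actionable), len(deferred))  # 2 1
```

Starting from a small, high-confidence baseline and expanding it over time is one practical way to keep developer trust in the tool while the program matures.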
QA/UAT (ST1.1)
Security Testing Activity: Ensure QA supports edge/boundary value condition testing.
IRM Comment: The QA team goes beyond functional testing to perform basic adversarial tests. They probe simple edge cases and boundary conditions; no attacker skills are required to do this. A minimal practice is to conduct specific tests designed to uncover potential input/output vulnerabilities in an application. Test scripts used, or the output of tests designed to do edge/boundary condition testing, may be considered.
Questions:
- Does your QA function execute edge/boundary value condition testing?
- If no, is there any form of black box testing, or are there scripts specific to abuse cases that are used?
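The edge/boundary value condition testing described above can be sketched as a handful of adversarial assertions around an input validator's limits. The transfer-amount validator here is a hypothetical example of code under test, not something from the paper, but the pattern (just inside, on, and just outside each boundary, plus malformed input) is the ST1.1 activity in miniature.

```python
# Illustrative edge/boundary value test sketch for the ST1.1 activity.
# validate_amount is a hypothetical example of code under test.

def validate_amount(amount, limit=10_000):
    """Accept a transfer amount only if it is a positive number within limit."""
    return isinstance(amount, (int, float)) and 0 < amount <= limit

# Probe just inside, on, and just outside each boundary, plus malformed
# input that a purely functional test might never exercise.
assert validate_amount(1)            # smallest valid value
assert validate_amount(10_000)       # exactly on the upper boundary
assert not validate_amount(10_001)   # just past the boundary
assert not validate_amount(0)        # lower boundary is exclusive
assert not validate_amount(-5)       # negative input rejected
assert not validate_amount("100")    # wrong type rejected
```

Tests like these require no attacker skills, which is exactly why the activity is a reasonable minimal bar for a vendor's QA function.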
Penetration Testing (PT1.1)
Penetration Testing Activity: Use penetration testers to find problems.
IRM Comment: Penetration testing is a conventional security control and the one most widely used by software vendors.
Questions:
- How often do you perform pen testing of applications (not perimeter pen testing)?
- Who performs the pen tests?
- If internal, are the pen testers part of the development group?
- Do you use the same approach (tools, methods, time spent, etc.) on each application pen test?
- Do you test the complete production version of the application (not just certain components)?
- Do you currently have any unremediated pen test issues in the application under review?
- Do you pen test applications while authenticated?
- Is the pen testing environment production or production-like?
Production (CMVM1.1)
Configuration Management/Vulnerability Management Activity: Incident response and vulnerability management.
IRM Comment: This is often an initial point of identification of software vulnerabilities for less mature software security programs. When an incident is identified, what process is used to address it, and what is the notification process with clients? Does the incident response process drive prevention activities?
Question: Is vulnerability/security information found in operations or production shared with developers? (Describe how.)
Conclusion

Application Development Overview (vBSIMM) assessment summary:
- Name of the application
- What does the application do for our firm?
- ID
- Risk Ranking (H, M, L)
- Type of application (Web, Mobile, Client/Server, etc.)
- Application development language(s) (Java, .NET, iOS, etc.)
- Assessment Date/Location
- Enterprise Assessor(s)
- Conclusion
- Artifacts Reviewed
- RPs to be Created
Mark Connelly is Chief Information Security Officer for Thomson Reuters, reporting to James Powell, the company's CTO. Mark joined the company on February 15, and is responsible for establishing and maintaining a corporate-wide information risk management program; identifying, evaluating and reporting on information security risks that meet compliance and regulatory requirements; and working with the business units to implement practices that meet defined policies and standards for information security.

Mark most recently served as Chief Information Security Officer for ITT Corporation. Prior to that, he was Managing Director, Information Technology Risk and Security for Credit Suisse and Vice President and Chief Information Security Officer for Sun Microsystems. He began his career in systems engineering, progressing through a variety of management roles in technical operations and information technology.

Mark earned his Master of Science degree in Electrical Engineering from Washington University in St. Louis, Missouri. He earned his Master of Arts degree in Microbiology from the University of Missouri. His Bachelor of Arts degree is also from Washington University. As a Certified Information Security Manager, he also holds certifications in Governance of Enterprise IT and in Risk and Information Systems Control.

Mahi Dontamsetti is Global Head of IT Risk and Application Security at Depository Trust and Clearing Corporation (DTCC). Mahi joined DTCC from Barclays Capital, where he was the Global Head of Access Management & Entitlements. He has served on the board of the OWASP NY/NJ chapter and is the author of several books on wireless and information security. He has an M.S. in Computer Science and Telecommunications.

Paul Fulton, CISSP
Citigroup

Paul Fulton is head of Information Security Core Services at Citi. He oversees several global security programs including the Data Protection, Secure Development Lifecycle, Third Party Assessment and Application Security Management functions. Previously Paul has held management positions in information security at UBS, Deutsche Bank, and JP Morgan. His IT career started in application development for trading floor systems and he has held positions managing security architecture, security infrastructure deployment and as lead information security officer for major business units.
A diversified Fortune 200 company headquartered in McLean, Virginia, Capital One is the ninth largest bank in the United States, based on deposits, and one of the most widely recognized brands in America.

Before joining Capital One, Mr. Gordon was an Executive with Bank of America leading the Security, Authentication, Identity and Fraud team. Prior to Bank of America, Mr. Gordon held several technology and eCommerce related positions with both Fortune 500 and small firms.

Mr. Gordon has been a resident of Charlotte, North Carolina for over 10 years but commutes to Richmond, Virginia for his role with Capital One. A native of Indianapolis, Indiana, he completed his undergraduate degrees in Marketing and Mathematics at Anderson University in Anderson, Indiana.

During his tenure with JP Morgan Chase, Richard has had significant participation in several strategic initiatives, including the deployment of a global application security program. Also noteworthy is his participation in deploying one of the first online banking solutions (Wingspanbank.com) and the design of a contactless payments solution focused on the transit industry, which led to a U.S. patent being granted for one of his designs.

Richard is active in a variety of civic and cultural organizations, including Habitat for Humanity and participation on several community boards. Outside of professional interests, he enjoys restoring classic automobiles and scuba diving.