ICT system security notes
Lecturer Contact
Name: Martha Were
Cell Number: 0727911970
Email: marthawere@mmust.ac.ke
Time: Mondays 11:00 AM to 1:00 PM
Venue: LBB 011
Unit Description
This unit covers the competencies required to provide ICT security. They include identification of security
threats, installation of security control measures, implementation of security measures, testing of system
vulnerability and monitoring of the security system.
Reference materials
Manufacturers' manuals
TOPIC 1: IDENTIFY SECURITY THREATS
Content:
1.0 Definition of security threats
1.1 Categories of security threats
Internal
External
1.2 Importance of Computer Security to an Organization
1.3 Identification of Common threats
Fraud and theft
Employee sabotage
Loss of physical and infrastructure support
Malicious hackers and code
Industrial espionage
Threats to personal privacy
Natural Calamities
Cyber crime
1.4 Constraints to computer security
Cost
User responsibility
Integration challenges
Inadequate Assessment
Computer security is safety applied to computing devices such as computers and smartphones, as well as
computer networks such as private and public networks, including the whole Internet.
The field covers all the processes and mechanisms by which digital equipment, information and services are
protected from unintended or unauthorized access, change or destruction. It is of growing importance, in line
with the increasing reliance of most societies worldwide on computer systems.
It includes physical security to prevent theft of equipment, and information security to protect the data on that
equipment.
Some important terms used in computer security are:
A security attack is the act or attempt to exploit a vulnerability in a system.
Vulnerabilities are the gaps or weaknesses in a system that make threats possible and tempt
threat actors to exploit them.
Threats represent potential security harm to an asset when vulnerabilities are exploited.
Attacks are threats that have been carried out.
Security controls are the mechanisms used to control an attack.
Attacks can be classified into active and passive attacks.
Passive attacks – the attacker observes information (message contents and message traffic) without interfering
with the information, its flow, or the operation of the system.
Active attacks – involve more than observation of messages or information. The attacker interferes with the
traffic or message flow, which may include modification, deletion or destruction of messages. This may be
done through the attacker masquerading or impersonating another user. A related threat is repudiation, where
someone performs an action and later denies it; this is a threat against authentication and, to some extent, integrity.
Data security is the protection of data & information from accidental or intentional disclosure to unauthorized
persons
Private data or information is that which belongs to an individual & must not be accessed by or disclosed to any
other person, without direct permission from the owner.
Confidential data or information – this is data or information held by a government or organization about
people. This data/information may be seen by authorized persons without the
knowledge of the owner. However, it should not be used for commercial gain or any other unofficial purpose
without the owner being informed.
Computer crime: any crime that involves a computer and a network.
Security goals
To retain a competitive advantage and to meet basic business requirements, organisations must endeavour to
achieve the following security goals
The Information Security Triad: Confidentiality, Integrity, Availability (CIA)
Confidentiality – protect information value and preserve the confidentiality of sensitive data. Information
should not be disclosed without authorization. Information whose release is permitted only to a certain section
of the public should be identified and protected against unauthorised disclosure.
Integrity – ensure the accuracy and reliability of the information stored on the computer systems. Information
has integrity if it reflects, and remains consistent with, the real-world situation it describes. Information
should not be altered without authorisation. Hardware designed to perform certain functions has lost integrity
if it no longer performs those functions correctly; software has lost integrity if it does not perform according
to its specifications. Communication channels should relay messages securely to preserve their integrity, and
people should ensure the system functions according to its specifications.
Availability – ensure the continued availability of the information system and all its assets to legitimate users at
an acceptable level of service or quality of service. Any event that degrades
performance or quality of a system affects availability.
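The integrity goal is often enforced in practice with cryptographic hash functions: a digest of the data is stored, recomputed later, and any mismatch reveals alteration. A minimal Python sketch (the file contents here are made up for illustration):

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that acts as an integrity fingerprint for the data."""
    return hashlib.sha256(data).hexdigest()

original = b"Invoice total: KES 10,000"
fingerprint = file_fingerprint(original)

# Any alteration, however small, produces a completely different digest.
tampered = b"Invoice total: KES 90,000"
print(fingerprint == file_fingerprint(original))   # True: unchanged data matches
print(fingerprint == file_fingerprint(tampered))   # False: altered data does not
```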
External threats are risks that arise from outside the organization. They are often beyond the direct control of
the organization, making it essential to identify and prepare for them proactively.
Examples of External threats include:
Hackers
Viruses & Malware
Software, Music or Film pirates
Denial of Service Attacks
Fraudulent traders
Terrorists
Organized criminals
Money & Identity theft
Natural disasters
Sources of viruses.
1. Contact with contaminated systems:
If a diskette is used on a virus infected computer, it could become contaminated. If the same diskette is used on
another computer, then the virus will spread.
2. Use of pirated software:
Pirated software may be contaminated by a virus code or it may have been amended to perform some destructive
functions which may affect your computer.
3. Infected proprietary software:
A virus could be introduced when the software is being developed in laboratories, and then copied onto diskettes
containing the finished software product.
4. Fake games:
Some virus programs behave like games software. Since many people like playing games on computers, the
virus can spread very fast.
5. Freeware and Shareware:
Both freeware & shareware programs are commonly available in Bulletin board systems.
Such programs should first be used in a controlled environment until it is clear that they do not contain
a virus or destructive code.
6. Updates of software distributed via networks:
Virus programs can be spread through software distributed via networks.
Symptoms of viruses in a computer system.
The following symptoms indicate the presence of a virus in your computer:
• Boot failure.
• Files & programs disappearing mysteriously.
• Unfamiliar graphics or messages appearing on the screen, e.g., the virus might flash a harmless message such
as “Merry Christmas” on the computer terminal.
• Slow booting.
• Gradual filing of the free space on the hard disk.
• Corruption of files and programs.
• Programs taking longer than usual to load.
• Disk access time seeming too long for simple tasks.
• Unusual error messages occurring more frequently.
• Frequent read/write errors.
• Disk access lights turning on for non-referenced devices.
• Computer hangs anytime when running a program.
• Less memory available than usual, e.g., Base memory may read less than 640KB.
• Size of executable files changing for no obvious reason
Control measures against viruses.
• Install up-to-date (or the latest) antivirus software on the computers.
• Restrict the movement of foreign storage media, e.g., diskettes in the computer room. If they have to be used,
they must be scanned for viruses.
• Avoid opening mail attachments before scanning them for viruses.
• Write-protect disks after using them.
• Disable floppy disk drives, if there is no need to use disks in the course of normal operation.
• Backup all software & data files at regular intervals.
• Do not boot your computer from disks which you are not sure are free from viruses.
• Avoid pirated software. If possible, use the software from the major software houses.
• Programs downloaded from Bulletin Boards & those obtained from computer clubs should be carefully
evaluated & examined for any destructive code.
2. UNAUTHORIZED ACCESS
Data & information is always under constant threat from people who may want to access it without permission.
Such persons will usually have a bad intention, either to commit fraud, steal the information & destroy or corrupt
the data. Unauthorized access may take the following forms:
• Eavesdropping:
This is tapping into communication channels to get information, e.g., Hackers mainly use eavesdropping to
obtain credit card numbers.
• Surveillance (monitoring):
This is where a person may monitor all computer activities done by another person or people.
The information gathered may be used for different purposes, e.g.,for spreading propaganda or sabotage.
• Industrial espionage:
Industrial espionage involves spying on a competitor so as to get or steal information that can be used to finish
the competitor or for commercial gain. The main aim of espionage is to gain ideas with which to counter the
competitor, either by developing a similar approach or through sabotage.
• An employee who is not supposed to see some sensitive data gets it, either by mistake or design.
• Strangers who may stray into the computer room when nobody is using the computers.
• Forced entry into the computer room through weak access points.
• Network access in case the computers are networked & connected to the external world.
4. THEFT
The threat of theft of data & information, hardware & software is real. Some information is so valuable that
business competitors or some governments are willing to pay somebody a fortune to steal it for their use.
5.COMPUTER CRIMES
A computer crime is a deliberate theft or criminal destruction of computerized data.
• The use of computer hardware, software, or data for illegal activities, e.g., stealing, forgery, defrauding, etc.
• Committing of illegal acts using a computer or against a computer system.
Trespass.
• Trespass refers to the illegal physical entry to restricted places where computer hardware, software & backed
up data is kept.
• It can also refer to the act of accessing information illegally on a local or remote computer over a network.
Trespass is not allowed and should be discouraged.
Hacking.
Hacking is an attempt to invade the privacy of a system, either by tapping messages being transmitted along a
public telephone line, or through breaking security codes & passwords to gain unauthorized entry to the system
data and information files in a computer.
Piracy.
Piracy means making illegal copies of copyrighted software, data, or information either for personal use or for
re-sale.
Ways of reducing piracy:
Enact & enforce copyright laws that protect the owners of data & information against piracy.
Make software cheap enough to increase affordability.
Use licenses and certificates of authenticity to identify originals.
Set installation passwords that prevent illegal installation of software.
Fraud.
Fraud is the use of computers to conceal information or cheat other people with the intention of gaining money
or information. Fraud may take the following forms:
• Input manipulation:
Data input clerks can manipulate input transactions, e.g., they can create dummy (ghost) employees on the Salary
file or a ghost supplier on the Purchases file.
• Production & use of fake documents:
E.g., a person in a Tax department once created a program that credited his account with the odd cents
collected from all the taxpayers; he became very rich before he was discovered. Fraudsters can either be employees
in the company or outsiders who are smart enough to defraud unsuspecting people.
Sabotage.
Sabotage is the illegal or malicious destruction of the system, data or information by employees or other people
with grudges with the aim of crippling service delivery or causing great loss to an organization.
Sabotage is usually carried out by discontented employees or those sent by competitors to cause harm to the
organization.
The following are some acts of saboteurs which can result in great damage to the computer centres:
• Using Magnets to mix up (mess up) codes on tapes.
• Planting of bombs.
• Cutting of communication lines.
Alteration
Alteration is the illegal changing of stored data & information without permission, with the aim of personal
gain or of misinforming the authorized users.
Alteration is usually done by those people who wish to hide the truth. It makes the data irrelevant and unreliable.
Alteration may take place through the following ways:
• Program alteration:
This is done by people with excellent programming skills. They do this out of malice or they may liaise with
others for selfish gains.
• Alteration of data in a database:
This is normally done by authorized database users, e.g., one can adjust prices on Invoices, increase prices on
selling products, etc, and then pocket the surplus amounts.
Audit trails
This is a careful study of an information system by experts in order to establish (or, find out) all the weaknesses
in the system that could lead to security threats or act as weak access points for criminals.
An audit of the information system may seek to answer the following questions:
1. Is the information system meeting all the design objectives as originally intended?
2. Have all the security measures been put in place to reduce the risk of computer crimes?
3. Are the computers secured in physically restricted areas?
4. Is there backup for data & information of the system that can ensure continuity of services even when
something serious happens to the current system?
5. What real risks face the system at present or in the future?
Data encryption
Data being transmitted over a network faces the dangers of being tapped, listened to, or copied to unauthorized
destinations.
To protect such data, it is scrambled into a form that only the sender & the receiver can understand by
reconstructing the original message from the scrambled form. This is called data encryption.
The flow diagram below shows how a message can be encrypted and decrypted to enhance security.
The message to be encrypted is called the plaintext document. After encryption using a particular procedure
(or algorithm) together with an encryption key, it is sent over the network as ciphertext. After the recipient
receives the message, he/she decrypts it using the decryption key and the reverse of the encryption algorithm
to recover the original plaintext document.
This means that, without the decryption key, it is not possible to reconstruct the original message
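As a rough illustration of the encrypt/decrypt round trip described above, here is a toy Python cipher that XORs each byte of the plaintext with a repeating key. It is for teaching only and is not secure; real systems use vetted algorithms such as AES:

```python
def xor_crypt(message: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key recovers the original message."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

plaintext = b"MEET AT NOON"
key = b"secret"

ciphertext = xor_crypt(plaintext, key)   # sender encrypts with the shared key
recovered = xor_crypt(ciphertext, key)   # receiver reverses the operation

print(ciphertext != plaintext)  # True: the ciphertext hides the message
print(recovered == plaintext)   # True: only the key holder can reconstruct it
```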
Log files.
These are special system files that keep a record (log) of events on the use of the computers and resources of
the information system.
Each user is usually assigned a username & password or account. The information system administrator can
therefore easily track who accessed the system, when and what they did on the system.
This information can help monitor & track people who are likely to violate system security policies.
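A minimal sketch of such a log in Python, with hypothetical usernames and actions, might look like this:

```python
import datetime

def log_event(log: list, username: str, action: str) -> None:
    """Append a timestamped record of who did what to the log."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    log.append(f"{stamp} user={username} action={action}")

audit_log = []
log_event(audit_log, "jdoe", "LOGIN_SUCCESS")
log_event(audit_log, "jdoe", "OPENED payroll.xlsx")
log_event(audit_log, "guest", "LOGIN_FAILED")

# The administrator can later filter the log for suspicious activity.
failed = [entry for entry in audit_log if "LOGIN_FAILED" in entry]
print(len(failed))  # 1
```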
Firewalls
A Firewall is a device or software system that filters the data & information exchanged between different
networks by enforcing the access control policy of the host network.
A firewall monitors & controls access to or from protected networks. Remote users who do not have
permission cannot access the network, and users within the network cannot access outside sites restricted
by the firewall.
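The filtering idea can be sketched as a rule table checked in order, with a default-deny policy for anything not explicitly allowed (the rules and packets below are hypothetical):

```python
def is_allowed(packet: dict, rules: list) -> bool:
    """Return True if the first matching rule permits the packet; otherwise default deny."""
    for rule in rules:
        if packet["port"] == rule["port"] and packet["direction"] == rule["direction"]:
            return rule["action"] == "allow"
    return False  # anything not explicitly allowed is blocked

rules = [
    {"direction": "inbound", "port": 443, "action": "allow"},  # permit HTTPS traffic in
    {"direction": "inbound", "port": 23,  "action": "deny"},   # block Telnet
]

print(is_allowed({"direction": "inbound", "port": 443}, rules))   # True
print(is_allowed({"direction": "inbound", "port": 23}, rules))    # False
print(is_allowed({"direction": "inbound", "port": 8080}, rules))  # False (default deny)
```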
COMPUTER SECURITY.
What is Computer security?
• Safeguarding the computer & the related equipment from the risk of damage or fraud.
• Protection of data & information against accidental or deliberate threats which might cause unauthorised
modification, disclosure, or destruction.
A computer system can only be claimed to be secure if precautions are taken to safeguard it against damage
or threats such as accidents, errors & omissions.
The security measures to be undertaken by the organization should be able to protect:
1. Computer hardware against damage.
2. Data, information & programs against accidental alteration or deletion.
Fire
Fire destroys data, information, software & hardware.
Security measures against fire:
• Use fire-proof cabinets & lockable metal boxes for floppy disks.
• Use of backups.
• Install firefighting equipment, e.g., fire extinguishers.
• Have some detectors.
• Training of fire-fighting officers.
• Observe safety procedures, e.g., avoid smoking in the computer rooms.
• Have well placed exit signs.
• Contingency plans
Terrorist attack.
This includes threats from:
• Political terrorists,
• Criminals,
• Individuals with grudges, or
• People intending to cause general destruction.
Security measures:
• Hiring of security guards to control physical access to the building housing the computer room.
• Activities that can cause terrorism should be avoided, e.g., exploitation of workers.
• Have double door & monitoring devices.
• Use of policies.
• System auditing / use of log files.
• Use of passwords.
• Punitive measures.
• Encryption of data.
• Use of firewalls.
• Consult & co-operate with the Police and Fire authorities on potential risks.
People threats include:
• Accidental deletion of data, information or programs.
• Vandalism, i.e., theft or destruction of data, information or programs & hardware.
• Piracy of copyrighted data & software.
2. Computer viruses:
A computer virus destroys all the data files & programs in the computer memory by interfering with the
normal processes of the operating system.
Precautions against computer viruses:
1. Anti-virus software.
Use Antivirus software to detect & remove known viruses from infected files.
Some of the commonly used Antivirus software are: Dr. Solomon’s Toolkit, Norton Antivirus, AVG
Antivirus, PC-Cillin, etc
NB: The best way to prevent a virus is to have memory-resident antivirus software, which will detect the
virus before it can affect the system. This can be achieved by installing a GUARD program in the RAM
every time the computer boots up. Once in the RAM, the antivirus software will automatically check
diskettes inserted in the drives & warn the user immediately if a disk is found to have a virus.
• For an antivirus to be able to detect a virus, it must know its signature. Since virus writers keep writing new
viruses with new signatures all the time, it is recommended that you update your antivirus product regularly
so as to include the latest virus signatures in the industry.
• The Antivirus software installed in your computer should be enabled/activated at all times.
• You should also perform virus scans of your disks on a regular basis.
• Evaluate the security procedures to ensure that the risk of future virus attack is minimized.
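Signature-based detection can be illustrated with a short Python sketch: the scanner searches file contents for byte patterns from a signature database. The signatures and virus names below are invented for illustration:

```python
# Hypothetical signature database: byte patterns known to occur in specific viruses.
SIGNATURES = {
    "DemoVirus.A": b"\xde\xad\xbe\xef",
    "DemoVirus.B": b"EVIL_PAYLOAD",
}

def scan(data: bytes) -> list:
    """Return the names of any known virus signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

clean_file = b"ordinary document contents"
infected_file = b"header" + b"\xde\xad\xbe\xef" + b"footer"

print(scan(clean_file))     # []
print(scan(infected_file))  # ['DemoVirus.A']
```

This is also why signature updates matter: a pattern missing from the database is simply never matched.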
2. Use of Backups.
All data must be backed up regularly. In addition, all application programs & operating system software
should also be kept safely so that in case of a complete system crash, everything can be reinstalled/restored.
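A simple full backup can be sketched in Python with the standard tarfile module; the directory and file names here are placeholders:

```python
import pathlib
import tarfile
import tempfile

def back_up(source_dir: str, archive_path: str) -> None:
    """Write a compressed archive of source_dir -- a simple full backup."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=pathlib.Path(source_dir).name)

# Demonstration with a throwaway directory standing in for real data files.
with tempfile.TemporaryDirectory() as workspace:
    data_dir = pathlib.Path(workspace) / "data"
    data_dir.mkdir()
    (data_dir / "records.txt").write_text("important records")

    archive = pathlib.Path(workspace) / "backup.tar.gz"
    back_up(str(data_dir), str(archive))

    with tarfile.open(archive) as tar:
        print(sorted(tar.getnames()))  # ['data', 'data/records.txt']
```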
3. Use of Recovery tools.
System tools such as Norton Utilities, PC Tools, QAPlus, etc can be used to revive a disk that has crashed.
Unauthorised access:
Unauthorised access refers to access to data & information without permission.
Computer criminals can do the following harms:
• Steal large amounts of funds belonging to various companies by transferring them out of their computer
accounts illegally.
• Steal or destroy data & information from companies, bringing their operations to a standstill.
• Spread destruction from one computer to another using virus programs. This can cripple the entire system of
computer networks.
• Spread computer worm programs. Worm programs are less harmful at first, but as they replicate they consume
resources and can render the computer almost useless in the long run.
The 7 layers of cyber security should centre on the mission critical assets you are seeking to
protect.
1: Mission Critical Assets – This is the data you need to protect
2: Data Security – Data security controls protect the storage and transfer of data.
3: Application Security – Applications security controls protect access to an application, an
application’s access to your mission critical assets, and the internal security of the
application.
4: Endpoint Security – Endpoint security controls protect the connection between devices and
the network.
5: Network Security – Network security controls protect an organization’s network and
prevent unauthorized access of the network.
6: Perimeter Security – Perimeter security controls include both the physical and digital
security methodologies that protect the business overall.
7: The Human Layer – Humans are the weakest link in any cyber security posture. Human
security controls include phishing simulations and access management controls that protect
mission critical assets from a wide variety of human threats, including cyber criminals,
malicious insiders, and negligent users.
2. User responsibility
Explanation:
Users play a critical role in maintaining security by following best practices, such as creating strong passwords,
not sharing credentials, and being cautious about phishing attempts. However, user negligence or lack of
awareness can pose significant security risks.
Users may unintentionally compromise security through actions like clicking on malicious links, downloading
infected files, or using weak passwords.
Impact:
Even with advanced security measures in place, a single careless or uninformed user action can lead to security
breaches. Education and training programs are essential to improve user awareness and responsibility.
Mitigation:
Conduct regular security awareness training for users.
Implement strong authentication mechanisms and enforce password policies.
Foster a security-conscious culture within the organization.
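A password policy of the kind mentioned above can be enforced programmatically. This sketch assumes a simple policy (minimum length plus character-class rules); real policies vary by organization:

```python
import re

def password_ok(password: str, min_length: int = 8) -> bool:
    """Enforce a simple policy: minimum length plus upper case, lower case,
    a digit, and a symbol."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),      # at least one upper-case letter
        re.search(r"[a-z]", password),      # at least one lower-case letter
        re.search(r"[0-9]", password),      # at least one digit
        re.search(r"[^A-Za-z0-9]", password),  # at least one symbol
    ]
    return all(checks)

print(password_ok("password"))     # False: no upper case, digit, or symbol
print(password_ok("S3cure!Pass"))  # True: meets every rule
```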
3. Integration challenges
Explanation:
Many organizations use a variety of hardware, software, and services from different vendors. Ensuring seamless
integration and compatibility among these diverse components can be challenging.
Integration difficulties may arise when trying to implement a unified security framework that covers various
platforms and technologies.
Impact:
Incomplete integration may result in security gaps or blind spots. It could hinder the organization's ability to
detect and respond to security incidents across its entire technology landscape.
Mitigation:
Choose security solutions that are designed for interoperability.
Implement a comprehensive security architecture that can adapt to diverse technologies.
Regularly update and patch systems to ensure compatibility.
4. Inadequate Assessment
Explanation:
Regular assessments and audits are crucial for identifying vulnerabilities and weaknesses in an organization's
security infrastructure. Inadequate or infrequent assessments can result in a lack of awareness about potential
risks.
Organizations may underestimate the evolving nature of cyber threats and fail to keep their security measures up
to date.
Impact:
Without proper assessments, organizations may not be aware of vulnerabilities until after a security incident
occurs. This can lead to unauthorized access, data breaches, or disruptions to services.
Mitigation:
Conduct regular security assessments and penetration testing.
Stay informed about the latest cybersecurity threats and vulnerabilities.
Implement a continuous monitoring and incident response program.
Policy:
The policy statement should specify the following:
The organization's goals on security. For example, should the system protect data from leakage to
outsiders, protect against loss of data due to physical disaster, protect the data's integrity, or protect
against loss of business when computing resources fail? What is the higher priority: serving customers or
securing data?
Where the responsibility for security lies. For example, should the responsibility rest with a small
computer security group, with each employee, or with relevant managers?
The organization's commitment to security. For example, who provides security support for staff, and
where does security fit into the organization's structure?
Timetable:
A comprehensive security plan cannot be executed instantly. The security plan includes a timetable that shows
how and when the elements of the plan will be performed. These dates also give milestones so that management
can track the progress of implementation.
Continuing Attention:
Good intentions are not enough when it comes to security. We must not only take care in defining requirements
and controls, but we must also find ways for evaluating a system's security to be sure that the system is as secure
as we intend it to be. Thus, the security plan must call for reviewing the security situation periodically. As users,
data, and equipment change, new exposures may develop. In addition, the current means of control may become
obsolete or ineffective (such as when faster processor times enable attackers to break an encryption algorithm).
The inventory of objects and the list of controls should periodically be scrutinized and updated, and risk analysis
performed anew.
Security Planning Team Members:
The membership of a computer security planning team must somehow relate to the different aspects of computer
security described in this book. Security in operating systems and networks requires the cooperation of the
systems administration staff. Program security measures can be understood and recommended by applications
programmers. Physical security controls are implemented by those responsible for general physical security, both
against human attacks and natural disasters. Finally, because controls affect system users, the plan should
incorporate users' views, especially with regard to usability and the general desirability of controls. Thus, no
matter how it is organized, a security planning team should represent each of the following groups.
Computer hardware group
System administrators
Systems programmers
Applications programmers
Data entry personnel
Physical security personnel
Representative users
In some cases, a group can be adequately represented by someone who is consulted at appropriate times, rather
than a committee member from each possible constituency being enlisted.
Assuring Commitment To a security plan:
After the plan is written, it must be accepted and its recommendations carried out. Acceptance by the
organization is key; a plan that has no organizational commitment is simply a plan that collects dust on the shelf.
Commitment to the plan means that security functions will be implemented and security activities carried out.
Three groups of people must contribute to making the plan a success.
The planning team must be sensitive to the needs of each group affected by the plan.
Those affected by the security recommendations must understand what the plan means for the way they
will use the system and perform their business activities. In particular, they must see how what they do
can affect other users and other systems.
Management must be committed to using and enforcing the security aspects of the system
Management commitment is obtained through understanding. But this understanding is not just a function of
what makes sense technologically; it also involves knowing the cause and the potential effects of lack of
security. Managers must also weigh trade-offs in terms of convenience and cost. The plan must present a picture
of how cost effective the controls are, especially when compared to potential losses if security is breached
without the controls. Thus, proper presentation of the plan is essential, in terms that relate to management as well
as technical concerns.
Management is often reluctant to allocate funds for controls until the value of those controls is explained. As we
note in the next section, the results of a risk analysis can help communicate the financial tradeoffs and benefits of
implementing controls. By describing vulnerabilities in financial terms and in the context of ordinary business
activities (such as leaking data to a competitor or an outsider), security planners can help managers understand
the need for controls.
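One common way to express vulnerabilities in financial terms is annualized loss expectancy (ALE): the cost of a single loss multiplied by how often it is expected per year. The figures below are hypothetical:

```python
def annualized_loss(asset_value: float, exposure_factor: float,
                    occurrences_per_year: float) -> float:
    """ALE = single loss expectancy (asset value x exposure factor) x annual rate."""
    return asset_value * exposure_factor * occurrences_per_year

# Hypothetical figures: a breach exposes 40% of a 2,000,000 asset, twice a year.
ale_without_control = annualized_loss(2_000_000, 0.4, 2)    # 1,600,000 per year
ale_with_control = annualized_loss(2_000_000, 0.4, 0.1)     # 80,000 per year

control_cost = 200_000
savings = ale_without_control - ale_with_control - control_cost
print(savings)  # 1320000.0 -- the control pays for itself
```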
The plans we have just discussed are part of normal business. They address how a business handles computer
security needs. Similar plans might address how to increase sales or improve product quality, so these planning
activities should be a natural part of management. Next we turn to two particular kinds of business plans that
address specific security problems: coping with and controlling activity during security incidents.
Business Continuity Plan:
A business continuity plan documents how a business will continue to function during a computer security
incident. An ordinary security plan covers computer security during normal times and deals with protecting
against a wide range of vulnerabilities from the usual sources.
A business continuity plan deals with situations having two characteristics:
Catastrophic situations, in which all or a major part of a computing capability is suddenly unavailable
Long duration, in which the outage is expected to last for so long that business will suffer. There are
many situations in which a business continuity plan would be helpful. Here are some examples that typify
what you might find in reading your daily newspaper:
A fire destroys a company's entire network.
A seemingly permanent failure of a critical software component renders the computing system
unusable.
A business must deal with the abrupt failure of its supplier of electricity, telecommunications,
network access, or other critical service.
A flood prevents the essential network support staff from getting to the operations center.
The key to coping with such disasters is advance planning and preparation, identifying activities that will keep a
business viable when the computing technology is disabled.
The steps in business continuity planning are these:
Assess the business impact of a crisis.
Develop a strategy to control the impact.
Develop and implement a plan for the strategy.
Incident response plan: An incident response plan should:
define what constitutes an incident
identify who is responsible for taking charge of the situation
describe the plan of action
While defining the problem, research areas that will help confirm its business impact. Describe the who, what,
when, where, and why of the issue. Clearly outline the expected outcome, often called a "should be" statement.
For example, in a corrective action request form, if you receive the wrong batch of parts from a supplier, the
comparable statements might be: "We received steel parts," and "the parts should be made of copper."
If you find it hard to define the expected outcome, the issue may not warrant a corrective action plan.
Next, prioritize the problem based on its scope. For example, an issue that impacts your organization's entire
supply chain or happens every day, like a hazardous leak in a pipeline, is more crucial to address than one that
occurs occasionally or affects just one particular transaction. Address every problem, but prioritize it by scope
to judge whether it requires immediate attention.
Implement containment actions to take care of the most pressing symptoms. Perform checks and measures to
catch and fix the surface-level issues while your team addresses the source of the problem.
Containment alone does not fix the underlying problem, so it is essential to be cautious and use established root
cause analysis techniques to ensure that you have correctly identified the underlying issue.
Popular techniques include the "5 Whys" method, which involves asking "why" five times, and the more
complex Ishikawa or fishbone diagram.
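The "5 Whys" chain can be sketched as a simple walk over recorded answers; the incident and answers below are invented for illustration:

```python
def five_whys(problem: str, answers: list) -> str:
    """Walk a recorded chain of 'why?' answers (up to five);
    the final answer is treated as the root cause."""
    cause = problem
    for answer in answers[:5]:
        print(f"Why did '{cause}' happen? -> {answer}")
        cause = answer
    return cause

chain = [
    "the antivirus signatures were out of date",
    "automatic updates were disabled",
    "the update server was blocked by the firewall",
    "the firewall rules were never reviewed",
    "no one owns the firewall review process",
]
root = five_whys("a virus infected the payroll server", chain)
print(root)  # no one owns the firewall review process
```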
Step 5: Plan Corrective Actions to Fix the Root Cause
Once you have detected the root cause of the problem, it is time to create a plan to address it.
Create SMART (specific, measurable, achievable, realistic, and time-bound) goals and allot feasible deadlines.
Make sure these goals or solutions are centered around the root cause, detailing every step necessary to eliminate
the underlying cause of a problem.
Depending on the extent of the problem, you may also need to provide a cost and return on investment analysis
and get formal management approval for funding before you start the corrective action procedure.
Make your corrective action plan more manageable by providing a list of who will be responsible, how they
should report their progress, and to whom. Also, note anticipated due dates and time frames that they should
keep in mind while reporting.
Corrective actions may be as simple as replacing a faulty piece of equipment or updating old software. The CAP
can also involve more complex processes like hiring and training outside consultants to manage risks.
Be thorough with every aspect of your corrective action plan and regularly communicate your progress with all
the relevant stakeholders.
Follow up after an appropriate time to check that the corrective action plan resolved the problem. If not, dig
deeper and repeat the process until you address all the underlying causes of the problem. Continue documenting
any lessons learned to help address similar issues in the future.
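The tracking of owners, due dates, and reporting lines described above can be modeled with a simple record structure. This is a minimal sketch; the field names and sample actions are illustrative, not a standard CAP schema.

```python
# A minimal sketch of an in-memory corrective action plan (CAP) tracker.
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    description: str   # what must be done to remove the root cause
    owner: str         # who is responsible for the action
    due: date          # anticipated completion date
    reports_to: str    # to whom progress is reported
    done: bool = False

def overdue(actions, today):
    """Return open actions whose due date has passed, for follow-up."""
    return [a for a in actions if not a.done and a.due < today]

plan = [
    CorrectiveAction("Replace faulty batch sensor", "J. Otieno", date(2024, 3, 1), "QA lead"),
    CorrectiveAction("Retrain receiving staff", "M. Achieng", date(2024, 4, 15), "QA lead", done=True),
]
print([a.description for a in overdue(plan, date(2024, 3, 10))])
```

Listing overdue open actions at each review meeting is one simple way to implement the follow-up step.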
A security audit typically covers:
Physical components of your information system and the environment in which the information system
is housed.
Applications and software, including security patches your systems administrators have already
implemented.
Network vulnerabilities, including public and private access and firewall configurations.
The human dimension, including how employees collect, share, and store highly sensitive information.
The organization’s overall security strategy, including security policies, organization charts, and risk
assessments.
Security audits come in two forms, internal and external, which involve the following procedures:
Internal audits. In these audits, a business uses its own resources and internal audit department. Internal
audits are used when an organization wants to validate business systems for policy and procedure
compliance.
External audits. With these audits, an outside organization is brought in to conduct the audit. External audits
are typically conducted when an organization needs to confirm it is conforming to industry standards or
government regulations.
1. ISO 27001
There are many ways to improve your information security posture, but the ISO 27001 standard provides a
framework of best practices that can make it easier for your organization to identify, analyze, and manage the
risks to its information assets.
2. PCI DSS
The PCI Security Standards Council (PCI SSC) maintains the PCI DSS, the de facto global standard for
organizations that handle credit card information. The PCI DSS applies to any organization that stores,
processes, or transmits cardholder data, which includes the primary account number (PAN) together with the
cardholder name, expiration date, and service code.
3. NIST CSF
The NIST CSF is a voluntary, risk-based approach to cybersecurity and offers flexible and repeatable processes
and controls tailored to an organization’s needs. The NIST CSF is a set of standards and guidelines that federal
agencies can use to comply with the Federal Information Security Modernization Act (FISMA).
4. SOC 2
SOC 2 is an auditing procedure that ensures your service providers securely manage your data to protect the
interests of your organization and the privacy of its clients. This compliance is necessary to meet the standards of
your organization’s clients and to stay compliant with the industry standards.
SOC 2 compliance ensures the security of your company’s information assets and protects the interests of your
organization. It is a certification of trust, which says that your company protects the type of information that is
considered personal and private. SOC 2 is one of the most widely used standards for third-party service
providers, and is an absolute must for any organization that is looking to be compliant with the industry
standards.
5. HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that requires covered entities
to protect the confidentiality, integrity, and availability of electronic health information that they create, receive,
maintain, or transmit.
HIPAA protects the privacy and security of health information and sets national standards for how health care
providers, health plans, and health care clearinghouses and their business associates must work together and with
covered entities to ensure the safety and privacy of personal health information.
How to perform a Security audit to identify security gaps
Performing a security audit is essential to identify security gaps and vulnerabilities within an organization's
systems, processes, and policies.
1. Define Scope and Objectives: Determine the scope of the security audit, including the systems, networks,
applications, and policies that will be assessed. Clearly define the objectives of the audit, such as identifying
vulnerabilities, assessing compliance with security standards, or evaluating the effectiveness of security controls.
2. Gather Information: Collect relevant documentation, including security policies, procedures, network
diagrams, system configurations, and previous audit reports. Obtain information about the organization's IT
infrastructure, including hardware and software assets, network architecture, and data flow.
3. Risk Assessment: Conduct a comprehensive risk assessment to identify potential threats, vulnerabilities, and
risks to the organization's assets. Evaluate the likelihood and impact of security incidents and prioritize areas for
further investigation based on the level of risk.
4. Compliance Check: Assess compliance with relevant laws, regulations, and industry standards (such as GDPR,
HIPAA, PCI DSS, ISO 27001, etc.) to ensure that the organization's security practices meet legal and regulatory
requirements.
5. Technical Testing: Perform technical testing to identify vulnerabilities and weaknesses in the organization's
systems and networks. This may include vulnerability scanning, penetration testing, configuration audits, and
web application security testing.
6. Physical Security Assessment: Evaluate physical security measures such as access controls, surveillance
systems, and environmental controls (e.g., temperature, humidity) to prevent unauthorized access to sensitive
areas and assets.
7. Policy and Procedure Review: Review security policies, procedures, and guidelines to ensure they are
comprehensive, up-to-date, and effectively enforced. Evaluate adherence to security best practices and assess the
effectiveness of security awareness training programs.
8. Interviews and Observations: Conduct interviews with key stakeholders, system administrators, and end-users
to gather insights into security practices, challenges, and concerns. Observe security practices in action to
identify potential weaknesses or areas for improvement.
9. Documentation Review: Review documentation related to incident response procedures, security incident logs,
change management records, and audit trails to assess the organization's ability to detect, respond to, and recover
from security incidents.
10. Analysis and Reporting: Analyze the findings from the security audit, including identified vulnerabilities,
compliance issues, and security gaps. Prepare a comprehensive audit report that outlines the findings,
recommendations for remediation, and an action plan for addressing identified security issues.
11. Remediation Planning: Develop a remediation plan that prioritizes security issues based on their severity and
potential impact on the organization. Assign responsibilities for implementing remediation actions and establish
timelines for completion.
12. Follow-Up and Verification: Monitor the implementation of remediation actions and verify that security issues
are effectively addressed. Conduct follow-up assessments and periodic reviews to ensure ongoing compliance
with security standards and continuous improvement of security posture.
2. Table of Contents
The table of contents is an essential part of the audit report. It provides a quick and convenient way to find the
most important information in the report.
The table of contents is especially useful in large and detailed audit reports. It helps the reader quickly locate
detailed information, such as the auditor’s name, the scope of the audit, the date of the audit, and the number of
pages in the audit report.
3. Scope of Audit
The scope of the audit is a broad description of what the audit covers, much like the scope of work in a
contract. In the scope statement, the auditor and other stakeholders identify the systems and work included so
that the audit accomplishes its purpose.
4. Description
The description section in the security audit report is the detailed technical description of each security risk
found.
5. Recommendations
The recommendation section contains details about the fix or patch that needs to be done to mitigate the security
risk. Here, the fix depends on the type of security vulnerability.
For example, developers can mitigate an XSS vulnerability by escaping or encoding output characters and by
using a WAF, whereas an XSS flaw introduced by an outdated version of jQuery is prevented by upgrading the
library.
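Output escaping, the first XSS mitigation mentioned above, can be illustrated with Python's standard library. This is a generic sketch, not tied to any particular web framework; the attack string is a textbook example.

```python
# A minimal sketch of output encoding as an XSS mitigation.
import html

user_input = '<script>alert("xss")</script>'   # hostile input from an attacker
safe = html.escape(user_input)                  # &, <, >, and quotes become entities
print(safe)
```

After escaping, the browser renders the payload as inert text instead of executing it as script.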
6. References
Cite sources of information, including regulations, standards, guidelines, and industry best practices
referenced in the audit report.
Topic content:
Definition of vulnerability
System testing schedule
Levels of system vulnerability
Ethical penetration
System vulnerability test report
Definition of vulnerability
Vulnerability in security refers to a weakness in an information system that cybercriminals can exploit to gain
unauthorized access to a computer system. Vulnerabilities weaken systems and open the door to malicious
attacks.
What Is the Difference Between Vulnerability and Risk?
Vulnerabilities and risks differ in that vulnerabilities are known weaknesses. They’re the identified gaps that
undermine the security efforts of an organization’s IT systems.
Risks, on the other hand, are potentials for loss or damage when a threat exploits a vulnerability.
Common types of security vulnerabilities include:
1. Network vulnerabilities are weaknesses within an organization’s hardware or software infrastructure that
allow cyberattackers to gain access and cause harm. These areas of exposure can range from poorly protected
wireless access all the way to misconfigured firewalls that don’t guard the network at large.
2. Operating system (OS) vulnerabilities are exposures within an OS that allow cyberattackers to cause
damage on any device where the OS is installed. An example of an attack that takes advantage of OS
vulnerabilities is a Denial of Service (DoS) attack, where repeated fake requests clog a system so it becomes
overloaded. Unpatched and outdated software also creates OS vulnerabilities, because the system running the
application is exposed, sometimes endangering the entire network.
3. Process vulnerabilities are created when procedures that are supposed to act as security measures are
insufficient. One of the most common process vulnerabilities is an authentication weakness, where users, and
even IT administrators, use weak passwords.
4. Human vulnerabilities are created by user errors that can expose networks, hardware, and sensitive data to
malicious actors. They arguably pose the most significant threat, particularly because of the increase in
remote and mobile workers. Examples of human vulnerability in security are opening an email attachment
infected with malware, or not installing software updates on mobile devices.
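The authentication weakness described under process vulnerabilities (item 3 above) can be screened for with a simple policy checker. This is a minimal sketch; the 12-character threshold and the tiny common-password list are illustrative assumptions, not a complete policy.

```python
# A minimal sketch of a weak-password policy check.
import re

COMMON = {"password", "letmein", "admin123"}   # illustrative deny-list

def weak_reasons(password):
    """Return a list of policy rules the password fails (empty list = passes)."""
    reasons = []
    if len(password) < 12:
        reasons.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", password):
        reasons.append("no uppercase letter")
    if not re.search(r"\d", password):
        reasons.append("no digit")
    if password.lower() in COMMON:
        reasons.append("on common-password list")
    return reasons

print(weak_reasons("admin123"))   # a typical weak admin password fails several rules
```

Running such a check against existing accounts is one way an auditor can quantify this process vulnerability.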
Common causes of security vulnerabilities include:
1. Human error – When end users fall victim to phishing and other social engineering tactics, they become one
of the biggest causes of vulnerabilities in security.
2. Software bugs – These are flaws in code that cybercriminals can exploit to gain unauthorized access to
hardware, software, data, or other assets in an organization’s network, view sensitive data, and perform
unauthorized actions.
3. System complexity – When a system is too complex, it causes vulnerability because there’s an increased
likelihood of misconfigurations, flaws, or unwanted network access.
4. Increased connectivity – Having so many remote devices connected to a network creates new access points
for attacks.
5. Poor access control – improperly managing user roles, like providing some users more access than they need
to data and systems or not closing accounts for old employees, makes networks vulnerable from both inside
and outside breaches.
Vulnerability Testing
Vulnerability testing is a process of evaluating and identifying security weaknesses in a computer system,
network, or software application. It involves systematically scanning, probing, and analyzing systems and
applications to uncover potential vulnerabilities, such as coding errors, configuration flaws, or outdated software
components.
1. Active Testing
Active testing is a vulnerability testing method in which testers interact directly with the target system, network,
or application to identify potential security weaknesses. It typically involves sending inputs, requests, or packets
to the target and analyzing the responses to discover vulnerabilities.
Active testing can be intrusive and may cause disruptions or performance issues in the target system, but it is
usually more effective in finding vulnerabilities than passive testing. Examples of active testing include:
Port scanning to identify open ports and services running on a network.
Fuzz testing, which involves sending malformed or unexpected inputs to applications to discover
vulnerabilities related to input validation and error handling.
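The port-scanning example above can be sketched with a basic TCP connect scan. This is a minimal illustration using only the standard library; only scan hosts you are authorized to test, and the ports listed are arbitrary examples.

```python
# A minimal sketch of active testing: a TCP connect scan.
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example: probe a few common service ports on the local machine.
print(scan("127.0.0.1", [22, 80, 443]))
```

A real scanner such as Nmap adds service fingerprinting and stealthier probe types, but the core idea is this connect-and-check loop.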
2. Passive Testing
Passive testing is a non-intrusive vulnerability testing method that involves observing and analyzing the target
system, network, or application without directly interacting with it. Passive testing focuses on gathering
information about the target, such as network traffic, configuration settings, or application behavior, to identify
potential vulnerabilities.
This method is less likely to cause disruptions or performance issues but may be less effective in finding
vulnerabilities compared to active testing. Examples of passive testing include:
Traffic monitoring to identify patterns or anomalies that may indicate security weaknesses.
Configuration reviews to assess security settings and identify misconfigurations.
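A configuration review, the second passive technique above, can be sketched as a parser that flags risky settings in a config file. The sshd_config-style rules below are illustrative assumptions; a real review would check a much larger benchmark of settings.

```python
# A minimal sketch of passive testing: flag risky sshd_config-style settings.
RISKY = {"permitrootlogin": "yes", "passwordauthentication": "yes"}  # illustrative rules

def review(config_text):
    """Return a list of directive/value pairs that match the risky-settings table."""
    findings = []
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts[0].lower(), parts[1].strip().lower()
        if RISKY.get(key) == value:
            findings.append(f"{parts[0]} {value}")
    return findings

sample = """
PermitRootLogin yes
PasswordAuthentication no
Port 22
"""
print(review(sample))
```

Because the review only reads configuration text, it never touches the running service, which is what makes it passive.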
3. Network Testing
Network testing is a vulnerability testing method focused on identifying security weaknesses in network
infrastructure, including devices, protocols, and configurations. It aims to discover vulnerabilities that could
allow unauthorized access, eavesdropping, or Denial of Service (DoS) attacks on the network.
Network testing typically involves both active and passive testing techniques, such as port scans and traffic
monitoring, to evaluate the network’s security posture comprehensively.
4. Distributed Testing
Distributed testing is a vulnerability testing method that involves using multiple testing tools or systems, often
deployed across different locations, to scan and analyze the target system, network, or application for
vulnerabilities.
This approach can help provide a more comprehensive view of the target’s security posture, as it helps identify
vulnerabilities that may be visible only from specific locations or under specific conditions. Distributed testing
can also help distribute the load of vulnerability testing, reducing the impact on the target system and increasing
the efficiency of the testing process.
Vulnerability testing tools are software applications or services designed to help organizations identify and
assess security weaknesses in their systems, networks, or applications. These tools automate the process of
vulnerability testing, making it more efficient, accurate, and consistent.
Network vulnerability scanners: These tools scan networks for open ports, misconfigurations, and other
security weaknesses.
Web application vulnerability scanners: These tools are specifically designed to identify vulnerabilities
in web applications, such as SQL injection, cross-site scripting (XSS), and broken authentication.
Static application security testing (SAST) tools: Designed to analyze source code or compiled code to
identify potential security vulnerabilities without executing the application.
Dynamic application security testing (DAST) tools: Built to interact with running applications to
identify security weaknesses during runtime.
Fuzz testing tools: Generate and send malformed or unexpected inputs to applications to identify
vulnerabilities related to input validation and error handling.
Configuration management and compliance tools: These tools assess system and application
configurations against established security best practices or compliance standards, such as CIS
Benchmarks or PCI DSS.
Container and cloud security tools: These tools focus on identifying vulnerabilities and
misconfigurations in cloud-based environments and containerized applications.
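The fuzz testing tools listed above automate one simple idea: throw malformed inputs at a target and record which ones crash it. A minimal sketch follows; the deliberately fragile `parse_age` target is invented for illustration, and the fixed seed keeps the run repeatable.

```python
# A minimal sketch of fuzz testing against a fragile parser.
import random, string

def parse_age(text):
    # Deliberately fragile target: no input validation at all.
    return int(text)

def fuzz(target, trials=200, seed=1):
    """Feed random printable strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)   # fixed seed for a repeatable demonstration
    crashes = []
    for _ in range(trials):
        s = "".join(rng.choice(string.printable) for _ in range(rng.randint(0, 8)))
        try:
            target(s)
        except ValueError:
            crashes.append(s)   # each crashing input is a candidate bug report
    return crashes

crashes = fuzz(parse_age)
print(f"{len(crashes)} of 200 inputs raised an unhandled ValueError")
```

Production fuzzers such as AFL add coverage feedback and input mutation, but the crash-harvesting loop is the same.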
Best practices for vulnerability testing include:
1. Develop a clear scope and plan: Clearly define the scope of the vulnerability testing, including the
systems, applications, and network segments that will be tested. Create a well-documented plan outlining
the testing process, tools, and methodologies to be used.
2. Conduct regular vulnerability assessments: Schedule vulnerability testing on a regular basis, as new
vulnerabilities and threats emerge constantly. Regular assessments help ensure that your organization
stays up-to-date with the latest security patches and configuration changes.
3. Use a combination of tools and techniques: Employ a combination of automated vulnerability scanners
and manual testing techniques, such as penetration testing, to achieve a comprehensive assessment.
Automated tools can quickly identify known vulnerabilities, while manual techniques can help uncover
more complex issues that may not be detected by automated scanners.
4. Prioritize vulnerabilities: Evaluate and prioritize identified vulnerabilities based on their severity,
potential impact, and ease of exploitation. Focus on addressing high-priority vulnerabilities first to
minimize the risk of a breach.
5. Patch management: Establish a robust patch management process that ensures timely application of
security patches and updates to mitigate identified vulnerabilities. This process should include monitoring
for new patches, testing them for compatibility, and deploying them across the organization.
6. Remediation and verification: Remediate identified vulnerabilities and verify that the applied fixes
have been effective in addressing the issues. This may require re-testing systems or applications to ensure
that no new vulnerabilities have been introduced.
7. Encourage cross-functional collaboration: Foster collaboration between IT, security, and other relevant
teams to ensure effective communication, coordination, and remediation efforts.
8. Educate and train staff: Raise security awareness among employees through regular training and
education programs. This helps create a security-conscious culture within the organization and reduces
the likelihood of human errors leading to security incidents.
9. Monitor and adapt: Continuously monitor the threat landscape and adapt your vulnerability testing
practices accordingly. Stay informed about emerging threats, new vulnerabilities, and best practices in
security testing.
10. Document and review: Maintain detailed documentation of vulnerability testing processes, results, and
remediation efforts. Regularly review and update these documents to ensure they remain relevant and
effective in addressing the organization’s security needs.
Vulnerability assessment
Types of vulnerability assessments include:
1. Host assessment – The assessment of critical servers, which may be vulnerable to attacks if not
adequately tested or not generated from a tested machine image.
2. Network and wireless assessment – The assessment of policies and practices to prevent unauthorized
access to private or public networks and network-accessible resources.
3. Database assessment – The assessment of databases or big data systems for vulnerabilities and
misconfigurations, identifying rogue databases or insecure dev/test environments, and classifying
sensitive data across an organization’s infrastructure.
4. Application scans – The identifying of security vulnerabilities in web applications and their source code
by automated scans on the front-end or static/dynamic analysis of source code.
The security scanning process consists of four steps: testing, analysis, assessment and remediation.
1. Vulnerability testing
The objective of this step is to draft a comprehensive list of an application’s vulnerabilities. Security analysts test
the security health of applications, servers or other systems by scanning them with automated tools, or testing
and evaluating them manually. Analysts also rely on vulnerability databases, vendor vulnerability
announcements, asset management systems and threat intelligence feeds to identify security weaknesses.
2. Vulnerability analysis
The objective of this step is to identify the source and root cause of the vulnerabilities identified in step one.
It involves the identification of system components responsible for each vulnerability, and the root cause of the
vulnerability. For example, the root cause of a vulnerability could be an old version of an open source library.
This provides a clear path for remediation – upgrading the library.
3. Risk assessment
The objective of this step is the prioritizing of vulnerabilities. It involves security analysts assigning a rank or
severity score to each vulnerability, based on such factors as:
1. Which systems are affected.
2. What data is at risk.
3. Which business functions are at risk.
4. Ease of attack or compromise.
5. Severity of an attack.
6. Potential damage as a result of the vulnerability.
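The ranking step above can be sketched as a weighted score over the analyst's factor ratings. The weights, 1-to-5 factor scores, and sample vulnerabilities below are illustrative assumptions, not a standard scoring system such as CVSS.

```python
# A minimal sketch of the risk-assessment step: rank vulnerabilities by score.
WEIGHTS = {"data_at_risk": 3, "ease_of_attack": 2, "potential_damage": 3}  # illustrative

def risk_score(vuln):
    """Weighted sum of the analyst's 1 (low) to 5 (high) factor ratings."""
    return sum(vuln[k] * w for k, w in WEIGHTS.items())

vulns = [
    {"name": "Outdated jQuery on login page",
     "data_at_risk": 4, "ease_of_attack": 4, "potential_damage": 3},
    {"name": "Verbose error pages",
     "data_at_risk": 2, "ease_of_attack": 3, "potential_damage": 1},
]
ranked = sorted(vulns, key=risk_score, reverse=True)   # highest risk first
print([v["name"] for v in ranked])
```

Remediation effort then flows to the top of the ranked list first, which is exactly the prioritization the step calls for.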
4. Remediation
The objective of this step is the closing of security gaps. It’s typically a joint effort by security staff,
development and operations teams, who determine the most effective path for remediation or mitigation of each
vulnerability.
Vulnerability assessment tools are designed to automatically scan for new and existing threats that can target
your application. Types of tools include:
1. Web application scanners that test for and simulate known attack patterns.
2. Protocol scanners that search for vulnerable protocols, ports and network services.
3. Network scanners that help visualize networks and discover warning signals like stray IP addresses,
spoofed packets and suspicious packet generation from a single IP address.
Features to compare when selecting a vulnerability assessment tool include:
Checks performed
Industry certifications
Pricing
Reporting
Creating a system security audit report involves compiling information gathered during the audit process into a
comprehensive document that highlights findings, vulnerabilities, and recommendations for improvement.
1. Executive Summary: Begin the report with an executive summary that provides an overview of the audit
process, objectives, methodologies used, and key findings. Summarize the most critical vulnerabilities and
recommendations for management's attention.
2. Introduction: Provide background information on the organization, including its industry, size, and any relevant
regulatory requirements. Explain the purpose of the security audit and the scope of the assessment.
3. Audit Methodology: Describe the methods and techniques used during the audit, such as vulnerability scanning,
penetration testing, policy review, interviews, and documentation analysis. Explain how the assessment was
conducted and any limitations encountered.
4. Findings and Observations: Present the findings of the security audit in detail. Organize the findings by
category (e.g., technical vulnerabilities, policy deficiencies, compliance issues) and provide a description of each
issue discovered.
5. Risk Assessment: Evaluate the severity and potential impact of each identified vulnerability or security gap. Use
a risk rating or scoring system to prioritize findings based on their likelihood and potential consequences.
6. Recommendations: Provide actionable recommendations for addressing each identified vulnerability or security
gap. Include specific steps that the organization should take to remediate the issues, along with suggested
timelines and responsible parties.
7. Supporting Evidence: Include evidence to support the findings and recommendations, such as screenshots, logs,
or documentation excerpts. This helps to validate the audit results and provide context for stakeholders.
8. Conclusions: Summarize the overall findings of the audit and reiterate the importance of addressing security
vulnerabilities to mitigate risks effectively. Highlight any overarching trends or patterns observed during the
assessment.
9. Appendices: Include additional supplementary information, such as detailed vulnerability scan reports, interview
transcripts, audit checklists, or regulatory compliance matrices. This allows readers to delve deeper into specific
aspects of the audit if needed.
10. Action Plan: Develop a comprehensive action plan that outlines steps for implementing the recommendations
and addressing the identified vulnerabilities. Include timelines, resource requirements, and accountability
mechanisms for each action item.
11. Executive Summary of Technical Findings: Provide a high-level summary of the technical vulnerabilities
discovered during the audit, along with their potential impact on the organization's security posture.
12. Management Response: Include a section for management to respond to the audit findings and
recommendations. Management should acknowledge the findings, indicate their commitment to addressing them,
and outline any additional steps they plan to take.
13. Distribution and Review: Distribute the audit report to relevant stakeholders, including senior management, IT
personnel, and any regulatory authorities if required. Schedule a review meeting to discuss the findings and
action plan with key stakeholders.
14. Follow-Up: Monitor the implementation of the action plan and track progress towards addressing the identified
vulnerabilities. Conduct follow-up assessments as needed to ensure that security improvements are effectively
implemented.
Elements of a vulnerability assessment report and what each describes:
Executive summary – Date range of the assessment; purpose and scope of the assessment; general status of the
assessment and summary of your findings regarding risk to the client; disclaimer.
Scan results – Explanation of the scan results, such as how you’ve categorized and ordered vulnerabilities;
overview of the types of reports provided.
Methodology – Tools and tests you used for vulnerability scanning, such as penetration testing or cloud-based
scans; specific purpose of each scan, tool, and test; testing environments for each tool used in the assessment.
Findings – Which systems identified by the client you successfully scanned and which you did not; whether any
systems were not scanned and, if so, the reasons why.
Risk assessment – Index of all vulnerabilities identified, categorized as critical, high, medium, or low severity;
explanation of these risk categories; list of all vulnerabilities with details on the plugin name, description,
solution, and count information.
Recommendations – Full list of actions the client should take; recommendations of other security tools the client
can use to assess the network’s security posture; security policy and configuration recommendations.
Topic content:
Monitoring tools
Monitoring metrics
Security policies
Security tools
Security audits
Security training
One of the first steps to monitor your computer systems is to choose the right tools for your needs. There are
many tools available for different purposes, such as system performance, network traffic, log analysis, security
alerts, and more.
Some of the popular tools are Nagios, Zabbix, Splunk, Snort, and OSSEC. These tools can help you collect,
analyze, and visualize data from your systems, and notify you of any issues or anomalies. You should also
configure your tools to generate regular reports and backups, and to integrate with other tools or platforms.
System performance monitoring tools
Sguil for monitoring post and real-time events
Sguil is a security monitoring tool based on Tcl/Tk that, like Squert, collects events from multiple data sources
and sensors into a central repository. Sguil’s main user interface characterizes traffic and lets you see packet
information; from there you can cross-reference, categorize, and drill into events by pivoting your views.
Intrusion detection or intrusion prevention; mandated by PCI Requirement 11.4: “Use intrusion-detection and/or
intrusion-prevention techniques to detect and/or prevent intrusions into the network. Monitor all traffic at the
perimeter of the cardholder data environment as well as at critical points in the cardholder data environment, and
alert personnel to suspected compromises.”
File integrity monitoring; mandated by PCI Requirement 11.5: “Deploy a change-detection mechanism (for
example, file-integrity monitoring tools) to alert personnel to unauthorized modification of critical system
files, configuration files, or content files; and configure the software to perform critical file comparisons at least
weekly.”
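The change-detection mechanism that PCI Requirement 11.5 calls for can be sketched as a hash-based baseline compare. This minimal illustration monitors a temporary file standing in for a critical system file; real FIM tools also watch permissions, schedule comparisons, and alert personnel.

```python
# A minimal sketch of file-integrity monitoring: baseline hashes, then compare.
import hashlib, os, pathlib, tempfile

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def baseline(paths):
    """Record the current hash of every monitored file."""
    return {p: sha256(p) for p in paths}

def changed(paths, base):
    """Files whose current hash no longer matches the stored baseline."""
    return [p for p in paths if sha256(p) != base[p]]

# Demonstration with a temporary file standing in for a critical config file.
d = tempfile.mkdtemp()
f = os.path.join(d, "config.cfg")
pathlib.Path(f).write_text("PermitRootLogin no\n")
base = baseline([f])
pathlib.Path(f).write_text("PermitRootLogin yes\n")   # unauthorized modification
print(changed([f], base))                              # lists the modified file
```

Scheduling `changed()` to run at least weekly against the stored baseline matches the comparison cadence the requirement specifies.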
Network traffic monitoring tools
Log analysis tools
Security alert tools
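Log analysis, one of the monitoring categories above, often starts with counting suspicious events per source. A minimal sketch follows; the auth-log-style sample lines use documentation IP addresses and are invented for illustration.

```python
# A minimal sketch of log analysis: count failed SSH logins per source IP.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_logins(lines):
    """Tally failed-login lines by the source IP address they report."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

log = [
    "sshd[811]: Failed password for root from 203.0.113.7 port 40122 ssh2",
    "sshd[811]: Failed password for admin from 203.0.113.7 port 40124 ssh2",
    "sshd[812]: Accepted password for martha from 192.0.2.10 port 51001 ssh2",
]
counts = failed_logins(log)
print(counts.most_common(1))
```

Tools like Splunk and OSSEC generalize this pattern-match-and-aggregate loop across many log sources and alert when a count crosses a threshold.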
Monitoring metrics
Another important step in monitoring your computer systems is to define the metrics that you want to track and
measure. Metrics are quantitative indicators that reflect the status or performance of your systems, such as CPU
usage, memory usage, disk space, network latency, response time, and error rate. Set thresholds or baselines for
your metrics so that you can compare them with the actual values and detect deviations or problems. Finally,
prioritize your metrics according to their impact and relevance, and focus on the ones that are most critical for
your systems.
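Threshold-based checking as described above reduces to comparing each reading against its configured limit. The threshold values and sample readings below are illustrative assumptions, not recommended operational limits.

```python
# A minimal sketch of threshold-based metric monitoring.
THRESHOLDS = {"cpu_pct": 85.0, "disk_pct": 90.0, "error_rate": 0.05}  # illustrative

def breaches(readings):
    """Return the metrics whose current reading exceeds its configured threshold."""
    return {m: v for m, v in readings.items()
            if v > THRESHOLDS.get(m, float("inf"))}   # unknown metrics never breach

print(breaches({"cpu_pct": 91.5, "disk_pct": 40.0, "error_rate": 0.01}))
```

Monitoring platforms such as Nagios and Zabbix wrap this comparison in scheduling, history, and alert delivery, but the breach test itself is this simple.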
3. Security policies
To secure your computer systems, you need to establish and enforce security policies that define the rules and
standards for your systems, such as who can access them, what they can do, how they should be configured, and
how they should be updated. Security policies can help you prevent unauthorized or malicious access, maintain
compliance with regulations or best practices, and reduce the risk of data breaches or system failures. You
should also document your security policies and communicate them to your users and stakeholders, and review
them regularly to ensure they are up to date and effective.
4. Security tools
In addition to security policies, you also need to use security tools that can help you protect your computer
systems from various threats, such as malware, hackers, or denial-of-service attacks. Some of the common
security tools are antivirus software, firewalls, encryption software, VPNs, and password managers. These tools
can help you scan, block, encrypt, or authenticate your systems, and alert you of any suspicious or malicious
activity. You should also update your security tools regularly to keep them current and effective.
5. Security audits
Another way to secure your computer systems is to conduct security audits that can help you assess the current
state of your systems, identify any vulnerabilities or weaknesses, and recommend any improvements or fixes.
Security audits can be performed by internal or external experts, using various methods, such as penetration
testing, vulnerability scanning, code review, or checklist review. Security audits can help you validate your
security policies and tools, comply with regulations or standards, and improve your security posture and
awareness.
6. Security training
The final, and by no means least important, step to secure your computer systems is to provide security training
to your users and staff, who are often the weakest link in your security chain. Security training helps you educate
your users and staff about the importance of security, the common threats and risks, and the best practices and
behaviors to follow.
Security training can also help you foster a security culture and mindset, and increase the trust and confidence of
your users and stakeholders. You should also update your security training regularly to keep it relevant and
engaging.
The evaluation of information systems security is a process in which the evidence for assurance is identified,
gathered, and analysed against criteria for security functionality and assurance level. The result is a measure
of trust that indicates how well the system meets a particular security target.
Evaluation criteria provide a standard for quantifying the security of a computer system or network. These
criteria include the
Trusted Computer System Evaluation Criteria (TCSEC),
Trusted Network Interpretation (TNI),
European Information Technology Security Evaluation Criteria (ITSEC),
and the Common Criteria.
The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, is part of
the Rainbow Series developed for the U.S. DoD by the National Computer Security Center (NCSC). It’s the
formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the
following objectives:
Measurement: Provides a metric for assessing comparative levels of trust between different computer systems.
Guidance: Identifies standard security requirements that vendors must build into systems to achieve a given
trust level.
Acquisition: Provides customers a standard for specifying acquisition requirements and identifying systems that
meet those requirements.
The four basic control requirements identified in the Orange Book are
Security policy: The rules and procedures by which a trusted system operates. Specific TCSEC requirements
include
o Discretionary access control (DAC): Owners of objects are able to assign permissions to other subjects.
o Mandatory access control (MAC): Permissions to objects are managed centrally by an administrator.
o Object reuse: Protects confidentiality of objects that are reassigned after initial use. For example, a deleted file
still exists on storage media; only the file allocation table (FAT) and first character of the file have been
modified. Thus residual data may be restored, which describes the problem of data remanence. Object-reuse
requirements define procedures for actually erasing the data.
o Labels: Sensitivity labels are required in MAC-based systems. Specific TCSEC labeling requirements include
integrity, export, and subject/object labels.
Assurance: Guarantees that a security policy is correctly implemented. Specific TCSEC requirements (listed
here) are classified as operational assurance and life-cycle assurance requirements:
o System architecture: TCSEC requires features and principles of system design that implement specific security
features.
o System integrity: Hardware and firmware operate properly and are tested to verify proper operation.
o Covert channel analysis: TCSEC requires covert channel analysis that detects unintended communication paths
not protected by a system’s normal security mechanisms. A covert storage channel conveys information by
altering stored system data. A covert timing channel conveys information by altering a system resource’s
performance or timing.
o Trusted facility management: The assignment of a specific individual to administer the security-related
functions of a system. Closely related to the concepts of least privilege, separation of duties, and need-to-know.
o Trusted recovery: Ensures that security isn’t compromised in the event of a system crash or failure. This
process involves two primary activities: failure preparation and system recovery.
o Security testing: Specifies required testing by the developer and the National Computer Security Center
(NCSC).
o Design specification and verification: Requires a mathematical and automated proof that the design description
is consistent with the security policy.
o Configuration management: Identifying, controlling, accounting for, and auditing all changes made to the
Trusted Computing Base (TCB) during the design, development, and maintenance phases of a system’s lifecycle.
Accountability: The ability to associate users and processes with their actions. Specific TCSEC requirements
include
o Identification and authentication (I&A): Systems need to track who performs what activities.
o Trusted Path: A direct communications path between the user and the Trusted Computing Base (TCB) that
doesn’t require interaction with untrusted applications or operating-system layers.
o Audit: Recording, examining, analyzing, and reviewing security-related activities in a trusted system.
Documentation: Specific TCSEC requirements include
o Security Features User’s Guide (SFUG): User’s manual for the system.
o Trusted Facility Manual (TFM): System administrator’s and/or security administrator’s manual.
o Test documentation: According to the TCSEC manual, this documentation must “show how the security
mechanisms were tested, and results of the security mechanisms’ functional testing.”
o Design documentation: Defines system boundaries and internal components, such as the Trusted Computing
Base (TCB).
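The object-reuse requirement described above can be illustrated with a simple overwrite-before-delete sketch. The file name is an assumption for the example, and note that on SSDs and journaling file systems an in-place overwrite does not guarantee physical erasure; real object-reuse controls go further.

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's bytes with zeros before unlinking it, so that
    simple undelete tools cannot recover the original contents."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)

with open("secret.txt", "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete("secret.txt")
print(os.path.exists("secret.txt"))  # False
```
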
The Orange Book defines four major hierarchical classes of security protection and numbered subclasses (higher
numbers indicate higher security):
D: Minimal protection
C: Discretionary protection (C1 and C2)
B: Mandatory protection (B1, B2, and B3)
A: Verified protection (A1)
These classes are further defined in the following table.
TCSEC Classes
Class Name
D Minimal Protection
C1 Discretionary Security Protection
C2 Controlled Access Protection
B1 Labeled Security Protection
B2 Structured Protection
B3 Security Domains
A1 Verified Design
Part of the Rainbow Series, like TCSEC (discussed in the preceding section), Trusted Network Interpretation
(TNI) addresses confidentiality and integrity in trusted computer/communications network systems. Within the
Rainbow Series, it’s known as the Red Book.
Part I of the TNI is a guideline for extending the system protection standards defined in the TCSEC (the Orange
Book) to networks. Part II of the TNI describes additional security features such as communications integrity,
protection from denial of service, and transmission security.
Unlike TCSEC, the European Information Technology Security Evaluation Criteria (ITSEC) addresses
confidentiality, integrity, and availability, as well as evaluating an entire system, defined as a Target of
Evaluation (TOE), rather than a single computing platform.
ITSEC evaluates functionality (security objectives, or why; security-enforcing functions, or what; and security
mechanisms, or how) and assurance (effectiveness and correctness) separately. The ten functionality (F) classes
and seven evaluation (E) (assurance) levels are listed in the following table.
ITSEC Functionality (F) Classes and Evaluation (E) Levels mapped to TCSEC levels
F Class E Level TCSEC Equivalent
—       E0      D
F-C1    E1      C1
F-C2    E2      C2
F-B1    E3      B1
F-B2    E4      B2
F-B3    E5      B3
F-B3    E6      A1
(The remaining functionality classes F-IN, F-AV, F-DI, F-DC, and F-DX have no direct TCSEC equivalent.)
Common Criteria
The Common Criteria for Information Technology Security Evaluation (usually just called Common Criteria) is
an international effort to standardize and improve existing European and North American evaluation criteria. The
Common Criteria has been adopted as an international standard in ISO 15408. The Common Criteria defines
eight evaluation assurance levels (EALs), which are listed in the following table.
The Common Criteria
Level TCSEC Equivalent ITSEC Equivalent Description
EAL0  N/A              N/A              Inadequate assurance
EAL1  N/A              N/A              Functionally tested
EAL2  C1               E1               Structurally tested
EAL3  C2               E2               Methodically tested and checked
EAL4  B1               E3               Methodically designed, tested, and reviewed
EAL5  B2               E4               Semi-formally designed and tested
EAL6  B3               E5               Semi-formally verified design and tested
EAL7  A1               E6               Formally verified design and tested
Monitoring criteria
Monitoring criteria refer to the specific metrics, parameters, or indicators used to assess and measure the
performance, health, and security of computer systems and networks. These criteria help organizations
effectively monitor and evaluate their system security posture, detect potential security incidents, and ensure
compliance with security policies and standards. Here are some common monitoring criteria for computer
system security:
1. Availability: Measures the accessibility and uptime of critical systems and services. Availability monitoring
criteria include:
System uptime percentage
Downtime duration
Mean time to repair (MTTR)
Service-level agreement (SLA) compliance
2. Integrity: Ensures the accuracy, consistency, and reliability of data and resources. Integrity monitoring criteria
include:
File integrity checks (e.g., checksum verification)
Database integrity checks
Configuration file integrity
Digital signatures verification
3. Confidentiality: Protects sensitive information from unauthorized access, disclosure, or theft. Confidentiality
monitoring criteria include:
Access control logs
User authentication attempts
Encryption status (e.g., SSL/TLS usage)
Data leakage prevention (DLP) alerts
4. Authentication and Authorization: Verifies the identity of users and controls their access to resources.
Monitoring criteria for authentication and authorization include:
Successful and failed login attempts
User account lockouts
Privileged access usage
Role-based access control (RBAC) violations
5. Security Events: Monitors for security-related events and anomalies that may indicate potential security
incidents. Security event monitoring criteria include:
Intrusion detection system (IDS) alerts
Firewall logs
Antivirus/anti-malware detections
Anomalous network traffic patterns
6. Compliance: Ensures adherence to security policies, regulations, and industry standards. Compliance monitoring
criteria include:
Regulatory compliance status (e.g., GDPR, HIPAA, PCI DSS)
Security policy violations
Audit trail reviews
Vulnerability assessment and remediation status
7. Performance: Evaluates the performance impact of security controls and measures on system resources.
Performance monitoring criteria include:
CPU and memory utilization
Network bandwidth usage
Disk I/O rates
Application response times
8. Incident Response: Tracks the effectiveness and efficiency of incident detection, response, and resolution
processes. Incident response monitoring criteria include:
Incident response times
Mean time to detect (MTTD)
Mean time to respond (MTTR)
Incident closure rates
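The integrity criterion above (file integrity via checksum verification) can be sketched with Python's standard hashlib. The file name and its contents are illustrative assumptions; a real integrity monitor would store baselines for many files and check them on a schedule.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute a SHA-256 checksum of a file, reading in chunks so that
    large files do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a baseline, then re-check later: a changed digest means the
# file was modified since the baseline was taken.
with open("config.cfg", "wb") as f:
    f.write(b"max_logins=3\n")
baseline = file_sha256("config.cfg")
print(file_sha256("config.cfg") == baseline)  # True: unmodified
```
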
Preparing a monitoring report
Preparing a report that communicates monitoring results to stakeholders involves the following steps:
1. Define Report Objectives: Clarify the purpose and objectives of the monitoring report. Determine the key
metrics, parameters, and indicators to be included in the report based on organizational requirements, stakeholder
needs, and monitoring goals.
2. Select Data Sources: Identify the sources of data to be included in the monitoring report. This may include logs,
event records, performance metrics, security alerts, and other relevant data collected from monitoring tools,
systems, and devices.
3. Gather Data: Collect data from the selected sources using monitoring tools, scripts, APIs, and manual
observations. Ensure that data collection methods are accurate, reliable, and representative of the monitored
environment.
4. Data Aggregation and Consolidation: Aggregate and consolidate collected data to create a unified dataset for
analysis. Normalize data formats, timestamps, and units of measurement to facilitate comparisons and trend
analysis across different sources.
5. Data Analysis: Analyze the collected data to identify trends, patterns, anomalies, and insights related to system
performance, health, and security. Use statistical analysis, visualization techniques, and trend analysis to
interpret the data effectively.
6. Report Structure:
Title Page: Include a title that clearly identifies the report as a monitoring report. Add the organization's
name, date of the report, and the names of individuals involved in the monitoring process.
Executive Summary: Provide a brief overview of key findings, trends, and observations highlighted in
the report. Summarize the main insights and implications for stakeholders.
Introduction: Introduce the scope, objectives, and methodology of the monitoring report. Describe the
systems, networks, and applications monitored and the period covered by the report.
Key Metrics and Performance Indicators: Present key performance metrics and indicators relevant to the
monitoring objectives. Include graphs, charts, and tables to visualize trends and comparisons over time.
Health Status: Assess the overall health status of monitored systems, networks, and applications based on
performance metrics, availability, uptime, and incident reports.
Security Analysis: Analyze security-related data, such as security events, alerts, vulnerabilities, and
compliance status. Highlight significant security incidents, breaches, and potential risks identified during
the monitoring period.
Recommendations: Provide actionable recommendations for improving system performance, health, and
security based on the findings of the monitoring report. Prioritize recommendations based on severity,
impact, and feasibility of implementation.
Conclusion: Summarize the main findings, insights, and recommendations of the monitoring report.
Emphasize the importance of continuous monitoring and proactive management to maintain a robust and
secure IT environment.
Appendices: Include additional information, supporting data, methodology details, glossary of terms, and
references as needed.
7. Review and Validation: Review the monitoring report for accuracy, completeness, and clarity. Validate findings
and conclusions with subject matter experts, stakeholders, and relevant teams to ensure the integrity of the
report.
8. Presentation and Distribution: Present the monitoring report to key stakeholders, including IT management,
security teams, and business leaders. Distribute the report electronically or in print format and facilitate
discussions to address questions, concerns, and action items identified in the report.
9. Follow-Up and Action Planning: Follow up on the recommendations and action items outlined in the monitoring
report. Develop an action plan with timelines, responsibilities, and milestones for implementing recommended
changes and improvements.
10. Continuous Monitoring and Reporting: Establish a process for ongoing monitoring and reporting to track
progress, measure outcomes, and adapt to changing conditions. Regularly update and iterate the monitoring
report to reflect new insights, trends, and developments over time.
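Two of the metrics such a report typically presents, availability percentage and mean time to repair (MTTR), can be computed from a hypothetical outage list as follows; the 30-day period and the outage durations are invented for the example.

```python
def availability_percent(total_minutes: float, downtime_minutes: float) -> float:
    """Percentage of the reporting period during which the service was up."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def mean_time_to_repair(repair_times: list[float]) -> float:
    """Average duration of the recorded outages (MTTR)."""
    return sum(repair_times) / len(repair_times)

# A 30-day month is 43,200 minutes; three outages of 10, 20, and 30 minutes.
outages = [10.0, 20.0, 30.0]
print(round(availability_percent(43200, sum(outages)), 3))  # 99.861
print(mean_time_to_repair(outages))                         # 20.0
```

Comparing these figures against the SLA targets mentioned earlier turns raw logs into the compliance statements a report needs.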