Unit 7-Network Security
Network security
A network vulnerability is an inherent weakness in the design, implementation, or use of a
hardware component or a software routine. A vulnerability invites attacks and makes the
network susceptible to threats.
A threat is anything that can disrupt the operation of the network. A threat can even be
accidental or an act of nature, but threats are mostly intentional. A threat can damage the
network, slow it down, or make it unavailable. Any type of rogue software represents a
threat.
An attack is a specific approach employed to exploit a known vulnerability. A passive attack
is designed to monitor and record network activity in an attempt to collect information to be
used later in an active attack. Examples of passive attacks are packet sniffing and traffic
analysis. Passive attacks are difficult to detect.
An active attack tries to damage a network or its operation. Such attacks are easier to detect,
but are also more damaging.
Digital Certificates
Digital certificates provide a means of proving your identity in electronic transactions, much
like a driver's license or a passport does in face-to-face interactions. The difference is that
a digital certificate is used in conjunction with a public-key encryption system. Digital
certificates are electronic files that work, in effect, as an online passport. Digital certificates are
issued by a third party known as a Certification Authority such as VeriSign or Thawte. These
third party certificate authorities have the responsibility to confirm the identity of the
certificate holder as well as provide assurance to the website visitors that the website is one
that is trustworthy and capable of serving them in a trustworthy manner.
Digital certificates have two basic functions. The first is to certify that people, websites,
and network resources such as servers and routers are who or what they claim to be. The
second is to protect the data exchanged between the visitor and the website, such as credit
card information, from tampering and theft.
Two parties are involved in the use of certificates: one party uses a certificate to identify
itself, and the other party must validate it. This process is referred to as a handshake. The
protocol used is Secure Sockets Layer/Transport Layer Security (SSL/TLS). For the
handshake to work, both parties must store the certificates in their own certificate
store. The certificate store is also referred to as a keystore or a key database.
But where do you obtain the certificates to put in the certificate store for the SSL/TLS protocol?
A certificate authority (CA) issues certificates. There are well-known CAs that sell
certificates. There are internal CAs that issue certificates for their own enterprise. All
certificates are created using common standards. That is, no matter which CA sells you
the certificate and on what platform the certificate is created, it can be used by any
application on any platform. Choosing a CA is therefore independent of the platform on
which the certificate will be used.
X.509 Certificates
ITU-T recommendation X.509 is part of the X.500 series of recommendations that define a
directory service. The directory is, in effect, a server or distributed set of servers that
maintains a database of information about users. The information includes a mapping from
user name to network address, as well as other attributes and information about the users.
X.509 defines a framework for the provision of authentication services by the X.500
directory to its users. The directory may serve as a repository of public-key certificates. Each
certificate contains the public key of a user and is signed with the private key of a trusted
certification authority. In addition, X.509 defines alternative authentication protocols based
on the use of public-key certificates.
X.509 is an important standard because the certificate structure and authentication
protocols defined in X.509 are used in a variety of contexts. For example, the X.509 certificate
format is used in S/MIME, IP Security, and SSL/TLS.
X.509 was initially issued in 1988. The standard was subsequently revised to address some
security concerns; a revised recommendation was issued in 1993. A third version was
issued in 1995 and revised in 2000.
X.509 is based on the use of public-key cryptography and digital signatures. The standard
does not dictate the use of a specific algorithm but recommends RSA. The digital signature
scheme is assumed to require the use of a hash function. Again, the standard does not dictate
a specific hash algorithm. The 1988 recommendation included the description of a
recommended hash algorithm; this algorithm has since been shown to be insecure and was
dropped from the 1993 recommendation.
Certificates
The heart of the X.509 scheme is the public-key certificate associated with each user. These
user certificates are assumed to be created by some trusted certification authority (CA) and
placed in the directory by the CA or by the user. The directory server itself is not responsible
for the creation of public keys or for the certification function; it merely provides an easily
accessible location for users to obtain certificates.
The following figure shows the general format of a certificate, which includes the following
elements.
• Version: Differentiates among successive versions of the certificate format; the default is
version 1. If the issuer unique identifier or subject unique identifier are present, the value
must be version 2. If one or more extensions are present, the version must be version 3.
• Serial number: An integer value unique within the issuing CA that is unambiguously
associated with this certificate.
• Signature algorithm identifier: The algorithm used to sign the certificate together with any
associated parameters. Because this information is repeated in the signature field at the end
of the certificate, this field has little, if any, utility.
• Issuer name: X.500 name of the CA that created and signed this certificate.
• Period of validity: Consists of two dates: the first and last on which the certificate is valid.
• Subject name: The name of the user to whom this certificate refers. That is, this certificate
certifies the public key of the subject who holds the corresponding private key.
• Subject’s public-key information: The public key of the subject, plus an identifier of the
algorithm for which this key is to be used, together with any associated parameters.
• Issuer unique identifier: An optional bit-string field used to identify uniquely the issuing CA
in the event the X.500 name has been reused for different entities.
• Subject unique identifier: An optional bit-string field used to identify uniquely the subject in
the event the X.500 name has been reused for different entities.
• Extensions: A set of one or more extension fields. Extensions were added in version 3 and
are discussed later in this section.
• Signature: Covers all of the other fields of the certificate; it contains the hash code of the
other fields encrypted with the CA’s private key. This field includes the signature algorithm
identifier.
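As a rough illustration, the certificate format above can be modeled as a plain data structure. The class and field names below are illustrative choices, not taken from any real X.509 library:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class X509Certificate:
    """Illustrative container mirroring the fields listed above."""
    version: int                          # 1, 2, or 3
    serial_number: int                    # unique within the issuing CA
    signature_algorithm: str              # e.g. "sha256WithRSAEncryption"
    issuer_name: str                      # X.500 name of the issuing CA
    not_before: datetime                  # period of validity: first day
    not_after: datetime                   # period of validity: last day
    subject_name: str                     # X.500 name of the holder
    subject_public_key: bytes             # plus its algorithm identifier
    issuer_unique_id: Optional[bytes] = None    # version 2 and later only
    subject_unique_id: Optional[bytes] = None   # version 2 and later only
    extensions: dict = field(default_factory=dict)   # version 3 only
    signature: bytes = b""                # CA's signature over all other fields

# Hypothetical example values for a version 3 certificate.
cert = X509Certificate(
    version=3,
    serial_number=0x1A2B,
    signature_algorithm="sha256WithRSAEncryption",
    issuer_name="CN=Example Root CA",
    not_before=datetime(2024, 1, 1),
    not_after=datetime(2025, 1, 1),
    subject_name="CN=www.example.com",
    subject_public_key=b"<public key bytes>",
)
```

In a real certificate these fields are DER-encoded ASN.1 structures; the sketch only shows how the pieces fit together.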
Expiration or revocation: The digital certificate expires at the end of the requested
usage period, or it might be revoked by an administrator for any valid reason, such as
the keys being compromised.
Digital certificates have a lifetime during which they are considered valid. When this lifetime
expires, the certificate can no longer be used for authentication and must be updated to
restore its validity. A certificate can also become invalid from being revoked by a CA.
Common reasons for which a CA might revoke a digital certificate include a change in job
status or suspicion of a compromised private key.
The CA typically provides a means to revoke digital certificates (certificate revocation). This
process depends on the mechanism that the issuing CA makes available for communicating
revocation. Typically, a client that utilizes a PKI can check with the
CA to update its list of revoked digital certificates so that it can fail the authentication of any
revoked identities. This process can be manual or automated, depending on how the PKI is
able to respond to each CA's revocation process. At a minimum, manually removing a revoked
identity from a server key store or a revoked root CA certificate from a client certificate store
is sufficient to handle the revocation once it is known.
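The revocation check itself can be sketched in a few lines, modeling the CA's revocation list as a bare set of serial numbers. This is a deliberate simplification: a real CRL is a signed, timestamped ASN.1 structure whose signature and freshness must also be checked, and the serial numbers below are hypothetical.

```python
# Hypothetical revoked-serial list published by a CA.
revoked_serials = {0x1A2B, 0x3C4D}

def is_certificate_acceptable(serial: int, revoked: set) -> bool:
    """Fail the authentication of any revoked identity, as described above."""
    return serial not in revoked

ok = is_certificate_acceptable(0x9999, revoked_serials)        # not on the list
rejected = is_certificate_acceptable(0x1A2B, revoked_serials)  # revoked serial
```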
In principle, an attacker could change the public key in the message and act as a
man-in-the-middle; in practice, however, this mechanism is reasonably secure.
If Alice signs a certificate vouching for Bob's name and key, then Alice is the issuer and Bob
is the subject. If Alice wants to find a path to Bob's key, then Bob's name is the target. If Alice
is evaluating a chain of certificates, she is the verifier, sometimes called the relying party.
Anything that has a public key is known as a principal. A trust anchor is a public key that the
verifier has decided through some means is trusted to sign certificates. In a verifiable chain
of certificates, the first certificate will have been signed by a trust anchor.
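The verifier's task can be sketched as walking the chain from the target certificate back to a trust anchor. The example below reduces each certificate to a (subject, issuer) pair with made-up names, and omits the per-step signature checks a real verifier must perform:

```python
# Each "certificate" is reduced to a (subject, issuer) pair; a real
# verifier would also check the issuer's signature at every step.
chain = [
    ("bob@example.com", "Intermediate CA"),   # target certificate
    ("Intermediate CA", "Root CA"),
]
trust_anchors = {"Root CA"}   # keys the verifier has decided to trust

def verify_chain(chain, anchors):
    """True if each certificate's issuer is the next certificate's subject
    and the last certificate in the chain is signed by a trust anchor."""
    for (_, issuer), (next_subject, _) in zip(chain, chain[1:]):
        if issuer != next_subject:
            return False
    return chain[-1][1] in anchors

valid = verify_chain(chain, trust_anchors)       # chain ends at "Root CA"
invalid = verify_chain(chain, {"Other Root"})    # no path to a trust anchor
```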
making it difficult to verify certificates and increasing the burden on users. Such a model is
therefore best suited to small PKIs built from a small number of organizations of coequal
status.
b. Bridge CA
The “Bridge PKI” model is designed to support PKI applications across enterprises and to
avoid the situation where a user has to maintain information about a large number of trust
points, or an organization needs to establish cross-links to a large number of other
organizations. Using bridge CAs (BCAs) in place of bilateral arrangements between separate
PKIs decreases the total number of cross-certificates required to join the PKIs. The BCA does
not become a trust anchor for any of the PKIs, as it is not directly trusted by any of the PKI
entities; rather, trust is referenced from internal PKI trust anchors.
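The saving can be illustrated numerically: fully meshing n separate PKIs bilaterally requires n×(n-1) cross-certificates (one in each direction per pair), while a bridge CA requires only 2n (each PKI cross-certifies with the bridge, in both directions). A quick sketch:

```python
def bilateral_cross_certs(n: int) -> int:
    # every pair of PKIs cross-certifies, one certificate in each direction
    return n * (n - 1)

def bridge_cross_certs(n: int) -> int:
    # each PKI cross-certifies only with the bridge CA, in both directions
    return 2 * n

# With 10 PKIs: 90 bilateral cross-certificates versus 20 via a bridge CA.
savings = bilateral_cross_certs(10) - bridge_cross_certs(10)
```

The gap widens quadratically as the number of participating PKIs grows, which is the argument for the bridge model.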
c. Hierarchical trust model
The hierarchical trust model resembles an inverted tree. The root is the starting point of
trust: a root CA that all parties trust. The branch nodes below the root are CAs, and the leaf
nodes are end users. The root CA issues certificates to its immediate descendant CAs; each
intermediate CA, in turn, issues certificates to its own descendant CAs or to end users, but
an end user that has been issued a certificate cannot itself act as a CA. All nodes in the model
must trust the root CA and keep a copy of the root CA's public-key certificate. For any two
users to validate each other's public-key certificates, the validation path must lead back
through the chain of issuing CAs to the root CA.
PKIX
The Internet Engineering Task Force (IETF) Public Key Infrastructure X.509 (PKIX) working
group has been the driving force behind setting up a formal (and generic) model based on
X.509 that is suitable for deploying a certificate-based architecture on the Internet.
The figure alongside shows the interrelationship among the key elements of the PKIX model.
These elements are:
• End entity: A generic term used to denote end users, devices (e.g., servers, routers), or any
other entity that can be identified in the subject field of a public key certificate. End entities
typically consume and/or support PKI-related services.
• Certification authority (CA): The issuer of certificates and (usually) certificate revocation
lists (CRLs). It may also support a variety of administrative functions, although these are
often delegated to one or more Registration Authorities.
• Registration authority (RA): An optional component that can assume a number of
administrative functions from the CA. The RA is often associated with the end entity
registration process but can assist in a number of other areas as well.
• CRL issuer: An optional component that a CA can delegate to publish CRLs.
• Repository: A generic term used to denote any method for storing certificates and CRLs so
that they can be retrieved by end entities.
Email Security
Email security refers to the collective measures used to secure the access and content of an
email account or service. It allows an individual or organization to protect overall access to
one or more email accounts or services.
Secure Sockets Layer (SSL)
SSL is designed to make use of TCP to provide a reliable end-to-end secure service to higher-
layer protocols. In particular, the Hypertext Transfer Protocol (HTTP), which provides the
transfer service for Web client/server interaction, can operate on top of SSL. Three higher-
layer protocols are defined as part of SSL: the Handshake Protocol, the Change Cipher Spec
Protocol, and the Alert Protocol. These SSL-specific protocols are used in the management of
SSL exchanges.
Two important SSL concepts are the SSL session and the SSL connection, which are defined
in the specification as follows.
• Connection: A connection is a transport (in the OSI layering model definition) that provides
a suitable type of service. For SSL, such connections are peer-to-peer relationships. The
connections are transient. Every connection is associated with one session.
• Session: An SSL session is an association between a client and a server. Sessions are created
by the Handshake Protocol. Sessions define a set of cryptographic security parameters which
can be shared among multiple connections. Sessions are used to avoid the expensive
negotiation of new security parameters for each connection.
The figure alongside indicates the overall operation of the SSL Record Protocol. The Record
Protocol takes an application message to be transmitted, fragments the data into manageable
blocks, optionally compresses the data, applies a MAC, encrypts, adds a header, and
transmits the resulting unit in a TCP segment. Received data are decrypted, verified,
decompressed, and reassembled before being delivered to higher-level users.
The first step is fragmentation. Each upper-layer message is fragmented into blocks of 2^14
bytes (16,384 bytes) or less. Next, compression is optionally applied. Compression must be
lossless and may not increase the content length by more than 1024 bytes. In SSLv3 (as well
as the current version of TLS), no compression algorithm is specified, so the default
compression algorithm is null. The next step is to compute a message authentication code
(MAC) over the compressed data, using a shared secret key.
Next, the compressed message plus the MAC are encrypted using symmetric encryption.
Encryption may not increase the content length by more than 1024 bytes, so that the total
length may not exceed 2^14 + 2048 bytes.
The final step of SSL Record Protocol processing is to prepare a header consisting of the
following fields:
• Content Type (8 bits): The higher-layer protocol used to process the enclosed fragment.
• Major Version (8 bits): Indicates major version of SSL in use. For SSLv3, the value is 3.
• Minor Version (8 bits): Indicates minor version in use. For SSLv3, the value is 0.
• Compressed Length (16 bits): The length in bytes of the plaintext fragment (or compressed
fragment if compression is used). The maximum value is 2^14 + 2048.
The content types that have been defined are change_cipher_spec, alert, handshake, and
application_data. The first three are the SSL-specific protocols, discussed next. Note that no
distinction is made among the various applications (e.g., HTTP) that might use SSL; the
content of the data created by such applications is opaque to SSL.
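The Record Protocol framing described above can be sketched in simplified form, showing only fragmentation into 2^14-byte blocks and the 5-byte header (content type, major/minor version, length); the compression, MAC, and encryption steps are omitted:

```python
import struct

CONTENT_APPLICATION_DATA = 23   # standard SSL/TLS content-type code
MAJOR, MINOR = 3, 0             # SSLv3 version numbers
MAX_FRAGMENT = 2 ** 14          # 16,384 bytes

def make_records(message: bytes) -> list:
    """Fragment a message into blocks of at most 2^14 bytes and prepend
    the 5-byte record header. MAC and encryption steps are omitted."""
    records = []
    for i in range(0, len(message), MAX_FRAGMENT):
        fragment = message[i:i + MAX_FRAGMENT]
        # Header layout: content type (8 bits), major version (8 bits),
        # minor version (8 bits), fragment length (16 bits).
        header = struct.pack("!BBBH", CONTENT_APPLICATION_DATA,
                             MAJOR, MINOR, len(fragment))
        records.append(header + fragment)
    return records

records = make_records(b"x" * 20000)   # 20,000 bytes needs two fragments
```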
Handshake Protocol
IP Security
In 1994, the Internet Architecture Board (IAB) issued a report titled “Security in the Internet
Architecture” (RFC 1636). The report identified key areas for security mechanisms. Among
these were the need to secure the network infrastructure from unauthorized monitoring and
control of network traffic and the need to secure end-user-to-end-user traffic using
authentication and encryption mechanisms.
To provide security, the IAB included authentication and encryption as necessary security
features in the next-generation IP, which has been issued as IPv6. Fortunately, these security
capabilities were designed to be usable both with the current IPv4 and the future IPv6. This
means that vendors can begin offering these features now, and many vendors now do have
some IPsec capability in their products. The IPsec specification now exists as a set of Internet
standards.
Applications of IPsec
IPsec provides the capability to secure communications across a LAN, across private and
public WANs, and across the Internet. Examples of its use include:
• Secure branch office connectivity over the Internet: A company can build a secure virtual
private network over the Internet or over a public WAN. This enables a business to rely
heavily on the Internet and reduce its need for private networks, saving costs and network
management overhead.
• Secure remote access over the Internet: An end user whose system is equipped with IP
security protocols can make a local call to an Internet Service Provider (ISP) and gain secure
access to a company network. This reduces the cost of toll charges for traveling employees
and telecommuters.
• Establishing extranet and intranet connectivity with partners: IPsec can be used to secure
communication with other organizations, ensuring authentication and confidentiality and
providing a key exchange mechanism.
• Enhancing electronic commerce security: Even though some Web and electronic commerce
applications have built-in security protocols, the use of IPsec enhances that security. IPsec
guarantees that all traffic designated by the network administrator is both encrypted and
authenticated, adding an additional layer of security to whatever is provided at the
application layer.
The principal feature of IPsec that enables it to support these varied applications is that it
can encrypt and/or authenticate all traffic at the IP level. Thus, all distributed applications
(including remote logon, client/server, e-mail, file transfer, Web access, and so on) can be
secured.
Benefits of IPsec
Some of the benefits of IPsec:
• When IPsec is implemented in a firewall or router, it provides strong security that can be
applied to all traffic crossing the perimeter. Traffic within a company or workgroup does not
incur the overhead of security-related processing.
• IPsec in a firewall is resistant to bypass if all traffic from the outside must use IP and the
firewall is the only means of entrance from the Internet into the organization.
• IPsec is below the transport layer (TCP, UDP) and so is transparent to applications. There
is no need to change software on a user or server system when IPsec is implemented in the
firewall or router. Even if IPsec is implemented in end systems, upper-layer software,
including applications, is not affected.
• IPsec can be transparent to end users. There is no need to train users on security
mechanisms, issue keying material on a per-user basis, or revoke keying material when users
leave the organization.
• IPsec can provide security for individual users if needed. This is useful for offsite workers
and for setting up a secure virtual subnetwork within an organization for sensitive
applications.
Firewalls
A firewall forms a barrier through which the traffic going in each direction must pass. A
firewall security policy dictates which traffic is authorized to pass in each direction. A
firewall may be designed to operate as a filter at the level of IP packets, or may operate at a
higher protocol layer.
Firewalls can be an effective means of protecting a local system or network of systems from
network-based security threats while at the same time affording access to the outside world
via wide area networks and the Internet.
Firewall Characteristics
1. All traffic from inside to outside, and vice versa, must pass through the firewall. This is
achieved by physically blocking all access to the local network except via the firewall. Various
configurations are possible, as explained later in this chapter.
2. Only authorized traffic, as defined by the local security policy, will be allowed to pass.
Various types of firewalls are used, which implement various types of security policies, as
explained later in this chapter.
3. The firewall itself is immune to penetration. This implies the use of a hardened system
with a secured operating system. Trusted computer systems are suitable for hosting a
firewall and often required in government applications.
Types of Firewalls:
1. Packet Filtering Firewall
A packet filtering firewall applies a set of rules to each incoming and outgoing IP packet,
forwarding or discarding the packet based on fields such as IP addresses and TCP port
numbers. The well-known TCP port number for the server-side SMTP application, for
example, is 25. The TCP port number for the SMTP client is a number between 1024 and
65535 that is generated by the SMTP client.
In general, when an application that uses TCP creates a session with a remote host, it creates
a TCP connection in which the TCP port number for the remote (server) application is a
number less than 1024 and the TCP port number for the local (client) application is a number
between 1024 and 65535. The numbers less than 1024 are the “well-known” port numbers
and are assigned permanently to particular applications (e.g., 25 for server SMTP). The
numbers between 1024 and 65535 are generated dynamically and have temporary
significance only for the lifetime of a TCP connection.
A simple packet filtering firewall must permit inbound network traffic on all these high-
numbered ports for TCP-based traffic to occur. This creates a vulnerability that can be
exploited by unauthorized users.
2. Stateful Inspection Firewall
A stateful inspection packet firewall tightens up the rules for TCP traffic by creating a
directory of outbound TCP connections. There is an entry for each currently established
connection. The packet filter will now allow incoming traffic to high-numbered ports only
for those packets that fit the profile of one of the entries in this directory.
A stateful packet inspection firewall reviews the same packet information as a packet
filtering firewall, but also records information about TCP connections (Figure c). Some
stateful firewalls also keep track of TCP sequence numbers to prevent attacks that depend
on the sequence number, such as session hijacking. Some even inspect limited amounts of
application data for some well-known protocols like FTP, IM and SIPS commands, in order
to identify and track related connections.
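The outbound-connection directory can be sketched as follows: only inbound packets that are the reverse of a recorded outbound connection are allowed in. The addresses and ports below are illustrative, and real firewalls also track state such as TCP flags and sequence numbers.

```python
# Directory of established outbound TCP connections, keyed by
# (src_ip, src_port, dst_ip, dst_port) as seen from inside the network.
connections = set()

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    """Add an entry when an inside host initiates a connection."""
    connections.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port) -> bool:
    """Permit inbound traffic to a high-numbered port only if it is the
    reverse of a connection already in the directory."""
    return (dst_ip, dst_port, src_ip, src_port) in connections

# Inside host opens an SMTP connection to an outside server.
record_outbound("192.168.1.5", 50514, "203.0.113.9", 25)

allowed = allow_inbound("203.0.113.9", 25, "192.168.1.5", 50514)    # reply traffic
blocked = allow_inbound("198.51.100.7", 25, "192.168.1.5", 50514)   # unsolicited
```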
3. Circuit-Level Gateway
A circuit-level gateway does not permit an end-to-end TCP connection; rather, the gateway
sets up two TCP connections, one between itself and a TCP user on an inner host and one
between itself and a TCP user on an outside host. Once the two connections are established,
the gateway typically relays TCP segments from one connection to the other without
examining the contents. The security function consists of determining which connections
will be allowed.
A typical use of circuit-level gateways is a situation in which the system administrator trusts
the internal users. The gateway can be configured to support application-level or proxy
service on inbound connections and circuit-level functions for outbound connections. In this
configuration, the gateway can incur the processing overhead of examining incoming
application data for forbidden functions but does not incur that overhead on outgoing data.
4. Application-Level Gateway
An application-level gateway, also called an application proxy, acts as a relay of application-
level traffic (Figure d). The user contacts the gateway using a TCP/IP application, such as
Telnet or FTP, and the gateway asks the user for the name of the remote host to be accessed.
When the user responds and provides a valid user ID and authentication information, the
gateway contacts the application on the remote host and relays TCP segments containing the
application data between the two endpoints. If the gateway does not implement the proxy
code for a specific application, the service is not supported and cannot be forwarded across
the firewall. Further, the gateway can be configured to support only specific features of an
application that the network administrator considers acceptable while denying all other
features.
Application-level gateways tend to be more secure than packet filters. Rather than trying to
deal with the numerous possible combinations that are to be allowed and forbidden at the
TCP and IP level, the application-level gateway need only scrutinize a few allowable
applications. In addition, it is easy to log and audit all incoming traffic at the application level.
A prime disadvantage of this type of gateway is the additional processing overhead on each
connection. In effect, there are two spliced connections between the end users, with the
gateway at the splice point, and the gateway must examine and forward all traffic in both
directions.
5. Next Generation Firewall (NGFW)
A next generation firewall (NGFW) is, as Gartner defines it, a “deep-packet inspection
firewall that moves beyond port/protocol inspection and blocking to add application-level
inspection, intrusion prevention, and bringing intelligence from outside the firewall.”
VPN: Working
At its most basic level, VPN tunneling creates a point-to-point connection inaccessible to
unauthorized users. To create the tunnel, VPNs use a tunneling protocol over existing
networks. Different VPNs use different tunneling protocols, such as OpenVPN or Secure
Socket Tunneling Protocol (SSTP). The tunneling protocol provides data encryption at
varying strengths depending on the platform using the VPN, such as Windows operating
system (OS) using SSTP. The endpoint device must run a VPN client (software application)
locally or in the cloud. The client runs in the background and isn't noticeable to end users
unless it creates performance issues.
VPNs associate a user's search history with the VPN server's IP address. VPN services have
servers located in different geographic areas. By using a VPN tunnel, a user's device connects
to another network. This hides its IP address and encrypts the data, shielding private
information from attackers or others hoping to gain access to an individual's activities. The
tunnel connects a user's device to an exit node in another distant location, which makes it
seem like the user is from that location.
Types of VPNs
Remote access
A remote access VPN securely connects a device outside the corporate office. These devices
are known as endpoints and may be laptops, tablets, or smartphones. Advances in VPN
technology have allowed security checks to be conducted on endpoints to make sure they
meet a certain posture before connecting. Think of remote access as computer to network.
Site-to-site
A site-to-site VPN connects the corporate office to branch offices over the Internet. Site-to-
site VPNs are used when distance makes it impractical to have direct network connections
between these offices. Dedicated equipment is used to establish and maintain a connection.
Think of site-to-site access as network to network.
Mobile VPN
In a mobile VPN, the server still sits at the edge of the organization's network, enabling
secure tunneled access by authenticated, authorized clients. Mobile VPN tunnels are not tied
to physical IP addresses, however. Instead, each tunnel is bound to a logical IP address. That
logical IP address stays bound to the mobile device. An effective mobile VPN provides
continuous service to users and can switch across access technologies and multiple public
and private networks.
Hardware VPN
Hardware VPNs offer a number of advantages over software-based VPNs. In addition to
offering enhanced security, hardware VPNs can provide load balancing for large client loads.
Web browser interfaces manage administration. A hardware VPN is more expensive than a
software-based one. Because of the cost, hardware VPNs are more viable for larger
businesses. Several vendors offer devices that can function as hardware VPNs.
Cloud VPN
Cloud VPNs provide users with a secure internet connection via the cloud. This is beneficial
for organizations with cloud-based network infrastructure or distributed and remote
workforces. With a cloud VPN, users can access corporate networks remotely regardless of
location.
VPN appliance
A VPN appliance, also known as a VPN gateway appliance or SSL VPN appliance, is a network
device with enhanced security features. It is a router that provides protection, authorization,
authentication and encryption for VPNs.
Dynamic multipoint VPN (DMVPN)
In a DMVPN, when one spoke needs to communicate with another -- to place a VoIP call, for
example -- the spoke contacts the hub, obtains the needed information about the other end,
and creates a dynamic IPsec VPN tunnel directly between the two spokes.
References
Stallings, W. (2011). Cryptography and Network Security: Principles and Practice. New York: Prentice Hall.