Peer To Peer File Sharing
Abstract
This project presents an anonymous peer-to-peer (P2P) file sharing system, referred to as APTPFS. A P2P network consists of a large number of peers interconnected to share all kinds of digital content. A key weakness of most existing P2P systems is the lack of anonymity. Without anonymity, it is possible for third parties to identify the participants involved. In their seminal article, Bennet and Grothoff conclude that there are three basic anonymity requirements in a P2P system. First, an anonymous P2P system should make it impossible for third parties to identify the participants involved. Second, an anonymous P2P system should guarantee that only the content receiver knows the content. Third, an anonymous P2P system should allow the content publisher to plausibly deny that the content originated from him or her. In this report, various techniques of P2P networking and cryptography are presented. This is followed by a discussion of the design of our APTPFS system and the issues associated with its implementation. The testing environment and the analysis of the results are then fully discussed. We conclude by outlining future work.
Introduction
In the past few years, personal mobile devices such as laptops, PDAs, and smartphones have become more and more popular. Indeed, the number of smartphone users increased by 118 million across the world in 2007 and was expected to reach around 300 million by 2013, while the number of mobile search users (through smartphones, feature phones, tablets, etc.) was estimated to reach 901.1 million in 2013. This rapid growth of mobile users points to a promising future in which they can freely share files with each other whenever and wherever they wish. Currently, mobile users interact with each other and share files via an infrastructure formed by geographically distributed base stations. However, users may find themselves in an area without wireless service (e.g., mountainous and rural areas). Moreover, users may wish to reduce the cost of expensive infrastructure network data.

The P2P file sharing model makes large-scale networks a blessing instead of a curse: nodes share files directly with each other without a centralized server. Wired P2P file sharing systems (e.g., BitTorrent and KaZaA) have already become a popular and successful paradigm for file sharing among millions of users. The successful deployment of P2P file sharing systems and the aforementioned impediments to file sharing in MANETs make P2P file sharing over MANETs (P2P MANETs in short) a promising complement to the current infrastructure model for realizing pervasive file sharing for mobile users. As mobile devices are carried by people who usually belong to certain social relationships, this report focuses on P2P file sharing in a disconnected MANET community consisting of mobile users with social network properties. In such a file sharing system, nodes meet and exchange requests and files in the form of text, short videos, and voice clips in different interest categories.

Peer-to-peer (P2P) file sharing has emerged as the primary siphon of Internet bandwidth. Beginning with the Napster phenomenon of the late 1990s, the popularity of P2P has dramatically increased the volume of data transferred between Internet users. As a result, a small percentage of the global Internet subscriber base is consuming a disproportionate share of bandwidth – certainly more than the per-user amounts provisioned by service providers. Recent studies suggest that file sharing activity accounts for up to 60% of the traffic on any given service provider network. While asymmetric bandwidth consumption is a legitimate concern on its own, the ad-hoc nature of P2P communication means that a large amount of data traffic is pushed off-network (P2P clients do not care where other P2P clients are located), driving up network access point (NAP) fees. By inflating the financial pressure on service providers' already low margins, P2P is quickly undermining the business model for basic Internet access. In the face of surging off-network traffic, the traditional provider approach – managing network costs through oversubscription – is no longer sufficient. Yet the enormous popularity of file sharing and the breadth of competing protocols make blocking P2P traffic a practical impossibility. Service providers have begun to experiment with tiered pricing based on monthly bandwidth consumption, or with capping the amount of bandwidth available to P2P applications, but these approaches can easily be positioned as punitive by Internet lobby groups and competitors, generating dissatisfaction amongst subscribers and potentially aggravating customer churn. This document explores the technology and infrastructure behind peer-to-peer file sharing, and its implications for long-term service provider profitability.

Peer-to-peer file sharing systems have become popular for sharing, exchanging, and transferring files among many users over the Internet. In a peer-to-peer network a central point is not necessary, and peer-to-peer sharing is receiving growing attention in computing. Many peer-to-peer file sharing systems are available between computers, such as Napster, Gnutella, and Freenet, and most Internet traffic is due to file sharing systems. Napster uses a server to mediate between users, and each user must contact the server in order to locate data. Gnutella, by contrast, is decentralized: a peer forwards its query to the other peers in the network. The main issue in peer-to-peer file sharing is deciding which protocol to use for finding and indexing information while generating the least amount of Internet traffic.

P2P file sharing programs allow computers to download files and make them available to other users on the network. P2P users can designate the drives and folders from which files can be shared. In turn, other users can download and view any files stored in these designated areas. People who use P2P file sharing software can inadvertently share files. They might accidentally choose to share drives or folders that contain sensitive information, or they could save a private file to a shared drive or folder by mistake, making that private file available to others. In addition, viruses and other malware can change the drives or folders designated for sharing, putting private files at risk. As a result, instead of just sharing music, a user's personal tax returns, private medical records or work documents could end up in general circulation on P2P networks. Once another user on a P2P network has downloaded someone's files, those files cannot be retrieved or deleted. What is more, files can be shared among computers long after they have been deleted from the original source computer. And if there are security flaws or vulnerabilities in the P2P file sharing software or on an organization's network, the P2P program could open the door to attacks on other computers on the network.
EXISTING SYSTEM:
Content distribution in the existing system is centralized: content is distributed from a centralized server to all clients requesting the document. Clients send requests to the centralized server to download a file; the server accepts each request and sends the file back as the response. In most client-server setups, the server is a dedicated computer whose entire purpose is to distribute files. There are many peer-to-peer file sharing applications for computers and mobiles that share files with each other over the network; some of these applications run on computers and some on mobiles. However, peer-to-peer applications cause heavy traffic on the Internet. SymTorrent, BitTorrent, Gnutella, Napster, and eMule are the most popular peer-to-peer applications. Many studies have been performed on the algorithms, modelling, and measurement of different peer-to-peer applications. Some of these are reviewed in this report in order to help the developer create a good system between mobile devices or other potential devices.

The decision to ban or allow P2P file sharing programs on an organization's network involves a number of factors. For example, what are the types and locations of sensitive information on the network? Which computers have access to files with sensitive information? What security measures are already in place to protect those files? If the network holds sensitive information that is not necessary to conduct business, the best option is to delete it securely and permanently. To help determine the kinds of files that might be deleted, read Protecting Personal Information. But if the network holds sensitive information that is necessary to conduct business, the benefits of using P2P file sharing programs must be weighed against the security risks associated with them. Is there a business need to share files outside the organization? If so, are there more secure ways for employees to share files? Whether P2P file sharing programs are banned or allowed on the network, it is important to create a policy and take the appropriate steps to implement and enforce it. That will reduce the risk that any sensitive information is shared unintentionally.
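To make the centralized baseline concrete, the following is a minimal, illustrative sketch of the client side of the model described at the start of this section: the client simply asks the central server for a file over HTTP and saves the response. The server URL and file name are placeholders invented for this example, not part of the actual system.

// Client side of the centralized model: one server, one request, one response.
// The URL below is a placeholder for the central server's address.
using System.Net;

class CentralizedClient
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // The client contacts the central server and downloads the entire file.
            client.DownloadFile("http://central-server.example/files/report.pdf", "report.pdf");
        }
    }
}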
Disadvantages
Scalability problems arise when many requests arrive at the same time.
Servers need heavy processing power.
Downloading takes hours as the number of clients increases.
Heavy storage is required in the case of multimedia content.
PROPOSED SYSTEM:
Peer-to-peer content distribution provides more resilience and higher availability through wide-scale replication of content at large numbers of peers. A P2P content distribution community is a collection of intermittently connected nodes, with each node contributing storage, content and bandwidth to the rest of the community. Early peer-to-peer file sharing networks used a centralized server system that controlled traffic amongst the users. The servers stored directories of the users' shared files and were updated when a user logged on. In this centralized peer-to-peer model, a user sent a search for what they were looking for to the centralized server; the server then sent back a list of peers that held the data and facilitated the connection and download. This server-client system is quick and efficient because the central directory is constantly updated and all users have to be registered to use the program.

Peer-to-peer (P2P) file sharing technology is a collaborative technology based on individual users making computer resources freely available through their Internet connections. The resources include files, computing services, network bandwidth, etc. In P2P network systems there is little or no central control, and the resources are shared among all the individuals. Each node in the network plays the role of both a client and a server. However, the main hurdle is to make such resources readily available without connecting to a powerful central web server. P2P works by using simultaneous, multiple Internet connections between peers who join together in the same connected "community". Traditional Internet clients and P2P clients differ in that P2P clients are sometimes called servents, since they act as both servers and clients: a servent acts as a server when it provides information or replies to queries from other servents, and as a client when it requests information from other servents. When large numbers of such servent applications form a network together, they constitute a powerful and robust information and computational infrastructure that is independent of any single central server. The success of P2P systems can be attributed to a collection of sophisticated client programs that provide stability and effective, decentralized search services.

Traditional Internet service can be time consuming, expensive and bulky; a peer-to-peer (P2P) file sharing system can be a solution to this problem. In distance learning this approach has great scope, as it provides a healthy network between the students, researchers and faculty of the same community. Nowadays distance education is a very popular system through which anybody can upgrade their qualifications or acquire knowledge in various fields. It is especially suited to those who live in remote and rural areas, the physically challenged, and working people. A peer-to-peer file sharing system gives them the right platform to gather knowledge, since such a system is cheap, time saving and user friendly, and it can play a significant role in future Internet services for distance education. This report discusses the peer-to-peer file sharing system and its importance in distance education.
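To illustrate the servent idea described above, the following is a rough sketch, not the project's actual implementation: a single node that simultaneously listens for file requests from other peers (server role) and can request a file from another peer (client role). The port number, shared folder, file name and the one-line request protocol are assumptions made only for this example.

// Sketch of a "servent": the same node acts as a server (TcpListener) and as a client (TcpClient).
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Servent
{
    const int Port = 9000;                      // assumed listening port
    const string SharedFolder = @"C:\Shared";   // assumed shared folder

    static void Main()
    {
        // Server role runs in the background; client role is exercised once.
        new Thread(ServeForever) { IsBackground = true }.Start();
        Thread.Sleep(500);                      // crude wait for the listener to start (sketch only)
        byte[] data = Download("127.0.0.1", "example.txt");
        Console.WriteLine("Received {0} bytes", data.Length);
    }

    // Server role: answer each incoming request with the requested file's bytes.
    static void ServeForever()
    {
        var listener = new TcpListener(IPAddress.Any, Port);
        listener.Start();
        while (true)
        {
            using (TcpClient peer = listener.AcceptTcpClient())
            using (NetworkStream stream = peer.GetStream())
            using (var reader = new StreamReader(stream))
            {
                string fileName = reader.ReadLine();   // request = one line with the file name
                byte[] bytes = File.ReadAllBytes(Path.Combine(SharedFolder, fileName));
                stream.Write(bytes, 0, bytes.Length);
            }
        }
    }

    // Client role: ask another servent for a file and read the whole reply.
    static byte[] Download(string host, string fileName)
    {
        using (var peer = new TcpClient(host, Port))
        using (NetworkStream stream = peer.GetStream())
        {
            var writer = new StreamWriter(stream) { AutoFlush = true };
            writer.WriteLine(fileName);                // send the request line
            using (var ms = new MemoryStream())
            {
                stream.CopyTo(ms);                     // read until the peer closes the connection
                return ms.ToArray();
            }
        }
    }
}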
System Requirements
Hardware Requirements:
System : Pentium IV 2.4 GHz.
Hard Disk : 240 GB.
Monitor : 14" Colour Monitor.
Mouse : Optical Mouse.
Ram : 2 GB.
Software Requirements:
Operating system : Windows 7 Ultimate.
Coding Language : ASP.Net with C#
Front-End : Visual Studio 2010 Professional.
Database : SQL Server 2008.
Software Description:
.NET Framework
The .NET Framework is a computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfil the following objectives:
To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.
To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. One can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness.
In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.
The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features.
The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.
The .NET Framework is a multi-language environment for building, deploying, and running XML Web services and applications. It consists of three main parts: the common language runtime, the framework class library, and ASP.NET.
ASP.NET
ASP.NET is more than the next version of Active Server Pages (ASP); it is a unified web development platform that provides the services necessary for developers to build enterprise-class Web applications. While ASP.NET is largely syntax compatible with ASP, it also provides a new programming model and infrastructure for more secure, scalable and stable applications. You can augment your existing ASP applications by incrementally adding ASP.NET functionality to them.
ASP.NET is a compiled, .NET-based environment; you can author applications in any .NET compatible language, including Visual Basic .NET, C#, and JScript .NET. Additionally, the entire .NET Framework is available to any ASP.NET application. Developers can easily access the benefits of these technologies, which include the managed common language runtime environment, type safety, inheritance, and so on.
ASP.NET has been designed to work seamlessly with WYSIWYG HTML editors and other programming tools, including Microsoft Visual Studio .NET. Not only does this make Web development easier, it also provides all the benefits that these tools have to offer, including a GUI that developers can use to drop server controls onto a Web page, and fully integrated debugging support.
Developers can use Web Forms or XML Web services when creating an ASP.NET application, or combine these in any way they see fit. Each is supported by the same infrastructure, which allows you to use authentication schemes, cache frequently used data, or customize your application's configuration, to name only a few possibilities.
Web Forms allow you to build powerful forms-based Web pages. When
building these pages, you can use ASP.NET server controls to create common UI
elements, and program them for common tasks. These controls allow you to rapidly
build a Web Form out of reusable built-in or custom components, simplifying the
code of a page.
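As a small, hypothetical illustration (the page, control names and greeting logic are invented for this example, not taken from the project), a Web Form containing a TextBox, a Button and a Label can be programmed for a common task in its code-behind file:

// Code-behind sketch for a hypothetical Web Form with a TextBox ("txtName"),
// a Button ("btnGreet") and a Label ("lblResult") declared in the .aspx markup.
using System;

public partial class GreetingPage : System.Web.UI.Page
{
    protected void btnGreet_Click(object sender, EventArgs e)
    {
        // Server controls are programmed like ordinary objects on the server.
        lblResult.Text = "Hello, " + txtName.Text;
    }
}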
An XML Web service provides the means to access server functionality
remotely. Using XML Web services, businesses can expose programmatic interfaces
to their data or business logic, which in turn can be obtained and manipulated by
client and server applications.
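As a minimal illustration, and assuming the classic ASMX model that ships with the .NET Framework (the service and method names below are examples only), a business could expose such a programmatic interface like this:

// Minimal ASMX-style XML Web service exposing one method to remote callers.
using System.Web.Services;

[WebService(Namespace = "http://example.org/fileshare/")]
public class CatalogService : WebService
{
    [WebMethod]
    public string[] ListSharedFiles()
    {
        // In a real service this would come from data or business logic.
        return new string[] { "report.pdf", "song.mp3" };
    }
}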
XML Web services enable the exchange of data in client-server or server-server scenarios, using standards like HTTP and XML messaging to move data across firewalls. XML Web services are not tied to a particular component technology or object-calling convention. As a result, programs written in any language, using any component model, and running on any operating system can access XML Web services. Each of these models can take full advantage of all ASP.NET features, as well as the power of the .NET Framework and the .NET Framework common language runtime. These features and how you can use them are outlined as follows.
If you have ASP development skills, the new ASP.NET programming model will seem very familiar to you. However, the ASP.NET object model has changed significantly from ASP, making it more structured and object-oriented. Unfortunately this means that ASP.NET is not fully backward compatible; almost all existing ASP pages will have to be modified to some extent in order to run under ASP.NET.
In addition, major changes in Visual Basic .NET mean that existing ASP pages written with Visual Basic Scripting Edition typically will not port directly to ASP.NET. In most cases, though, the necessary changes will involve only a few lines of code. Accessing databases from ASP.NET applications is an often-used technique for displaying data to Web site visitors. ASP.NET makes it easier than ever to access databases for this purpose, and it also allows you to manage the database from your code.
ASP.NET provides a simple model that enables Web developers to write logic that runs at the application level. Developers can write this code in the Global.asax text file or in a compiled class deployed as an assembly. This logic can include application-level events, but developers can easily extend this model to suit the needs of their Web application.
ASP.NET takes advantage of performance enhancements found in the .NET Framework and common language runtime. Additionally, it has been designed to offer significant performance improvements over ASP and other Web development platforms. All ASP.NET code is compiled, rather than interpreted, which allows early binding, strong typing, and just-in-time (JIT) compilation to native code, to name only a few of its benefits. ASP.NET is also easily factorable, meaning that developers can remove modules (the session state module, for instance) that are not relevant to the application they are developing.
ASP.NET offers the Trace context class, which allows you to write custom
debug statements to your pages as you develop them. They appear only when you
have enabled tracing for a page or entire application. Enabling tracing also appends
details about a request to the page, or, if you so specify, to a custom trace viewer that
is stored in the root directory of your application.
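For example (a hypothetical page with tracing enabled via Trace="true" in its @ Page directive), custom trace messages can be written from the code-behind; they appear only when tracing is turned on:

// Writing custom trace output from a page's code-behind (illustrative only).
using System;

public partial class SearchPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Trace.Write("Search", "Search started");      // informational trace message
        Trace.Warn("Search", "No peers responded");   // warning, highlighted in the trace output
    }
}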
The .NET Framework and ASP.NET provide default authorization and
authentication schemes for Web applications. You can easily remove, add to, or
replace these schemes, depending upon the needs of your application.
ASP.NET configuration settings are stored in XML-based files, which are
human readable and writable. Each of your applications can have a distinct
configuration file and you can extend the configuration scheme to suit your
requirements.
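As a brief sketch (the key name "MaxPeers" is a placeholder invented for this example), a custom setting stored in the XML-based Web.config can be read from code like this:

// Reading a custom <appSettings> entry from the XML-based configuration file.
// Web.config excerpt assumed:  <appSettings><add key="MaxPeers" value="8" /></appSettings>
using System;
using System.Configuration;

class ConfigExample
{
    static void Main()
    {
        int maxPeers = int.Parse(ConfigurationManager.AppSettings["MaxPeers"]);
        Console.WriteLine("MaxPeers = " + maxPeers);
    }
}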
Applications are said to be running side by side when they are installed on the same computer but use different versions of the .NET Framework. IIS 6.0 uses a new process model called worker process isolation mode, which is different from the process model used in previous versions of IIS. ASP.NET uses this process model by default when running on Windows Server 2003.
ADO.NET:
ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as well as data sources exposed through OLE DB and XML. Data-sharing consumer applications can use ADO.NET to connect to these data sources and to retrieve, manipulate and update data.
ADO.NET cleanly factors data access from data manipulation into discrete components that can be used separately or in tandem. ADO.NET includes .NET Framework data providers for connecting to a database, executing commands and retrieving results. Those results are either processed directly, or placed in an ADO.NET DataSet object so that they can be exposed to the user in an ad-hoc manner, combined with data from multiple sources, or remoted between tiers.
The ADO.NET classes are found in System.Data.dll and are integrated with the XML classes found in System.Xml.dll. When compiling code that uses the System.Data namespace, reference both System.Data.dll and System.Xml.dll.
Data processing has traditionally relied primarily on a connection-based, two-tier model. As data processing increasingly uses multi-tier architectures, programmers are switching to a disconnected approach to provide better scalability for their applications.
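The following is a minimal sketch of the disconnected pattern just described; the connection string, table and column names are placeholders rather than this project's actual schema.

// Connected part (data provider) fills a disconnected DataSet, which can then be
// used, combined with other data, or passed between tiers without an open connection.
using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        string connStr = @"Server=.\SQLEXPRESS;Database=P2PShare;Integrated Security=true";

        using (var conn = new SqlConnection(connStr))
        using (var adapter = new SqlDataAdapter("SELECT FileName, Owner FROM SharedFiles", conn))
        {
            var ds = new DataSet();
            adapter.Fill(ds, "SharedFiles");   // opens and closes the connection internally

            foreach (DataRow row in ds.Tables["SharedFiles"].Rows)
                Console.WriteLine("{0} shared by {1}", row["FileName"], row["Owner"]);
        }
    }
}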
SQL Server
SQL Server is available in several editions.
SQL Server Enterprise: used for high-end, large-scale and mission-critical business workloads. It provides high-end security, advanced analytics, machine learning, etc.
SQL Server Standard: suitable for mid-tier applications and data marts. It includes basic reporting and analytics.
SQL Server Web: designed as a low total-cost-of-ownership option for Web hosters. It provides scalability, affordability, and manageability for small to large-scale Web properties.
Key Components and Services of SQL Server
Database Engine: handles storage, rapid transaction processing, and data security.
SQL Server: this service starts, stops, pauses, and continues an instance of Microsoft SQL Server. Executable name: sqlservr.exe.
SQL Server Agent: performs the role of a task scheduler. It can be triggered by any event or run on demand. Executable name: sqlagent.exe.
SQL Server Browser: listens for incoming requests and connects them to the desired SQL Server instance. Executable name: sqlbrowser.exe.
SQL Server Full-Text Search: lets users run full-text queries against character data in SQL tables. Executable name: fdlauncher.exe.
SQL Server VSS Writer: allows backup and restoration of data files when SQL Server is not running. Executable name: sqlwriter.exe.
SQL Server Analysis Services (SSAS): provides data analysis, data mining and machine learning capabilities. SQL Server integrates with the R and Python languages for advanced analytics. Executable name: msmdsrv.exe.
SQL Server Reporting Services (SSRS): provides reporting and decision-making capabilities, including integration with Hadoop. Executable name: ReportingServicesService.exe.
SQL Server Integration Services (SSIS): provides Extract-Transform-Load (ETL) capabilities for moving different types of data from one source to another; it can be viewed as converting raw information into useful information. Executable name: MsDtsSrvr.exe.
SQL Server allows you to run multiple services at once, with each service having separate logins, ports, databases, etc. Instances are divided into two kinds:
Primary (default) instances
Named instances
There are two ways to access the primary instance: by the server name or by its IP address. Named instances are accessed by appending a backslash and the instance name. For example, to connect to an instance named xyz on the local server, you would use 127.0.0.1\xyz. From SQL Server 2005 onwards, up to 50 instances may run simultaneously on a server.
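From application code the instance name simply becomes part of the server address. For example, a connection string for the named instance above might look as follows (the database name is a placeholder):

// Connecting to the named instance "xyz" on the local machine.
using System;
using System.Data.SqlClient;

class NamedInstanceExample
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            @"Server=127.0.0.1\xyz;Database=P2PShare;Integrated Security=true"))
        {
            conn.Open();
            Console.WriteLine(conn.ServerVersion);
        }
    }
}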
Note that even though you can have multiple instances on the same server, only one of them can be the default instance; the rest must be named instances. All instances can run concurrently, and each instance runs independently of the others.
You can also have different versions of SQL Server on a single machine; each installation works independently of the other installations.
Instances help reduce the cost of operating SQL Server, especially licensing cost: different services can be obtained from different instances, so there is no need to purchase one license covering every service. This is the main benefit of having many SQL Server instances on a single machine. Different instances can also be used for development, production and test purposes.
When different services run on different SQL Server instances, you can focus on securing the instance that runs the most sensitive service.
A SQL Server instance can fail, leading to an outage of services. This explains the importance of having a standby server that can be brought in if the current server fails, which can easily be achieved using SQL Server instances.
C# History
The history of the C# language is interesting to know; here we give a brief overview. C# has evolved considerably since its first release in 2002. It was introduced with .NET Framework 1.0, and at the time of writing the current version of C# is 5.0.
C# Features
C# is an object-oriented programming language. It provides the features listed below.
1. Simple
2. Modern programming language
3. Object oriented
4. Type safe
5. Interoperability
6. Scalable and Updateable
7. Component oriented
8. Structured programming language
9. Rich Library
10. Fast speed
1) Simple
C# is a simple language in the sense that it provides a structured approach (breaking the problem into parts), a rich set of library functions, data types, etc.
2) Modern Programming Language
C# programming is based upon current trends, and it is very powerful and simple for building scalable, interoperable and robust applications.
3) Object Oriented
4) Type Safe
C# type-safe code can only access the memory locations that it has permission to access. This improves the security of the program.
5) Interoperability
Interoperability enables C# programs to do almost anything that a native C++ application can do.
6) Scalable and Updateable
C# is an automatically scalable and updateable programming language. To update an application, we delete the old files and replace them with new ones.
7) Component Oriented
8) Structured Programming Language
C# is a structured programming language in the sense that we can break the program into parts using functions, so it is easy to understand and modify.
9) Rich Library
C# Simple Example
class Program
{
    static void Main(string[] args)
    {
        System.Console.WriteLine("Hello World!");
    }
}
Output:
Hello World!
Description
Program: the class name. A class is a blueprint or template from which objects are created. It can have data members and methods; here, it has only the Main method.
static: a keyword which means an object is not required to access static members, which saves memory.
void: the return type of the method. It doesn't return any value, so a return statement is not required.
Main: the method name and the entry point of any C# program. Whenever we run a C# program, the Main() method is invoked before any other method; it represents the start-up of the program.
string[] args: used for command-line arguments in C#. When running a C# program we can pass values; these values are known as arguments and can be used inside the program.
If we write using System before the class, we do not need to specify the System namespace when accessing classes from that namespace; for example, we could then use the Console class without writing System.Console.
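For comparison, the same program written with a using directive; the Console class can then be used without the System prefix:

using System;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World!");   // no System. prefix needed
    }
}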
C# operators
An operator is simply a symbol that is used to perform operations. There can be many types
of operations like arithmetic, logical, bitwise etc.
o Arithmetic Operators
o Relational Operators
o Logical Operators
o Bitwise Operators
o Assignment Operators
o Unary Operators
o Ternary Operators
o Misc Operators
Precedence of Operators in C#
Operator precedence specifies which operator is evaluated first and which next. Associativity specifies the direction in which operators of the same precedence are evaluated, either left to right or right to left.
The "data" variable will contain 35 because * (the multiplicative operator) is evaluated before + (the additive operator).
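The snippet this refers to is not reproduced in the report; a minimal example consistent with the stated result would be:

class PrecedenceExample
{
    static void Main()
    {
        int data = 5 + 10 * 3;                // * is evaluated before +, so data is 35
        System.Console.WriteLine(data);       // prints 35
    }
}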
C# Keywords
A keyword is a reserved word. You cannot use it as a variable name, constant name etc.
Modules
1. Parallel Downloading
MODULES DESCRIPTION:
Parallel Downloading
The file is divided into k chunks of equal size and k simultaneous connections are used. The client downloads a file from k peers at a time, and each peer sends one chunk to the client. Parallel downloading is one way to reduce the download time: a file F is divided into k chunks of equal size and the single file is downloaded in parallel over k simultaneous connections. Parallel downloading performs better than single-connection downloading, although in a network with a single user it may not reduce the download time as much as expected. If the chunk size is made proportional to the service capacity of each source peer, parallel downloading can produce a better result, but such a scheme requires global information about the network.
Instead of waiting to receive a complete chunk, we can randomly switch between the source peers based on time. The file is divided into many chunks and the user downloads the chunks sequentially, one at a time; the client randomly chooses a source peer at each time slot and downloads chunks from that peer during the given slot. There are two schemes in this method:
(i) Permanent Connection: when the downloader wishes to download a file, it chooses one of the given source peers at random with equal probability.
(ii) Random Periodic Switching: the downloader randomly chooses a source peer at each time slot, independently of everything else. It has been observed that both the spatial heterogeneity and the temporal correlation in the service capacity can significantly increase the average download time of the users in the network.
Dynamically Distributed Parallel Periodic Switching (D2PS) effectively removes the correlations in the capacity fluctuation and the heterogeneity in space, thus greatly reducing the average download time. There are two schemes in this method:
1) Parallel Permanent Connection: the downloader randomly chooses multiple (k) source peers out of the N possible source peers and keeps a permanent connection to them for the fixed time slot t.
2) Parallel Random Periodic Switching: the downloader randomly chooses k source peers out of the N possible source peers and makes parallel connections to those k source peers for each randomly selected time slot.
After the downloader has received the control packet, the chunk size for each peer is decided. The downloader connects to a number of peers in the group and downloads the file in parallel from these different peers. If the available bandwidth increases, downloading can complete before the specified time; if the available bandwidth decreases, the downloader searches for another peer with good bandwidth and replaces the slow one. After downloading all chunks from all sources, the system checks whether the entire file has been downloaded.
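The following is a highly simplified sketch of the idea, not the project's actual implementation: the file is split into k equal chunks, the chunks are fetched in parallel, and a source peer is re-chosen at random for each chunk (standing in for the per-time-slot switching described above). Peers are modelled abstractly as delegates, and all sizes and counts are made-up example values.

// Simplified sketch of parallel downloading with random switching between source peers.
using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelDownloadSketch
{
    // A "peer" is modelled as a delegate: given (offset, length) it returns the bytes.
    delegate byte[] Peer(long offset, int length);

    static byte[] Download(long fileSize, int k, Peer[] peers)
    {
        int chunkSize = (int)(fileSize / k);        // k equal-size chunks (remainder ignored here)
        var chunks = new byte[k][];
        var rnd = new Random();

        Parallel.For(0, k, i =>
        {
            Peer source;
            lock (rnd) source = peers[rnd.Next(peers.Length)];   // random source per chunk
            chunks[i] = source(i * (long)chunkSize, chunkSize);
        });

        return chunks.SelectMany(c => c).ToArray(); // reassemble the chunks in order
    }

    static void Main()
    {
        // Two dummy peers that just return zero-filled buffers of the requested length.
        Peer peerA = (offset, length) => new byte[length];
        Peer peerB = (offset, length) => new byte[length];

        byte[] file = Download(1024, 4, new[] { peerA, peerB });
        Console.WriteLine("Downloaded {0} bytes", file.Length);
    }
}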
Dataflow Diagram
Implementation
namespace P2p_Using_Gnutella
// Designer-generated code for the "downloadfiles" form (excerpt). The form lists
// shared files, supports searching, and opens or saves a selected file.
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
// Designer-generated method that creates and lays out the form's controls.
private void InitializeComponent()
{
this.panel1.SuspendLayout();
this.panel2.SuspendLayout();
this.groupBox1.SuspendLayout();
this.SuspendLayout();
//
// listBox1
//
this.listBox1.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(224)))),
((int)(((byte)(224)))), ((int)(((byte)(224)))));
this.listBox1.BorderStyle = System.Windows.Forms.BorderStyle.None;
this.listBox1.FormattingEnabled = true;
this.listBox1.Name = "listBox1";
this.listBox1.TabIndex = 0;
this.listBox1.SelectedIndexChanged += new
System.EventHandler(this.listBox1_SelectedIndexChanged);
//
// button1
//
this.button1.Name = "button1";
this.button1.TabIndex = 2;
this.button1.Text = "open";
this.button1.UseVisualStyleBackColor = true;
//
// button2
//
this.button2.Name = "button2";
this.button2.TabIndex = 3;
this.button2.Text = "save";
this.button2.UseVisualStyleBackColor = true;
//
// textBox2
//
this.textBox2.Name = "textBox2";
this.textBox2.TabIndex = 5;
//
// button3
//
this.button3.Name = "button3";
this.button3.TabIndex = 6;
this.button3.Text = "search";
this.button3.UseVisualStyleBackColor = true;
//
// textBox3
//
this.textBox3.Name = "textBox3";
this.textBox3.TabIndex = 8;
//
// listBox2
//
this.listBox2.FormattingEnabled = true;
this.listBox2.Name = "listBox2";
this.listBox2.TabIndex = 1;
this.listBox2.SelectedIndexChanged += new
System.EventHandler(this.listBox2_SelectedIndexChanged);
//
// panel1
//
this.panel1.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(192)))),
((int)(((byte)(192)))), ((int)(((byte)(255)))));
this.panel1.Controls.Add(this.label2);
this.panel1.Controls.Add(this.textBox2);
this.panel1.Controls.Add(this.button3);
this.panel1.Name = "panel1";
this.panel1.TabIndex = 9;
//
// label1
//
this.label1.AutoSize = true;
this.label1.Name = "label1";
this.label1.TabIndex = 10;
//
// panel2
//
this.panel2.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(224)))),
((int)(((byte)(224)))), ((int)(((byte)(224)))));
this.panel2.Controls.Add(this.label1);
this.panel2.Name = "panel2";
this.panel2.TabIndex = 11;
//
// button4
//
this.button4.Name = "button4";
this.button4.TabIndex = 12;
this.button4.Text = "Cancel";
this.button4.UseVisualStyleBackColor = true;
//
// groupBox1
//
this.groupBox1.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(192)))),
((int)(((byte)(192)))), ((int)(((byte)(255)))));
this.groupBox1.Controls.Add(this.richTextBox1);
this.groupBox1.Controls.Add(this.button4);
this.groupBox1.Controls.Add(this.listBox2);
this.groupBox1.Controls.Add(this.button2);
this.groupBox1.Controls.Add(this.button1);
this.groupBox1.Controls.Add(this.textBox3);
this.groupBox1.Name = "groupBox1";
this.groupBox1.TabIndex = 15;
this.groupBox1.TabStop = false;
//
// label2
//
this.label2.AutoSize = true;
this.label2.Name = "label2";
this.label2.TabIndex = 7;
//
// richTextBox1
//
this.richTextBox1.TabIndex = 13;
this.richTextBox1.Text = "";
this.richTextBox1.TextChanged += new
System.EventHandler(this.richTextBox1_TextChanged);
//
// downloadfiles
//
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.BackColor = System.Drawing.Color.White;
this.BackgroundImage = ((System.Drawing.Image)
(resources.GetObject("$this.BackgroundImage")));
this.Controls.Add(this.groupBox1);
this.Controls.Add(this.panel2);
this.Controls.Add(this.panel1);
this.Controls.Add(this.listBox1);
this.Name = "downloadfiles";
this.Text = "downloadfiles";
this.WindowState = System.Windows.Forms.FormWindowState.Maximized;
this.panel1.ResumeLayout(false);
this.panel1.PerformLayout();
this.panel2.ResumeLayout(false);
this.panel2.PerformLayout();
this.groupBox1.ResumeLayout(false);
this.groupBox1.PerformLayout();
this.ResumeLayout(false);
#endregion
namespace P2p_Using_Gnutella
// Designer-generated code for the "Filesharing" form (excerpt). The form lets a user
// browse for a file, upload (share) it, and search the list of shared files.
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
// Designer-generated method that creates and lays out the form's controls.
private void InitializeComponent()
{
((System.ComponentModel.ISupportInitialize)(this.dataGridView1)).BeginInit();
this.panel1.SuspendLayout();
this.SuspendLayout();
//
// dataGridView1
//
this.dataGridView1.AutoSizeColumnsMode =
System.Windows.Forms.DataGridViewAutoSizeColumnsMode.Fill;
this.dataGridView1.BackgroundColor = System.Drawing.Color.White;
this.dataGridView1.ColumnHeadersHeightSizeMode =
System.Windows.Forms.DataGridViewColumnHeadersHeightSizeMode.AutoSize;
this.dataGridView1.Name = "dataGridView1";
this.dataGridView1.TabIndex = 3;
//
// textBox1
//
this.textBox1.Name = "textBox1";
this.textBox1.TabIndex = 4;
//
// button1
//
this.button1.ForeColor = System.Drawing.Color.Blue;
this.button1.Name = "button1";
this.button1.TabIndex = 5;
this.button1.Text = "Browse";
this.button1.UseVisualStyleBackColor = true;
//
// button2
//
this.button2.ForeColor = System.Drawing.Color.Blue;
this.button2.Name = "button2";
this.button2.TabIndex = 6;
this.button2.Text = "Upload";
this.button2.UseVisualStyleBackColor = true;
//
// listBox1
//
this.listBox1.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(224)))),
((int)(((byte)(224)))), ((int)(((byte)(224)))));
this.listBox1.BorderStyle = System.Windows.Forms.BorderStyle.None;
this.listBox1.FormattingEnabled = true;
this.listBox1.Location = new System.Drawing.Point(-1, 203);
this.listBox1.Name = "listBox1";
this.listBox1.TabIndex = 7;
this.listBox1.SelectedIndexChanged += new
System.EventHandler(this.listBox1_SelectedIndexChanged);
//
// textBox2
//
this.textBox2.Name = "textBox2";
this.textBox2.TabIndex = 8;
this.textBox2.TextChanged += new
System.EventHandler(this.textBox2_TextChanged);
//
// button3
//
this.button3.ForeColor = System.Drawing.Color.Blue;
this.button3.Name = "button3";
this.button3.TabIndex = 9;
this.button3.Text = "search";
this.button3.UseVisualStyleBackColor = true;
this.button3.Click += new System.EventHandler(this.button3_Click);
//
// panel1
//
this.panel1.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(192)))),
((int)(((byte)(192)))), ((int)(((byte)(255)))));
this.panel1.Controls.Add(this.label2);
this.panel1.Controls.Add(this.textBox2);
this.panel1.Controls.Add(this.button3);
this.panel1.Name = "panel1";
this.panel1.TabIndex = 10;
//
// label1
//
this.label1.AutoSize = true;
this.label1.ForeColor = System.Drawing.Color.Blue;
this.label1.Name = "label1";
this.label1.TabIndex = 11;
// label2
//
this.label2.AutoSize = true;
this.label2.Name = "label2";
this.label2.TabIndex = 10;
//
// Filesharing
//
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.Controls.Add(this.button2);
this.Controls.Add(this.textBox1);
this.Controls.Add(this.label1);
this.Controls.Add(this.button1);
this.Controls.Add(this.panel1);
this.Controls.Add(this.listBox1);
this.Controls.Add(this.dataGridView1);
this.Name = "Filesharing";
this.Text = "Filesharing";
this.WindowState = System.Windows.Forms.FormWindowState.Maximized;
((System.ComponentModel.ISupportInitialize)(this.dataGridView1)).EndInit();
this.panel1.ResumeLayout(false);
this.panel1.PerformLayout();
this.ResumeLayout(false);
this.PerformLayout();
#endregion
}
namespace P2p_Using_Gnutella
// Designer-generated code for the "login" form (excerpt): a username/password dialog
// with a Submit button.
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
// Designer-generated method that creates and lays out the form's controls.
private void InitializeComponent()
{
this.groupBox1.SuspendLayout();
this.SuspendLayout();
//
// panel1
//
this.panel1.BackgroundImage = ((System.Drawing.Image)
(resources.GetObject("panel1.BackgroundImage")));
this.panel1.Name = "panel1";
this.panel1.Size = new System.Drawing.Size(291, 218);
this.panel1.TabIndex = 0;
//
// groupBox1
//
this.groupBox1.BackColor = System.Drawing.Color.Black;
this.groupBox1.Controls.Add(this.button1);
this.groupBox1.Controls.Add(this.label2);
this.groupBox1.Controls.Add(this.textBox2);
this.groupBox1.Controls.Add(this.textBox1);
this.groupBox1.Controls.Add(this.label1);
this.groupBox1.Name = "groupBox1";
this.groupBox1.TabIndex = 1;
this.groupBox1.TabStop = false;
this.groupBox1.Text = "Login";
//
// button1
//
this.button1.Name = "button1";
this.button1.TabIndex = 4;
this.button1.Text = "Submit";
this.button1.UseVisualStyleBackColor = true;
//
// label2
//
this.label2.AutoSize = true;
this.label2.ForeColor = System.Drawing.Color.White;
this.label2.Name = "label2";
this.label2.TabIndex = 3;
this.label2.Text = "Password";
//
// textBox2
//
this.textBox2.Name = "textBox2";
this.textBox2.PasswordChar = '*';
this.textBox2.TabIndex = 2;
//
// textBox1
//
this.textBox1.Location = new System.Drawing.Point(112, 20);
this.textBox1.Name = "textBox1";
this.textBox1.TabIndex = 1;
//
// label1
//
this.label1.AutoSize = true;
this.label1.ForeColor = System.Drawing.Color.White;
this.label1.Name = "label1";
this.label1.TabIndex = 0;
//
// login
//
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.Controls.Add(this.groupBox1);
this.Controls.Add(this.panel1);
this.Name = "login";
this.StartPosition = System.Windows.Forms.FormStartPosition.CenterScreen;
this.Text = "login";
this.groupBox1.ResumeLayout(false);
this.groupBox1.PerformLayout();
this.ResumeLayout(false);
#endregion
Testing
White-box testing (also known as clear box testing, glass box testing, transparent
box testing, and structural testing) is a method of testing software that tests internal
structures or workings of an application, as opposed to its functionality (i.e. black-box
testing). In white-box testing an internal perspective of the system, as well as programming
skills, are used to design test cases. The tester chooses inputs to exercise paths through the
code and determine the appropriate outputs. This is analogous to testing nodes in a circuit,
e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of
the software testing process, it is usually done at the unit level. It can test paths within a unit,
paths between units during integration, and between subsystems during a system–level test.
Though this method of test design can uncover many errors or problems, it might not detect
unimplemented parts of the specification or missing requirements.
These White-box testing techniques are the building blocks of white-box testing, whose
essence is the careful testing of the application at the source code level to prevent any hidden
errors later on. These different techniques exercise every visible path of the source code to
minimize errors and create an error-free environment. The whole point of white-box testing is
the ability to know which line of the code is being executed and being able to identify what
the correct output should be.
Levels
1. Unit testing. White-box testing is done during unit testing to ensure that the code is working as intended, before any integration happens with previously tested code. White-box testing during unit testing catches defects early and helps address defects that would otherwise surface only after the code is integrated with the rest of the application, thereby preventing errors later on.
2. Integration testing. White-box tests at this level are written to test the interactions of the interfaces with each other. Unit-level testing made sure that each piece of code was tested and worked correctly in an isolated environment; integration testing examines the correctness of the behaviour in an open environment, using white-box tests of the interface interactions that are known to the programmer.
3. Regression testing. White-box testing during regression testing is the use of recycled
white-box test cases at the unit and integration testing levels.
White-box testing's basic procedures involve understanding the source code under test at a deep level in order to be able to test it. The programmer must have a deep
understanding of the application to know what kinds of test cases to create so that every
visible path is exercised for testing. Once the source code is understood then the source code
can be analysed for test cases to be created. These are the three basic steps that white-box
testing takes in order to create test cases:
1. Input, involves different types of requirements, functional specifications, detailed
designing of documents, proper source code, security specifications. This is the
preparation stage of white-box testing to layout all of the basic information.
2. Processing unit, which involves performing risk analysis to guide the whole testing process, preparing a proper test plan, executing test cases and communicating results. This is the phase of building test cases to make sure they thoroughly test the application, and the results are recorded accordingly.
3. Output; prepare final report that encompasses all of the above preparations and
results.
Black Box Testing
Test procedures
Test cases
Test cases are built around specifications and requirements, i.e., what the application
is supposed to do. Test cases are generally derived from external descriptions of the software,
including specifications, requirements and design parameters. Although the tests used are
primarily functional in nature, non-functional tests may also be used. The test designer selects
both valid and invalid inputs and determines the correct output without any knowledge of the
test object's internal structure.
Ideally, each test case is independent of the others. Substitutes such as method stubs, mock objects, fakes, and test harnesses can be used to assist in testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Their implementation can vary from very manual (pencil and paper) to formalized as part of build automation.
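As a small illustration of a developer-written unit test, assuming the NUnit framework and a hypothetical ChunkSplitter helper class (both names are examples only, not part of the project as described):

// Hypothetical NUnit test for a helper that splits a file size into equal chunk sizes.
using NUnit.Framework;

[TestFixture]
public class ChunkSplitterTests
{
    [Test]
    public void SplitsFileSizeIntoEqualChunks()
    {
        int[] chunks = ChunkSplitter.Split(100, 4);   // hypothetical helper under test

        Assert.AreEqual(4, chunks.Length);            // expected number of chunks
        Assert.AreEqual(25, chunks[0]);               // each chunk covers 25 bytes
    }
}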
Testing will not catch every error in the program, since it cannot evaluate every
execution path in any but the most trivial programs. The same is true for unit testing.
Additionally, unit testing by definition only tests the functionality of the units themselves.
Therefore, it will not catch integration errors or broader system-level errors (such as functions
performed across multiple units, or non-functional test areas such as performance).
Unit testing should be done in conjunction with other software testing activities, as
they can only show the presence or absence of particular errors; they cannot prove a complete
absence of errors. In order to guarantee correct behaviour for every execution path and every
possible input, and ensure the absence of errors, other techniques are required, namely the
application of formal methods to proving that a software component has no unexpected
behaviour.
Software testing is a combinatorial problem. For example, every Boolean decision statement
requires at least two tests: one with an outcome of "true" and one with an outcome of "false".
As a result, for every line of code written, programmers often need 3 to 5 lines of test code.
This obviously takes time and its investment may not be worth the effort. There are
also many problems that cannot easily be tested at all – for example those that
are nondeterministic or involve multiple threads. In addition, code for a unit test is likely to
be at least as buggy as the code it is testing. Fred Brooks in The Mythical Man-Month quotes the advice never to take two chronometers to sea: always take one or three, because if two chronometers contradict each other, how do you know which one is correct?
Another challenge related to writing the unit tests is the difficulty of setting up
realistic and useful tests. It is necessary to create relevant initial conditions so the part of the
application being tested behaves like part of the complete system. If these initial conditions
are not set correctly, the test will not be exercising the code in a realistic context, which
diminishes the value and accuracy of unit test results.
To obtain the intended benefits from unit testing, rigorous discipline is needed
throughout the software development process. It is essential to keep careful records not only
of the tests that have been performed, but also of all changes that have been made to the
source code of this or any other unit in the software. Use of a version control system is
essential. If a later version of the unit fails a particular test that it had previously passed, the
version-control software can provide a list of the source code changes (if any) that have been
applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and addressed immediately. If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.
Unit testing embedded system software presents a unique challenge: Since the
software is being developed on a different platform than the one it will eventually run on, you
cannot readily run a test program in the actual deployment environment, as is possible with
desktop programs.
Functional testing
Functional testing is a quality assurance (QA) process and a type of black box
testing that bases its test cases on the specifications of the software component under test.
Functions are tested by feeding them input and examining the output, and internal program
structure is rarely considered (unlike in white-box testing). Functional testing usually describes what the system does.
Functional testing differs from system testing in that functional testing "verifies a program by checking it against ... design document(s) or specification(s)", while system testing "validates a program by checking it against the published user or system requirements" (Kaner, Falk, Nguyen 1999, p. 52).
Functional testing typically involves five steps: the identification of the functions that the software is expected to perform; the creation of input data based on the function's specifications; the determination of output based on the function's specifications; the execution of the test case; and the comparison of actual and expected outputs.
Testing types
Load testing
Load testing is the simplest form of performance testing. A load test is usually
conducted to understand the behaviour of the system under a specific expected load. This
load can be the expected concurrent number of users on the application performing a specific
number of transactions within the set duration. This test will give out the response times of all
the important business critical transactions. If the database, application server, etc. are also
monitored, then this simple test can itself point towards bottlenecks in the application
software.
Stress testing
Stress testing is normally used to understand the upper limits of capacity within the
system. This kind of test is done to determine the system's robustness in terms of extreme
load and helps application administrators to determine if the system will perform sufficiently
if the current load goes well above the expected maximum.
Soak testing
Soak testing, also known as endurance testing, is usually done to determine if the
system can sustain the continuous expected load. During soak tests, memory utilization is
monitored to detect potential leaks. Also important, but often overlooked is performance
degradation. That is, to ensure that the throughput and/or response times after some long
period of sustained activity are as good as or better than at the beginning of the test. It
essentially involves applying a significant load to a system for an extended, significant period
of time. The goal is to discover how the system behaves under sustained use.
Spike testing
Spike testing is done by suddenly increasing the number of users, or the load generated by
users, by a very large amount and observing the behaviour of the system. The goal is to
determine whether performance will suffer, the system will fail, or it will be able to handle
dramatic changes in load.
Configuration testing
Rather than testing for performance from the perspective of load, tests are created to
determine the effects of configuration changes to the system's components on the system's
performance and behaviour. A common example would be experimenting with different
methods of load-balancing.
Isolation testing
Isolation testing is not unique to performance testing but involves repeating a test
execution that resulted in a system problem. Often used to isolate and confirm the fault
domain.
Integration testing
Integration testing (sometimes called integration and testing, abbreviated I&T) is
the phase in software testing in which individual software modules are combined and tested
as a group. It occurs after unit testing and before validation testing. Integration testing takes
as its input modules that have been unit tested, groups them in larger aggregates, applies tests
defined in an integration test plan to those aggregates, and delivers as its output the integrated
system ready for system testing.
Purpose
Test cases are constructed to test whether all the components within assemblages
interact correctly, for example across procedure calls or process activations, and this is done
after testing individual modules, i.e. unit testing. The overall idea is a "building block"
approach, in which verified assemblages are added to a verified base which is then used to
support the integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and bottom-up.
Other Integration Patterns are: Collaboration Integration, Backbone Integration, Layer
Integration, Client/Server Integration, Distributed Services Integration and High-frequency
Integration.
Big Bang
In this approach, all or most of the developed modules are coupled together to form a
complete software system or major part of the system and then used for integration testing.
The Big Bang method is very effective for saving time in the integration testing process.
However, if the test cases and their results are not recorded properly, the entire integration
process will be more complicated and may prevent the testing team from achieving the goal
of integration testing.
A type of Big Bang Integration testing is called Usage Model testing. Usage Model
Testing can be used in both software and hardware integration testing. The basis behind this
type of integration testing is to run user-like workloads in integrated user-like environments.
In doing the testing in this manner, the environment is proofed, while the individual
components are proofed indirectly through their use.
For integration testing, Usage Model testing can be more efficient and provides better
test coverage than traditional focused functional integration testing. To be more efficient and
accurate, care must be used in defining the user-like workloads for creating realistic scenarios
in exercising the environment. This gives confidence that the integrated environment will
work as expected for the target customers.
Bottom-Up and Top-Down
All the bottom or low-level modules, procedures or functions are integrated and then
tested. After the integration testing of lower level integrated modules, the next level of
modules will be formed and can be used for integration testing. This approach is helpful only
when all or most of the modules of the same development level are ready. This method also
helps to determine the levels of software developed and makes it easier to report testing
progress in the form of a percentage.
Top Down Testing is an approach to integrated testing where the top integrated
modules are tested and the branch of the module is tested step by step until the end of the
related module.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With
Top-Down, it is easier to find a missing branch link.
Validation
In that case, you can use the CustomValidator control. The CustomValidator control in ASP.NET allows you to create your own custom logic to validate user data, and it can be used on both the client side and the server side; JavaScript (or any client script language) is used for client-side validation. Some of the popular ASP.NET validation controls are RequiredFieldValidator, CompareValidator, RangeValidator, RegularExpressionValidator, ValidationSummary, and CustomValidator. Server-side validation helps prevent users from bypassing validation by disabling or changing the client script. Security note: by default, ASP.NET Web pages automatically validate that malicious users are not attempting to send script or HTML elements to your application.
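As a sketch of the server-side half of a CustomValidator (the control name and the rule, a minimum password length of eight characters, are only examples):

// ServerValidate handler for a CustomValidator attached to a password TextBox.
using System;
using System.Web.UI.WebControls;

public partial class RegisterPage : System.Web.UI.Page
{
    protected void cvPassword_ServerValidate(object source, ServerValidateEventArgs args)
    {
        // The custom rule: the entry must be at least eight characters long.
        args.IsValid = args.Value != null && args.Value.Length >= 8;
    }
}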
Introduction
This section explains validation in the application and outlines, step by step, how validation is used when creating it.
Validation
Validation is a test that determines whether what the user entered in a field is valid. After an entered field is validated, the system checks that it is in the correct format and of the specified length, and input values can be compared across fields or against fixed values. Validation can be applied to various types of information, as the sample application will show.
Types of Validation
Here we define the various types of validation that we can use in our application. These are as follows:
1. Required entry: ensures that a required field is filled in; the user cannot skip the entry.
2. Compare value: compares the user's entry with a constant value, with the value of another field, or with a specific data type, using comparison operators such as equal, greater than, or less than.
3. Range checking: checks that the input value lies within the required minimum and maximum range. Range checking can be used with pairs of numbers, dates and alphabetic characters.
4. Pattern matching: checks that an input value matches a specified sequence of characters.
5. Remote: checks whether the value exists on the server side.
System Maintenance
1. The implementation process contains software preparation and transition activities, such as the conception and creation of the implementation plan, the preparation for handling problems identified during development, and the follow-up on configuration management.
2. The problem and modification analysis process, which is executed once the application has become the responsibility of the maintenance group.
4. The acceptance of the modification, by confirming the modified work with the individual who submitted the request, in order to make sure the modification provides a solution.
5. Finally, the last maintenance process, also an event which does not occur on a daily basis, is the retirement of a piece of software.
Snapshot
Conclusion
We propose SPOON, a Social network-based P2P cOntent file sharing system in disconnected mObile ad hoc Networks. SPOON considers both node interest and contact frequency for efficient file sharing. We introduced four main components of SPOON: interest extraction identifies nodes' interests; community construction groups common-interest nodes with frequent contacts into communities; the node role assignment component exploits nodes with tight connections to community members for intra-community file searching, and highly mobile nodes that visit external communities frequently for inter-community file searching; and the interest-oriented file searching scheme selects forwarding nodes for queries based on interest similarity. SPOON also incorporates additional strategies for file prefetching, querying completion, loop prevention, and node churn handling to further enhance file searching efficiency. The system deployment on the real-world GENI Orbit platform and the trace-driven experiments demonstrate the efficiency of SPOON.
Future Enhancement
In the future, we will explore how to determine appropriate thresholds in SPOON, how they affect file sharing efficiency, and how to adapt SPOON to larger and more disconnected networks.