
CONTENTS

S.NO TITLE

1. INTRODUCTION
1.1 Abstract
1.2 Project Overview and Module Description
2. SYSTEM SPECIFICATION
2.1 Software Specification
2.2 Hardware Specification
2.3 Software Description
3. SYSTEM ANALYSIS
3.1 Existing System
3.2 Proposed System
4. SYSTEM DESIGN
4.1 System Design
4.2 Data Flow Diagram
5. SYSTEM IMPLEMENTATION & TESTING
5.1 System Implementation
5.2 System Testing
6. APPENDIX
6.1 Source Code
6.2 Screenshots
7. FUTURE ENHANCEMENTS
8. CONCLUSION
9. BIBLIOGRAPHY

THIRD PARTY AUDITING SCHEME FOR CLOUD
STORAGE

1.1 ABSTRACT

We propose a third party auditing scheme for secure data storage in clouds that
supports anonymous authentication. In the proposed scheme, the cloud verifies the
authenticity of the user without knowing the user's identity before storing data. The scheme
also provides access control, so that only valid users are able to decrypt the stored
information. It prevents replay attacks and supports creation, modification, and reading of
data stored in the cloud, and it also addresses user revocation. Moreover, our
authentication and access control scheme is decentralized and robust, unlike other access
control schemes designed for clouds, which are centralized. The communication,
computation, and storage overheads are comparable to those of centralized approaches.
1.2 PROJECT OVERVIEW AND MODULE DESCRIPTION:
As cloud computing becomes prevalent, more and more sensitive information is
being centralized into the cloud, such as emails, personal health records, and government
documents. By storing their data in the cloud, data owners are relieved of the burden of
data storage and maintenance and can enjoy on-demand, high-quality storage service.
However, the fact that data owners and the cloud server are not in the same trusted
domain may put the outsourced data at risk, as the cloud server can no longer be fully
trusted. It follows that sensitive data should usually be encrypted prior to outsourcing,
both for data privacy and to combat unsolicited access. However, data encryption makes
effective data utilization very challenging, given that there may be a large number of
outsourced data files. Moreover, in cloud computing, data owners may share their
outsourced data with a large number of users, and individual users may want to retrieve
only the specific files they are interested in during a given session. One of the most
popular approaches is to retrieve files selectively through keyword-based search, instead
of downloading all the encrypted files, which is completely impractical in cloud computing
scenarios. Keyword-based search allows users to selectively retrieve files of interest and
has been widely applied in plaintext search scenarios such as Google search.
Unfortunately, data encryption restricts the user's ability to perform keyword search,
making traditional plaintext search methods unsuitable for cloud computing. Although
encrypting the keywords can protect keyword privacy, it likewise renders traditional
plaintext search techniques useless in this scenario.
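The keyword-search-over-encrypted-data idea can be sketched with a keyed-hash index: the data owner tags each file with keyed hashes of its keywords, so the server can match search tokens without ever seeing a plaintext keyword. The following is a minimal illustration in Python (the project itself is built on ASP.Net); all names are hypothetical, and a real searchable-encryption scheme is considerably more involved.

```python
import hashlib

def keyword_token(keyword, secret):
    # Keyed hash: the server can match tokens without learning the keyword.
    return hashlib.sha256(secret + keyword.lower().encode()).hexdigest()

def build_index(file_keywords, secret):
    # Map each hashed keyword to the set of file names containing it.
    index = {}
    for name, keywords in file_keywords.items():
        for kw in keywords:
            index.setdefault(keyword_token(kw, secret), set()).add(name)
    return index

def search(index, keyword, secret):
    # The searcher recomputes the token; the stored index reveals no keywords.
    return index.get(keyword_token(keyword, secret), set())

secret = b"data-owner-secret"
index = build_index({"report.docx": ["cloud", "audit"],
                     "notes.txt": ["cloud"]}, secret)
```

Only a holder of the owner's secret can form valid search tokens, which is the property that makes selective retrieval compatible with encryption.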

In this paper, the mainstay is to propose a new decentralized access control
scheme for secure data storage in clouds that supports anonymous authentication. The
proposed scheme is resilient to replay attacks: a writer whose attributes and keys have
been revoked cannot write back stale information. The cloud verifies the authenticity of
the user without knowing the user's identity before storing data. The scheme also
provides access control, so that only valid users are able to decrypt the stored
information, and it supports creation, modification, and reading of data stored in the
cloud.
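One common way to resist replay attacks is to require a fresh nonce with every write and reject any nonce seen before, so a revoked writer cannot resubmit a captured (stale) request. The sketch below, in Python with hypothetical names, illustrates only this general idea, not the paper's actual protocol.

```python
import secrets

class AuditServer:
    """Toy server that rejects any request whose nonce it has seen before."""
    def __init__(self):
        self.seen_nonces = set()

    def accept_write(self, nonce, payload):
        if nonce in self.seen_nonces:
            return False          # replayed request: reject stale data
        self.seen_nonces.add(nonce)
        return True               # fresh nonce: accept the write

server = AuditServer()
nonce = secrets.token_hex(16)     # client attaches a fresh random nonce
first = server.accept_write(nonce, "encrypted-record")
replay = server.accept_write(nonce, "stale-record")
```

In a real scheme the nonce (or a timestamp) would be bound into the request's signature so it cannot be swapped out by an attacker.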
MODULE DESCRIPTION:

LOGIN AND REGISTRATION

Users have to register as valid users (creator or reader).

Only valid users can access the cloud storage.
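A registration/login flow of this kind is typically backed by salted password hashing rather than storing passwords directly. Below is a minimal Python sketch with hypothetical names; the project itself would implement this in ASP.Net against SQL Server.

```python
import hashlib
import hmac
import secrets

users = {}  # username -> (salt, password digest); stands in for a DB table

def register(username, password):
    # Store a salted PBKDF2 digest, never the plaintext password.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, digest)

def login(username, password):
    if username not in users:
        return False
    salt, digest = users[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The constant-time comparison avoids leaking how many digest bytes matched.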

ENCRYPTION
The information about the creator is stored in an image using steganography.
The goal of steganography is to hide messages in such a way that no one apart from the
intended recipient even knows that a message has been sent.
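A common steganographic technique is least-significant-bit (LSB) embedding, where each message bit replaces the lowest bit of a cover byte, changing the cover imperceptibly. The toy Python sketch below operates on raw bytes for clarity; the project presumably embeds into real image pixel data.

```python
def hide_message(cover, message):
    # Spread the message bits, MSB first, across the LSBs of the cover bytes.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(stego)

def reveal_message(stego, length):
    # Collect the LSBs back and reassemble them into bytes.
    bits = [byte & 1 for byte in stego[:length * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
                 for n in range(0, len(bits), 8))

cover = bytes(range(64))
stego = hide_message(cover, b"hi")
```

Each cover byte changes by at most 1, which is why LSB embedding is hard to notice in image data.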

FILE UPLOADING
The encrypted information about the creator can be uploaded.
The creator can upload documents of any type, such as jpg, audio, video, zip, docx, etc.
The creator can view, edit and download the uploaded documents if required.

DECRYPTION
If the trustee wants to know the creator's personal information, the trustee can decrypt
it after obtaining the image file and password from the creator.
The trustee gets the decryption key from the creator; after getting the key, the trustee
can decrypt the creator's information.

DOWNLOADING

The stored file can be downloaded by the valid user.


2.1 SOFTWARE SPECIFICATION:

Operating System : Windows XP

Coding Language : ASP.Net

Database : SQL Server 2005

2.2 HARDWARE SPECIFICATION:

System : Pentium IV, 2.4 GHz

Hard Disk : 40 GB

Floppy Drive : 1.44 MB

Monitor : 15" VGA Color

Mouse : Logitech

RAM : 512 MB
2.3 SOFTWARE DESCRIPTIONS

Features of .Net
Microsoft .NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily
and securely interoperate. There's no language barrier with .NET: numerous languages
are available to the developer, including Managed C++, C#, Visual Basic and JScript.
The .NET framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data types and
communications protocols so that components created in different languages can easily
interoperate.

“.NET” is also the collective name given to various software components built upon the

.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment
within which programs run. The most important features are

 Conversion from a low-level assembler-style language, called Intermediate
Language (IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth describing.

Managed Code
Managed code is code that targets .NET and contains certain extra information
- "metadata" - to describe itself. While both managed and unmanaged code can
run in the runtime, only managed code contains the information that allows the
CLR to guarantee, for instance, safe execution and interoperability.

Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use Managed Data by
default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do
not. Targeting CLR can, depending on the language you’re using, impose certain constraints
on the features available. As with managed and unmanaged code, one can have both
managed and unmanaged data in .NET applications - data that doesn’t get garbage collected
but instead is looked after by unmanaged code.

Common Type System


The CLR uses something called the Common Type System (CTS) to strictly enforce
type- safety. This ensures that all classes are compatible with each other, by describing
types in a common way. The CTS defines how types work within the runtime, which enables
types in one language to interoperate with types in another language, including cross-
language exception handling. As well as ensuring that types are only used in appropriate
ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been
allocated to it.

Common Language Specification


The CLR provides built-in support for language interoperability. To ensure that you
can develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common Language
Specification (CLS) has been defined. Components that follow these rules and expose only
CLS features are considered CLS-compliant.

THE CLASS LIBRARY


.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The
root of the namespace is called System; this contains basic types like Byte, Double,
Boolean, and String, as well as Object. All objects derive from System.Object. As well as
objects, there are value types. Value types can be allocated on the stack, which can provide
useful flexibility. There are also efficient means of converting value types to object types if
and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there
are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic also now
supports structured exception handling, custom attributes and also supports multi-
threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of
migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development". Unlike other languages, its specification is just the
grammar of the language. It has no standard library of its own; instead, it has been
designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language


developers into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a variety of other
programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the
Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev
Kit.

Other languages for which .NET compilers are available include

 FORTRAN
 COBOL
 Eiffel
Fig 1. The .NET Framework (layers, top to bottom: ASP.NET and Windows Forms;
XML Web Services; Base Class Libraries; Common Language Runtime; Operating System)

C#.NET is also compliant with CLS (Common Language Specification) and supports
structured exception handling. The CLS is a set of rules and constructs that are supported by
the CLR (Common Language Runtime). CLR is the runtime environment provided by
the .NET Framework; it manages the execution of the code and also makes the
development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components that are
created in C#.NET can be used in any other CLS-compliant language. In addition, we
can use objects, classes, and components created in other CLS-compliant languages in
C#.NET. The use of the CLS ensures complete interoperability among applications,
regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:


Constructors are used to initialize objects, whereas destructors are used to
destroy them; in other words, destructors release the resources allocated to the
object. In C#.NET this role is played by the Finalize method (the destructor). The
Finalize method completes the tasks that must be performed when an object is
destroyed, and is called automatically when the object is destroyed. In addition, it
can be invoked only from the class it belongs to or from derived classes.
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework
monitors allocated resources, such as objects and variables. In addition, the .NET
Framework automatically releases memory for reuse by destroying objects that are no
longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently
in use by applications. When the garbage collector comes across an object that is
marked for garbage collection, it releases the memory occupied by the object.
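The same behavior can be observed in any garbage-collected runtime. The Python sketch below uses a weak reference to show an object being reclaimed once the last strong reference is dropped; note that CPython frees most objects promptly via reference counting, whereas the .NET CLR uses a tracing collector, so reclamation timing differs.

```python
import gc
import weakref

class Resource:
    """Stand-in for an object holding memory the runtime must reclaim."""
    pass

obj = Resource()
ref = weakref.ref(obj)   # a weak reference does not keep obj alive
alive_before = ref() is obj

del obj                  # drop the last strong reference
gc.collect()             # ask the collector to reclaim unreachable objects
alive_after = ref() is not None
```

Once no strong references remain, the weak reference returns None, confirming the object was destroyed by the runtime rather than by explicit code.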

OVERLOADING
Overloading is another feature in C#. Overloading enables us to define
multiple procedures with the same name, where each procedure has a different set of
arguments. Besides using overloading for procedures, we can use it for constructors
and properties in a class.
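C# resolves overloads at compile time by method signature. Python has no built-in overloading, but `functools.singledispatch` gives a comparable effect by dispatching on the type of the first argument; a small sketch:

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # Fallback implementation for any type not registered below.
    return f"object: {value!r}"

@describe.register
def _(value: int):
    return f"integer {value}"

@describe.register
def _(value: str):
    return f"string of length {len(value)}"
```

The dispatch happens at call time based on the argument's runtime type, unlike C#'s compile-time overload resolution, but the effect for the caller is similar: one name, several type-specific implementations.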

MULTITHREADING:

C#.NET also supports multithreading. An application that supports


multithreading can handle multiple tasks simultaneously; we can use multithreading to
decrease the time taken by an application to respond to user interaction.

STRUCTURED EXCEPTION HANDLING


C#.NET supports structured exception handling, which enables us to detect and
handle errors at runtime. In C#.NET, we use try…catch…finally statements to
create exception handlers. Using try…catch…finally statements, we can create
robust and effective exception handlers that improve the reliability of our
application.
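The try/catch/finally pattern looks much the same in any language with structured exception handling. A Python equivalent (try/except/finally) with a hypothetical helper:

```python
def read_first_line(path):
    f = None
    try:
        f = open(path)           # may raise FileNotFoundError
        return f.readline()
    except FileNotFoundError:
        return ""                # catch block: recover from the expected error
    finally:
        if f is not None:
            f.close()            # finally block: cleanup runs on every path
```

The finally block executes whether the try body returns normally or raises, which is exactly the guarantee the C# construct provides.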
THE .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application
development in the highly distributed environment of the Internet.

OBJECTIVES OF. NET FRAMEWORK

1. To provide a consistent object-oriented programming environment, whether object
code is stored and executed locally, executed locally but Internet-distributed, or
executed remotely.

2. To provide a code-execution environment that minimizes software deployment
conflicts and guarantees safe execution of code.

3. To eliminate the performance problems of scripted or interpreted environments.

There are different types of applications, such as Windows-based applications and


Web-based applications.

Features of SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL
Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL
Server 2000 Meta Data Services. References to the component now use the term Meta
Data Services. The term repository is used only in reference to the repository engine
within Meta Data Services.

A SQL-SERVER database consists of several types of objects, including:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A database is a collection of data about a specific topic.


VIEWS OF TABLE

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. Here we
can specify what kind of data each field will hold.

Datasheet View

To add, edit or analyse the data itself, we work in the table's datasheet view.

QUERY:

A query is a question asked of the data. Access gathers the data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset
(if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we
get the latest information in the dynaset. Access either displays the dynaset or snapshot
for us to view, or performs an action on it, such as deleting or updating.
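Queries are normally issued through parameterized statements, so user input never becomes part of the SQL text. The sketch below uses Python's built-in sqlite3 as a stand-in for the project's SQL Server 2005 back end; the table and column names are hypothetical.

```python
import sqlite3

# In-memory database standing in for the project's SQL Server back end.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (name TEXT, owner TEXT)")
con.executemany("INSERT INTO files VALUES (?, ?)",
                [("report.docx", "creator"), ("archive.zip", "guest")])

# Parameterized query: the ? placeholder keeps user input out of the SQL text,
# preventing SQL injection (the ASP.Net code in the appendix uses @-parameters
# with SqlCommand for the same reason).
rows = con.execute("SELECT name FROM files WHERE owner = ?",
                   ("creator",)).fetchall()
```

Re-running the SELECT after an UPDATE reflects the change, mirroring the "dynaset" behavior described above.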
3.1 EXISTING SYSTEM:

➢ Access control schemes in the cloud are centralized in nature.

➢ A single KDC is not only a single point of failure but is also difficult to maintain
because of the large number of users supported in a cloud environment.

➢ Only one user can create and store a file; other users can only read it.

➢ Write access is not permitted to users other than the creator.

3.2 PROPOSED SYSTEM:

➢ A third party auditing scheme for secure data storage in clouds that supports
anonymous authentication.

➢ The architecture is decentralized, meaning there can be several KDCs for key
management.

➢ Distributed access control of data stored in the cloud, so that only authorized
users with valid attributes can access it.

➢ The trustee has to get permission from the creator to reveal the creator's details.

➢ The trustee has the privilege to delete the creator's records.

➢ A writer has to get permission from the creator to alter the stored data.

➢ The trustee does not know the creator's details; without the creator's consent,
the trustee cannot reveal the creator's information.
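The attribute-based access-control idea above can be reduced to a set comparison: a request succeeds only if the attributes issued to the user satisfy the policy attached to the data. The Python sketch below is deliberately simplified; real schemes enforce this cryptographically (e.g. via attribute-based encryption issued by multiple KDCs), not with a plain runtime check.

```python
def can_access(user_attributes, policy):
    # Access is granted only when the user holds every attribute
    # that the data's policy demands.
    return policy.issubset(user_attributes)

# Hypothetical attribute sets; in the scheme these would be certified
# by several independent key distribution centers (KDCs).
creator_attrs = {"registered", "creator"}
reader_attrs = {"registered"}

write_policy = {"registered", "creator"}
read_policy = {"registered"}
```

Revoking a user then amounts to withdrawing an attribute, after which the policy check (and, cryptographically, decryption) fails.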


4.1 SYSTEM DESIGN

INPUT DESIGN

Input design is the link between the information system and the user. It comprises
developing the specifications and procedures for data preparation, that is, the steps
necessary to put transaction data into a usable form for processing. This can be achieved
by having the computer read data from a written or printed document, or by having
people key the data directly into the system. The design of input focuses on controlling
the amount of input required, controlling errors, avoiding delay, avoiding extra steps,
and keeping the process simple. The input is designed to provide security and ease of use
while retaining privacy. Input design considered the following questions:

 What data should be given as input?


 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations, and the steps to follow when errors occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input
into a computer-based system. This design is important to avoid errors in the data-input
process and to show management the correct direction for getting accurate information
from the computerized system.

2. It is achieved by creating user-friendly screens for data entry that can handle large
volumes of data. The goal of designing input is to make data entry easier and error-free.
The data entry screen is designed so that all data manipulations can be performed; it also
provides record-viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the help
of screens, and appropriate messages are provided as needed so that the user is never
left confused. The objective of input design, then, is to create an input layout that is
easy to follow.
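Input validation of the kind described above can be centralized in one routine that returns all error messages for a submitted form, so the entry screen can display them together. A Python sketch with hypothetical field rules (the project would do this in ASP.Net validators):

```python
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_registration(form):
    """Return a list of error messages; an empty list means the input is valid."""
    errors = []
    if not USERNAME_RE.match(form.get("username", "")):
        errors.append("username must be 3-20 letters, digits or underscores")
    if len(form.get("password", "")) < 8:
        errors.append("password must be at least 8 characters")
    return errors
```

Collecting every error in one pass avoids the frustrating one-error-at-a-time resubmission cycle.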

OUTPUT DESIGN

A quality output is one that meets the requirements of the end user and presents
the information clearly. In any system, the results of processing are communicated to the
users and to other systems through outputs. In output design it is determined how the
information is to be displayed for immediate need, as well as the hard-copy output. It is
the most important and direct source of information for the user. Efficient and intelligent
output design improves the system's relationship with the user and supports decision-making.

1. Designing computer output should proceed in an organized, well-thought-out
manner; the right output must be developed while ensuring that each output element is
designed so that people find the system easy and effective to use. When analysts design
computer output, they should identify the specific output needed to meet the
requirements.

2. Select methods for presenting information.

3. Create the document, report, or other format that contains information
produced by the system.

The output of an information system should accomplish one or more of the
following objectives:

 Convey information about past activities, current status, or projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.

DATABASE DESIGN :

Registration :
4.2 DATA FLOW DIAGRAM :

The DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, the various
processing carried out on that data, and the output data generated by the system.

ARCHITECTURAL DIAGRAM
5.1 SYSTEM IMPLEMENTATION:

As cloud computing becomes prevalent, more and more sensitive information is
being centralized into the cloud, such as emails, personal health records, and government
documents. By storing their data in the cloud, data owners are relieved of the burden of
data storage and maintenance and can enjoy on-demand, high-quality storage service.
However, the fact that data owners and the cloud server are not in the same trusted
domain may put the outsourced data at risk, as the cloud server can no longer be fully
trusted. It follows that sensitive data should usually be encrypted prior to outsourcing,
both for data privacy and to combat unsolicited access. However, data encryption makes
effective data utilization very challenging, given that there may be a large number of
outsourced data files. Moreover, in cloud computing, data owners may share their
outsourced data with a large number of users, and individual users might want to
retrieve only the specific files they are interested in during a given session. One of the
most popular approaches is to retrieve files selectively through keyword-based search,
instead of downloading all the encrypted files, which is completely impractical in cloud
computing scenarios. Keyword-based search allows users to selectively retrieve files of
interest and has been widely applied in plaintext search scenarios such as Google search.
Unfortunately, data encryption restricts the user's ability to perform keyword search,
making traditional plaintext search methods unsuitable for cloud computing. Besides this,
data encryption also demands the protection of keyword privacy, since keywords usually
contain important information related to the data files. Although encrypting the keywords
can protect keyword privacy, it likewise renders traditional plaintext search techniques
useless in this scenario.

5.2 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to check
the functionality of components, subassemblies, assemblies, and/or the finished product.
It is the process of exercising software with the intent of ensuring that the software
system meets its requirements and user expectations and does not fail in an unacceptable
manner. There are various types of tests; each test type addresses a specific testing
requirement.

TYPES OF TESTING:

Unit testing

Unit testing involves the design of test cases that validate that the internal program
logic is functioning properly, and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. It is the testing of individual software
units of the application, and is done after the completion of an individual unit and before
integration. This is structural testing that relies on knowledge of the unit's construction
and is invasive. Unit tests perform basic tests at the component level and test a specific
business process, application, and/or system configuration. Unit tests ensure that each
unique path of a business process performs accurately to the documented specifications
and contains clearly defined inputs and expected results.
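A unit test in this style pairs one small unit of logic with assertions on its documented behavior. The Python sketch below uses the standard unittest module; the validation rule under test is hypothetical.

```python
import unittest

def is_valid_username(name):
    # The unit under test: one small, isolated piece of program logic.
    return name.isalnum() and 3 <= len(name) <= 20

class UsernameTests(unittest.TestCase):
    def test_accepts_valid_name(self):
        self.assertTrue(is_valid_username("pradeep"))

    def test_rejects_too_short(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_symbols(self):
        self.assertFalse(is_valid_username("user!"))

# Run the suite explicitly (a test runner would normally do this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UsernameTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method exercises one path with a clearly defined input and expected result, matching the description above.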

Integration testing

Integration tests are designed to test integrated software components to determine
whether they actually run as one program. Testing is event driven and is more concerned
with the basic outcome of screens or fields. Integration tests demonstrate that although
the components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically
aimed at exposing the problems that arise from the combination of components.

Functional test

Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and user
manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Function : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems.

Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage of identified business
process flows, data fields, predefined processes, and successive processes must be
considered for testing. Before functional testing is complete, additional tests are
identified and the effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example
of system testing is the configuration oriented system integration test. System testing is
based on process descriptions and flows, emphasizing pre-driven process links and
integration points.

White Box Testing


White box testing is testing in which the software tester has knowledge of the
inner workings, structure and language of the software, or at least its purpose. It is
used to test areas that cannot be reached from a black box level.

Black Box Testing

Black box testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. In this kind of testing the software under test is
treated as a black box: you cannot "see" into it. The test provides inputs and responds
to outputs without considering how the software works.

Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in detail.

Test objectives

 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested

 Verify that the entries are of the correct format


 No duplicate entries should be allowed
 All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform, to produce failures caused by
interface defects.

The task of the integration test is to check that components or software applications
(e.g., components in a software system or, one step up, software applications at the
company level) interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
6.1 SOURCE CODE
HOME
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;

public partial class ShowDisk : System.Web.UI.Page


{
SqlConnection con = new
SqlConnection(ConfigurationManager.ConnectionStrings["SQLCONNECTIONSTRING"].C
onnectionString);
SqlCommand cmd;
private int nDirID = -1;
private int nParentID = -1;
protected void Page_Load(object sender, EventArgs e)
{
///obtain the value of parameter DirID
if(Request.Params["DirID"] != null)
{
if(Int32.TryParse(Request.Params["DirID"].ToString(),out nDirID) ==
false)
{
return;
}
}
if(Request.Params["ParentID"] != null)
{
if(Int32.TryParse(Request.Params["ParentID"].ToString(),out
nParentID) == false)
{
return;
}
}
if(!Page.IsPostBack)
{ ///display directory list info
BindDirectoryData();
if(nDirID > -1) ///DirID > -1
{
BindDirectoryData(nDirID);
SystemTools.SetListBoxItem(DirList,nDirID.ToString());
return;
}
if(nDirID <= -1 && nParentID > -1) ///no DirID, fall back to ParentID
{
BindDirectoryData(nParentID);
SystemTools.SetListBoxItem(DirList,nParentID.ToString());
return;
}
if(DirList.Items.Count > 0)
{
BindDirectoryData(Int32.Parse(DirList.SelectedValue));
}
}
}
private void BindDirectoryData()
{ ///display directory list info
Disk disk = new Disk();
disk.ShowDirectory(DirList,-1);
if(DirList.Items.Count > 0)
{
DirList.SelectedIndex = 0;
}

disk.ShowDirectory(MoveDirList,-1);
}
private void BindDirectoryData(int nParentID)
{
///display directory list info
IDisk disk = new Disk();
SqlDataReader dr = disk.GetDirectoryFile(nParentID);
///bind the data of controls
DiskView.DataSource = dr;
DiskView.DataBind();
dr.Close();

        ReturnBtn.Visible = nParentID > 0;


}
protected string FormatImageUrl(bool bFlag,string sType)
{
if(bFlag == true)
{ ///file type
return ("~/Images/folder.gif");
}
else
{
switch(sType)
{ ///bmp file
case "image/bmp": return ("~/Images/bmp.bmp");
///exe file
case "application/octet-stream": return ("~/Images/exe.bmp");
default: return("~/Images/other.gif");
}
}
}
protected string FormatHerf(int nDirID,int nParentID,bool bFlag)
{
if(bFlag == true)
{
return ("Default.aspx?DirID=" + nDirID.ToString() + "&ParentID=" +
nParentID.ToString());
}
else
{
return ("ViewDisk.aspx?DirID=" + nDirID.ToString() + "&ParentID="
+ nParentID.ToString());
}
}
protected void DirList_SelectedIndexChanged(object sender,EventArgs e)
{ ///bind the data of controls
///
string s = Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/"+s+"/"))
{

BindDirectoryData(Int32.Parse(DirList.SelectedValue));
}
else
{
con.Open();
            string ins = "insert into thirdparty Values(@CreditUser,@NonCreditUser,@_DateTime)";
            cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser", DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();
            // Note: MessageBox appears on the server's desktop only; it is never shown in the client's browser.
            System.Windows.Forms.MessageBox.Show("Security Not Allowed", "Warning",
                System.Windows.Forms.MessageBoxButtons.YesNoCancel,
                System.Windows.Forms.MessageBoxIcon.Error);
}
}
protected void MoveBtn_Click(object sender,EventArgs e)
{
try
{ ///define a object
IDisk disk = new Disk();
foreach(GridViewRow row in DiskView.Rows)
{
CheckBox dirCheck =
(CheckBox)row.FindControl("DirCheck");
if(dirCheck != null)
{
if(dirCheck.Checked == true)
{
///fulfill database operation

                        disk.MoveDirectory(
                            Int32.Parse(DiskView.DataKeys[row.RowIndex].Value.ToString()),
                            Int32.Parse(MoveDirList.SelectedValue));
}
}
}
///rebind the data of the controls
            string s = Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/" + s + "/"))
{
BindDirectoryData(Int32.Parse(DirList.SelectedValue));
                Response.Write("<script>alert('Modifying successfully. Safekeep your data!');</script>");
}
else
{
con.Open();
                string ins = "insert into thirdparty Values(@CreditUser,@NonCreditUser,@_DateTime)";
                cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser", DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();
System.Windows.Forms.MessageBox.Show("Security Not Allowed", "Warning",
System.Windows.Forms.MessageBoxButtons.YesNoCancel,
System.Windows.Forms.MessageBoxIcon.Error);
}
}
catch(Exception ex)
{ ///jump to the exception handling page
Response.Redirect("ErrorPage.aspx?ErrorMsg=" +
ex.Message.Replace("<br>","").Replace("\n","")
+ "&ErrorUrl=" +
Request.Url.ToString().Replace("<br>","").Replace("\n",""));
}
}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return to the parent directory
Response.Redirect("~/Default.aspx?DirID=" + nParentID.ToString());
}
    protected void DiskView_RowCommand(object sender, GridViewCommandEventArgs e)
{
if(e.CommandName == "delete")
{
try
{ ///delete data
IDisk disk = new Disk();

disk.DeleteFile(Int32.Parse(e.CommandArgument.ToString()));

///rebind the controls' data


BindDirectoryData(Int32.Parse(DirList.SelectedValue));
                Response.Write("<script>alert('Deleting successfully. Safekeep your data!');</script>");
}
catch(Exception ex)
{ ///jump to the page dealing with exception handling
Response.Redirect("ErrorPage.aspx?ErrorMsg=" +
ex.Message.Replace("<br>","").Replace("\n","")
+ "&ErrorUrl=" +
Request.Url.ToString().Replace("<br>","").Replace("\n",""));
}
}
}
protected void DiskView_RowDeleting(object sender,GridViewDeleteEventArgs e)
{

}
protected void DiskView_RowDataBound(object sender,GridViewRowEventArgs e)
{
ImageButton deleteBtn = (ImageButton)e.Row.FindControl("DeleteBtn");
if(deleteBtn != null)
{
            deleteBtn.Attributes.Add("onclick", "return confirm('Are you sure you want to delete the selected items?');");
}
}
protected void Button1_Click(object sender, EventArgs e)
{
Response.Redirect("CloudLlogin.aspx");
}
protected void DiskView_SelectedIndexChanged(object sender, EventArgs e)
{

}
}
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;

public partial class EditFile : System.Web.UI.Page
{
int nFileID = -1;
protected void Page_Load(object sender,EventArgs e)
{
///obtain the value of parameter DirID
if(Request.Params["DirID"] != null)
{
if(Int32.TryParse(Request.Params["DirID"].ToString(),out nFileID)
== false)
{
return;
}
}
if(!Page.IsPostBack)
{ ///display the name of the directory
if(nFileID > -1)
{
BindFileData(nFileID);
}
}
}
    private void BindFileData(int nFileID)
{
string sFileName = "";
IDisk disk = new Disk();
SqlDataReader dr = disk.GetSingleFile(nFileID);
if(dr.Read())
{ ///obtain the file name(including the extension)
sFileName = dr["Name"].ToString();
}
dr.Close();
///searching for the last '.'
int dotIndex = sFileName.LastIndexOf(".");
if(dotIndex > -1)
{ ///obtain the file name(excluding the extension)
Name.Text = sFileName.Substring(0,dotIndex);
}
}
protected void EditBtn_Click(object sender,EventArgs e)
{
try
{ ///define the object
IDisk disk = new Disk();
///fulfill the database operation
disk.EditFile(nFileID,Name.Text.Trim());
            Response.Write("<script>alert('Modifying successfully. Safekeep your data!');</script>");
}
catch(Exception ex)
{ ///redirect to the exception handling page
Response.Redirect("ErrorPage.aspx?ErrorMsg=" +
ex.Message.Replace("<br>","").Replace("\n","")
+ "&ErrorUrl=" +
Request.Url.ToString().Replace("<br>","").Replace("\n",""));
}
}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return
Response.Redirect("~/Default.aspx");
}
}

LOGIN
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Windows.Forms;

public partial class Login : System.Web.UI.Page
{
WebService ws = new WebService();

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btnlogin_Click(object sender, ImageClickEventArgs e)
    {

string[] log = new string[2];


int i = 0;
log[i++] = txtusername.Text;
log[i++] = txtpassword.Text;
string chk = ws.userlogin(log);
if (chk != "Invalid User Name And Password")
{
Session["uname"]= txtusername.Text;
Session["psd"] = txtpassword.Text;
Response.Redirect("CloudLlogin.aspx");

}
else

MessageBox.Show("Invalid Username and Password");


}
}
SEARCH
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;

public partial class SearchFile : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SQLCONNECTIONSTRING"].ConnectionString);
SqlCommand cmd;
protected void Page_Load(object sender,EventArgs e)
{
if(!Page.IsPostBack)
{ ///display directory list info
BindDirectoryData();
}
}
private void BindDirectoryData()
{ ///display directory list info
Disk disk = new Disk();
disk.ShowDirectory(DirList,-1);
}
protected string FormatImageUrl(bool bFlag,string sType)
{
if(bFlag == true)
{ ///file type
return ("~/Images/folder.gif");
}
else
{
switch(sType)
{ ///bmp file
case "image/bmp": return ("~/Images/bmp.bmp");
///exe file
case "application/octet-stream": return ("~/Images/exe.bmp");
default: return ("~/Images/other.gif");
}
}
}
protected string FormatHerf(int nDirID,int nParentID,bool bFlag)
{
if(bFlag == true)
{
return ("Default.aspx?DirID=" + nDirID.ToString() + "&ParentID=" +
nParentID.ToString());
}
else
{
return ("ViewDisk.aspx?DirID=" + nDirID.ToString() + "&ParentID="
+ nParentID.ToString());
}
}
protected void SearchBtn_Click(object sender,EventArgs e)
{
string s = Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/" + s + "/"))
{
///define the object
IDisk disk = new Disk();
///execute the database operation
SqlDataReader dr = disk.SearchFiles(Name.Text.Trim());
FileView.DataSource = dr;
FileView.DataBind();
dr.Close();
}
else
{
con.Open();
            string ins = "insert into thirdparty Values(@CreditUser,@NonCreditUser,@_DateTime)";
            cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser", DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();

            System.Windows.Forms.MessageBox.Show("Security Not Allowed", "Warning",
                System.Windows.Forms.MessageBoxButtons.YesNoCancel,
                System.Windows.Forms.MessageBoxIcon.Error);
}

}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return
Response.Redirect("~/Default.aspx");
}
}

VIEW
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;
using System.Text;

public partial class ViewDisk : System.Web.UI.Page
{
int nFileID = -1;
private int nParentID = -1;
protected void Page_Load(object sender,EventArgs e)
{

///obtain the value of parameter DirID


if(Request.Params["DirID"] != null)
{
if(Int32.TryParse(Request.Params["DirID"].ToString(),out nFileID)
== false)
{
return;
}
}
if(Request.Params["ParentID"] != null)
{
if(Int32.TryParse(Request.Params["ParentID"].ToString(),out
nParentID) == false)
{
return;
}
}
if(!Page.IsPostBack)
{ ///Display the name of the directory
if(nFileID > -1)
{
BindFileData(nFileID);
}
}
}
    private void BindFileData(int nFileID)
{
IDisk disk = new Disk();
SqlDataReader dr = disk.GetSingleFile(nFileID);
if(dr.Read())
{ ///Obtain the file name(including the extension)
Name.Text = dr["Name"].ToString();
Type.Text = dr["Type"].ToString();
Contain.Text = dr["Contain"].ToString() + "B";
CreateDate.Text = dr["CreateDate"].ToString();

            ///Create the dir where the file lies
            Dir.Text = CreateDir(Int32.Parse(dr["DirID"].ToString()));
}
dr.Close();

}
public string CreateDir(int nDirID)
{
StringBuilder dirSB = new StringBuilder();
IDisk disk = new Disk();
        DataTable dataTable = SystemTools.ConvertDataReaderToDataTable(disk.GetAllDirectoryFile());

DataRow[] rowList = dataTable.Select("DirID='" + nDirID.ToString() + "'");


if(rowList.Length != 1) return("");
///Create other dirs
        InsertParentDir(dataTable, Int32.Parse(rowList[0]["ParentID"].ToString()), dirSB);
return (dirSB.ToString());
}

    private void InsertParentDir(DataTable dataTable, int nParentID, StringBuilder sDir)
    {
if(nParentID <= -1)
{
return;
}
DataRow[] rowList = dataTable.Select("DirID='" + nParentID.ToString() +
"'");
if(rowList.Length != 1) return;
sDir.Insert(0,rowList[0]["Name"].ToString() + "/");
        InsertParentDir(dataTable, Int32.Parse(rowList[0]["ParentID"].ToString()), sDir);
}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
Response.Redirect("~/Default.aspx?ParentID=" + nParentID.ToString());
}
protected void Button1_Click(object sender, EventArgs e)
{
Image1.ImageUrl = "WebDisk/" + Name.Text;
}
}
ADD
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Windows.Forms;
using System.Data.SqlClient;
public partial class AddFolder : System.Web.UI.Page
{
//3_4_2011
    SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SQLCONNECTIONSTRING"].ConnectionString);
SqlCommand cmd;
protected void Page_Load(object sender, EventArgs e)
{
if(!Page.IsPostBack)
        { ///display directory info
BindDirectoryData();
}
}
private void BindDirectoryData()
{ ///display directory info
Disk disk = new Disk();
disk.ShowDirectory(DirList,-1);
}

protected void AddBtn_Click(object sender,EventArgs e)
{
string s = Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/" + s + "/"))
{

try
{ ///define a Disk object
IDisk disk = new Disk();
///execute the database operation
disk.AddDirectory(Name.Text.Trim(), Int32.Parse(DirList.SelectedValue));
                Response.Write("<script>alert('You have succeeded in adding the info. Safekeep your data!');</script>");
}
catch (Exception ex)
{ ///jump to the exception handling page
Response.Redirect("ErrorPage.aspx?ErrorMsg=" + ex.Message.Replace("<br>",
"").Replace("\n", "")
+ "&ErrorUrl=" + Request.Url.ToString().Replace("<br>", "").Replace("\n",
""));
}
}
else
{
con.Open();
            string ins = "insert into thirdparty Values(@CreditUser,@NonCreditUser,@_DateTime)";
            cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser",DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();
MessageBox.Show("Security Not Allowed", "Warning",
MessageBoxButtons.YesNoCancel, MessageBoxIcon.Error);
}

}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return
Response.Redirect("~/Default.aspx");
}
}

6.2 SCREEN SHOTS

ADMIN

Home:
Admin Login:
Registration:
Login:
Data Stored from Cloud:
Cloud Registration:
Generate Homomorphic Key:
Find Out the Data:
Cloud User:
Cloud Database:
Create the Folder from Data Storage:
Logout the Database:
Upload Files:
Admin Login:
Download from the Data:
7. CONCLUSION

1. A third party auditing technique with anonymous authentication, which
provides user revocation and prevents replay attacks, is achieved.
2. The cloud does not know the identity of the user who stores information,
but only verifies the user’s credentials.
3. Key distribution is done in a decentralized way, and the attributes and
access policy of a user are hidden.
4. One limitation is that the cloud knows the access policy for each record
stored in the cloud.
5. In future, SQL queries can be used to hide the attributes and access
policy of a user.
6. Files stored in the cloud can be corrupted. For this issue, a file recovery
technique can be used to recover corrupted files successfully while hiding the
access policy and the user attributes.

8. FUTURE ENHANCEMENT

In this work, we investigate the problem of data security in cloud data storage,
which is essentially a distributed storage system. To achieve assurances of cloud data
integrity and availability, and to enforce the quality of dependable cloud storage service for
users, we propose an effective and flexible distributed scheme with explicit dynamic data
support, including block update, delete, and append. We rely on erasure-correcting code in
the file distribution preparation to provide redundancy parity vectors and guarantee data
dependability. Considering the time, computation resources, and related online burden of
users, we also extend the proposed main scheme to support third-party auditing, where
users can safely delegate the integrity checking tasks to third-party auditors and be
worry-free in using the cloud storage services. We further extend our result to enable the
TPA to perform audits for multiple users simultaneously and efficiently.
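The erasure-coding idea above can be illustrated with the simplest possible code, a single XOR parity block. The sketch below is illustrative only and is not part of the project's code; production schemes use Reed-Solomon codes that tolerate the loss of several blocks, but the recovery principle is the same.

```csharp
using System;
using System.Linq;

static class ParityDemo
{
    // Compute a parity block as the XOR of all data blocks.
    static byte[] MakeParity(byte[][] blocks)
    {
        var parity = new byte[blocks[0].Length];
        foreach (var b in blocks)
            for (int i = 0; i < parity.Length; i++)
                parity[i] ^= b[i];
        return parity;
    }

    // Recover one lost block by XOR-ing the parity with the surviving blocks.
    static byte[] Recover(byte[][] surviving, byte[] parity)
    {
        var lost = (byte[])parity.Clone();
        foreach (var b in surviving)
            for (int i = 0; i < lost.Length; i++)
                lost[i] ^= b[i];
        return lost;
    }

    static void Main()
    {
        byte[][] data = {
            new byte[] { 1, 2, 3 },
            new byte[] { 4, 5, 6 },
            new byte[] { 7, 8, 9 },
        };
        var parity = MakeParity(data);
        // Pretend data[1] was corrupted on one server and recover it.
        var recovered = Recover(new[] { data[0], data[2] }, parity);
        Console.WriteLine(recovered.SequenceEqual(data[1])); // True
    }
}
```

Because XOR is its own inverse, one parity block can repair exactly one lost block; tolerating more failures requires more parity vectors, which is what the scheme's erasure-correcting code provides.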
In location-based services, users with location-aware mobile devices are able to make
queries about their surroundings anywhere and at any time. While this ubiquitous
computing paradigm brings great convenience for information access, it also raises
concerns over potential intrusion into user location privacy. To protect location privacy, one
typical approach is to cloak user locations into spatial regions based on user-specified
privacy requirements, and to transform location-based queries into region-based queries.
We identify and address three new issues concerning this location cloaking approach.
First, we study the representation of cloaking regions and show that a circular region
generally leads to a small result size for region-based queries. Second, we develop a
mobility-aware location cloaking technique to resist trace analysis attacks. Two cloaking
algorithms, namely MaxAccu Cloak and MinComm Cloak, are designed based on different
performance objectives. Finally, we develop an efficient polynomial algorithm for
evaluating circular-region-based kNN queries. Two query processing modes, namely bulk
and progressive, are presented to return query results either all at once or in an incremental
manner. Experimental results show that the proposed mobility-aware cloaking algorithms
significantly improve the quality of location cloaking in terms of an entropy measure
without compromising much on query latency or communication cost. Moreover, the
progressive query processing mode achieves a shorter response time than the bulk mode by
parallelizing the query evaluation and result transmission.
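As a toy illustration of the circular cloaking region discussed above, the sketch below cloaks a position inside a circle and answers point containment, the primitive underlying circular region-based query evaluation. It is not part of the project; the coordinates and radius are made up, and real cloaking algorithms (e.g. MaxAccu Cloak) choose the circle to satisfy a stated privacy requirement.

```csharp
using System;

class CloakDemo
{
    // A circular cloaking region: centre plus radius.
    struct Circle
    {
        public double X, Y, R;
        public bool Contains(double px, double py)
        {
            double dx = px - X, dy = py - Y;
            return dx * dx + dy * dy <= R * R;
        }
    }

    static void Main()
    {
        // Cloak the user's true position (3, 4) inside a circle whose centre
        // is offset from that position, so the centre itself leaks nothing.
        var region = new Circle { X = 2.0, Y = 5.0, R = 3.0 };
        Console.WriteLine(region.Contains(3, 4)); // True: user lies inside
        Console.WriteLine(region.Contains(9, 9)); // False: far-away point
    }
}
```

A region-based query then asks the server for results relative to the whole circle rather than the exact point, which is why a compact circular representation keeps the result size small.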

9. BIBLIOGRAPHY

Good teachers are worth more than a thousand books; we have them in our
Department.

References Made From:

1. User Interfaces in C#: Windows Forms and Custom Controls, by Matthew
MacDonald.
2. Data Communications and Networking, by Behrouz A. Forouzan.
3. Operating System Concepts, by Abraham Silberschatz.
4. Amazon Web Services (AWS), online at http://aws.amazon.com
5. E.-J. Goh, “Secure Indexes,” Cryptology ePrint Archive.

Sites Referred:

 http://www.sourcefordgde.com
 http://www.networkcomputing.com/
 http://www.ieee.org
 http://www.almaden.ibm.com/software/quest/Resources/
 http://www.computer.org/publications/dlib
