1. INTRODUCTION
1.1 Abstract
1.2 Project overview and module description
2. SYSTEM SPECIFICATION
2.1 Software Specification
2.2 Hardware Specification
2.3 Software Description
3. SYSTEM ANALYSIS
3.1 Existing System
3.2 Proposed System
4. SYSTEM DESIGN
4.1 System Design
4.2 Data Flow Diagram
5. SYSTEM IMPLEMENTATION & TESTING
5.1 System Implementation
5.2 System testing
6. APPENDIX
6.1 Source code
6.2 Screenshots
7. CONCLUSION
8. FUTURE ENHANCEMENTS
9. BIBLIOGRAPHY
THIRD PARTY AUDITING SCHEME FOR CLOUD
STORAGE
1.1 ABSTRACT
This project proposes a third-party auditing scheme for secure data storage in clouds that supports anonymous authentication. In the proposed scheme, the cloud verifies the authenticity of the user without knowing the user’s identity before storing data. The scheme also has the added feature of access control, in which only valid users are able to decrypt the stored information. It prevents replay attacks and supports creation, modification, and reading of data stored in the cloud. User revocation is also addressed. Moreover, the authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds, which are centralized. The communication, computation, and storage overheads are comparable to those of centralized approaches.
1.2 PROJECT OVERVIEW AND MODULE DESCRIPTION:
As Cloud Computing becomes prevalent, more and more sensitive information is being centralized in the cloud, such as emails, personal health records, government documents, etc. By storing their data in the cloud, data owners can be relieved of the burden of data storage and maintenance and can enjoy on-demand, high-quality data storage services. However, the fact that data owners and the cloud server are not in the same trusted domain may put the outsourced data at risk, as the cloud server may no longer be fully trusted. It follows that sensitive data usually should be encrypted prior to outsourcing, both for data privacy and to combat unsolicited access. However, data encryption makes effective data utilization a very challenging task, given that there could be a large number of outsourced data files. Moreover, in Cloud Computing, data owners may share their outsourced data with a large number of users. Individual users might want to retrieve only certain specific data files they are interested in during a given session. One of the most popular ways is to selectively retrieve files through keyword-based search, instead of retrieving all the encrypted files back, which is completely impractical in cloud computing scenarios. Such a keyword-based search technique allows users to selectively retrieve files of interest and has been widely applied in plaintext search scenarios, such as Google search. Unfortunately, data encryption restricts the user’s ability to perform keyword search and thus makes traditional plaintext search methods unsuitable for Cloud Computing. Although encryption of keywords can protect keyword privacy, it further renders traditional plaintext search techniques useless in this scenario.
The mainstay of this project is to propose a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. The proposed scheme is resilient to replay attacks: a writer whose attributes and keys have been revoked cannot write back stale information. The cloud verifies the authenticity of the user without knowing the user’s identity before storing data. The scheme also has the added feature of access control, in which only valid users are able to decrypt the stored information, and it supports creation, modification, and reading of data stored in the cloud.
MODULE DESCRIPTION:
ENCRYPTION
The information about the creator is stored in an image using steganography.
The goal of steganography is to hide messages in such a way that no one apart from the intended recipient even knows that a message has been sent.
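For illustration, a minimal sketch of least-significant-bit (LSB) steganography in C# is given below. It assumes System.Drawing is available, and the class and method names (StegoHelper, HideMessage) are illustrative only, not taken from the project source.

using System.Drawing;
using System.Text;

public static class StegoHelper
{
    ///hide each bit of the message in the least significant bit of the
    ///blue channel, pixel by pixel, in row-major order
    public static void HideMessage(Bitmap image, string message)
    {
        byte[] data = Encoding.UTF8.GetBytes(message + '\0'); ///'\0' marks the end of the message
        int totalBits = data.Length * 8;
        int bitIndex = 0;
        for (int y = 0; y < image.Height && bitIndex < totalBits; y++)
        {
            for (int x = 0; x < image.Width && bitIndex < totalBits; x++)
            {
                Color pixel = image.GetPixel(x, y);
                int bit = (data[bitIndex / 8] >> (bitIndex % 8)) & 1;
                int blue = (pixel.B & 0xFE) | bit; ///replace only the lowest bit
                image.SetPixel(x, y, Color.FromArgb(pixel.A, pixel.R, pixel.G, blue));
                bitIndex++;
            }
        }
    }
}

Because only the lowest bit of one colour channel changes, the carrier image looks unchanged to the eye, which is what keeps the hidden message unnoticed by anyone other than the intended recipient.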
FILE UPLOADING
The encrypted information about the creator can be uploaded.
The creator can upload documents of any type, such as jpg, audio, video, zip, docx, etc.
The creator can view, edit, and download the uploaded documents if required.
DECRYPTION
If the trustee wants to know the creator’s personal information, the trustee can decrypt the information after getting the image file and password from the creator.
The trustee gets the decryption key from the creator; after getting the key, the trustee can decrypt the creator’s information.
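The project source does not show which cipher is used, so the sketch below assumes AES as an illustration; the DecryptHelper name is hypothetical. The trustee would call Decrypt with the key and initialization vector received from the creator.

using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class DecryptHelper
{
    ///decrypt AES-encrypted bytes with the key the trustee received from the creator
    public static string Decrypt(byte[] cipherText, byte[] key, byte[] iv)
    {
        using (Aes aes = Aes.Create())
        using (ICryptoTransform decryptor = aes.CreateDecryptor(key, iv))
        using (MemoryStream ms = new MemoryStream(cipherText))
        using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
        using (StreamReader reader = new StreamReader(cs, Encoding.UTF8))
        {
            return reader.ReadToEnd(); ///plaintext: the creator's personal information
        }
    }
}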
DOWNLOADING
2.2 HARDWARE SPECIFICATION
Mouse : Logitech
RAM : 512 MB
2.3 SOFTWARE DESCRIPTION
Features of .Net
Microsoft .NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily
and securely interoperate. There’s no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic and JScript. The .NET framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data types and
communications protocols so that components created in different languages can easily
interoperate.
“.NET” is also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).
The CLR is described as the “execution engine” of .NET. It provides the environment
within which programs run. Its most important features are:
Managed Code
Managed code is code that targets .NET and contains certain extra information, “metadata”, to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by
default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do
not. Targeting CLR can, depending on the language you’re using, impose certain constraints
on the features available. As with managed and unmanaged code, one can have both
managed and unmanaged data in .NET applications - data that doesn’t get garbage collected
but instead is looked after by unmanaged code.
The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.
The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there
are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multithreading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of
migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own; instead, it has been designed with the intention of using the .NET libraries as its own.
Fig 1. The .NET Framework (language support includes FORTRAN, COBOL, Eiffel, and others)
C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define
multiple procedures with the same name, where each procedure has a different set of
arguments. Besides using overloading for procedures, we can use it for constructors
and properties in a class.
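A short example of overloading in C# (the Calculator class is illustrative):

public class Calculator
{
    ///same procedure name, different sets of arguments
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c) { return a + b + c; }

    ///constructors can be overloaded as well
    public Calculator() { }
    public Calculator(int seed) { /* initialise with a seed value */ }
}

The compiler selects the right version from the argument types, so Add(2, 3) calls the integer version while Add(2.0, 3.0) calls the double version.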
MULTITHREADING:
C#.NET also supports multithreading through the System.Threading namespace, which allows an application to carry out several tasks at the same time.
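A minimal sketch of starting a second thread with System.Threading:

using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        ///start a worker thread alongside the main thread
        Thread worker = new Thread(() => Console.WriteLine("Worker thread running"));
        worker.Start();
        Console.WriteLine("Main thread running");
        worker.Join(); ///wait for the worker to finish
    }
}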
Features of SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL
Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL
Server 2000 Meta Data Services. References to the component now use the term Meta
Data Services. The term repository is used only in reference to the repository engine
within Meta Data Services.
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data a field will hold.
Datasheet View
To add, edit, or analyse the data itself, we work in the table’s datasheet view mode.
QUERY:
A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (which can be edited) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
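In this project the same idea is applied from code through ADO.NET. The sketch below is illustrative only: the connection string and database name (CloudDB) are assumptions, and the thirdparty table with columns CreditUser, NonCreditUser and _DateTime is inferred from the parameter names used in the appendix source.

using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        ///assumed connection string; replace with the real server details
        string connectionString = "Data Source=.;Initial Catalog=CloudDB;Integrated Security=True";
        using (SqlConnection con = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "select CreditUser, NonCreditUser, _DateTime from thirdparty", con))
        {
            con.Open();
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                ///each run returns the latest rows, like refreshing a dynaset
                while (dr.Read())
                    Console.WriteLine(dr["CreditUser"] + " | " + dr["_DateTime"]);
            }
        }
    }
}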
3.1 EXISTING SYSTEM:
➢ A single KDC is not only a single point of failure but is also difficult to maintain because of the large number of users that are supported in a cloud environment.
➢ Only one user can create and store a file; other users can only read the file.
➢ Write access is not permitted to users other than the creator.
3.2 PROPOSED SYSTEM:
➢ A third-party auditing scheme for secure data storage in clouds that supports anonymous authentication.
➢ The architecture is decentralized, meaning that there can be several KDCs for key management.
➢ A writer has to get permission from the creator to alter the stored data.
INPUT DESIGN
The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following objectives:
OBJECTIVES
1. Input is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.
2. When data is entered, it is checked for validity. Data can be entered with the help of screens, and appropriate messages are provided when needed, so that the user is not left in a maze of instructions. Thus the objective of input design is to create an input layout that is easy to follow.
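As a minimal sketch of the validity checking described above, in the same style as the appendix page handlers (the control names RegisterBtn, UserName and Age are illustrative, not from the project source):

protected void RegisterBtn_Click(object sender, EventArgs e)
{
    ///check each field entry for validity before processing
    if (UserName.Text.Trim().Length == 0)
    {
        ///appropriate message provided when needed
        Response.Write("<script>alert('User name is required.');</script>");
        return;
    }
    int age;
    if (!int.TryParse(Age.Text.Trim(), out age) || age <= 0)
    {
        Response.Write("<script>alert('Please enter a valid age.');</script>");
        return;
    }
    ///data is valid; continue with processing
}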
OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system’s relationship with the user and helps in decision-making.
DATABASE DESIGN:
Registration:
4.2 DATA FLOW DIAGRAM :
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
ARCHITECTURAL DIAGRAM
5.1 SYSTEM IMPLEMENTATION:
As Cloud Computing becomes prevalent, more and more sensitive information is being centralized in the cloud, such as emails, personal health records, government documents, etc. By storing their data in the cloud, data owners can be relieved of the burden of data storage and maintenance and can enjoy on-demand, high-quality data storage services. However, the fact that data owners and the cloud server are not in the same trusted domain may put the outsourced data at risk, as the cloud server may no longer be fully trusted. It follows that sensitive data usually should be encrypted prior to outsourcing, both for data privacy and to combat unsolicited access. However, data encryption makes effective data utilization a very challenging task, given that there could be a large number of outsourced data files. Moreover, in Cloud Computing, data owners may share their outsourced data with a large number of users. Individual users might want to retrieve only certain specific data files they are interested in during a given session. One of the most popular ways is to selectively retrieve files through keyword-based search, instead of retrieving all the encrypted files back, which is completely impractical in cloud computing scenarios. Such a keyword-based search technique allows users to selectively retrieve files of interest and has been widely applied in plaintext search scenarios, such as Google search. Unfortunately, data encryption restricts the user’s ability to perform keyword search and thus makes traditional plaintext search methods unsuitable for Cloud Computing. Besides this, data encryption also demands the protection of keyword privacy, since keywords usually contain important information related to the data files. Although encryption of keywords can protect keyword privacy, it further renders traditional plaintext search techniques useless in this scenario.
TYPES OF TESTING:
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit’s construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
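As an illustration, a unit test for the Calculator class sketched earlier could look like this (assuming the NUnit framework; the project itself does not name a test framework):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    ///each test validates one unique path with defined inputs and expected results
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        var calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }

    [Test]
    public void Add_NegativeNumber_ReturnsSum()
    {
        var calc = new Calculator();
        Assert.AreEqual(-1, calc.Add(2, -3));
    }
}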
Integration testing
Functional test
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and user
manuals.
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example
of system testing is the configuration-oriented system integration test. System testing is
based on process descriptions and flows, emphasizing pre-driven process links and
integration points.
Black Box Testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
6.1 SOURCE CODE
HOME
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;
disk.ShowDirectory(MoveDirList,-1);
}
private void BindDirectoryData(int nParentID)
{
///display directory list info
IDisk disk = new Disk();
SqlDataReader dr = disk.GetDirectoryFile(nParentID);
///bind the data of controls
DiskView.DataSource = dr;
DiskView.DataBind();
dr.Close();
}
///the following fragment belongs to another handler; the guard is inferred from
///the identical pattern in MoveBtn_Click below
string s = Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/" + s + "/"))
{
BindDirectoryData(Int32.Parse(DirList.SelectedValue));
}
else
{
con.Open();
string ins = "insert into thirdparty Values(@CreditUser, @NonCreditUser, @_DateTime)";
cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser", DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();
System.Windows.Forms.MessageBox.Show("Security Not Allowed", "Warning",
System.Windows.Forms.MessageBoxButtons.YesNoCancel,
System.Windows.Forms.MessageBoxIcon.Error);
}
}
protected void MoveBtn_Click(object sender,EventArgs e)
{
try
{ ///define a object
IDisk disk = new Disk();
foreach(GridViewRow row in DiskView.Rows)
{
CheckBox dirCheck =
(CheckBox)row.FindControl("DirCheck");
if(dirCheck != null)
{
if(dirCheck.Checked == true)
{
///fulfill database operation
disk.MoveDirectory(Int32.Parse(DiskView.DataKeys[row.RowIndex].Value.ToString()),
Int32.Parse(MoveDirList.SelectedValue));
}
}
}
///rebind the data of the controls
string s= Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/" + s + "/"))
{
BindDirectoryData(Int32.Parse(DirList.SelectedValue));
Response.Write("<script>alert('Modified successfully. Safekeep your data!');</script>");
}
else
{
con.Open();
string ins = "insert into thirdparty Values(@CreditUser, @NonCreditUser, @_DateTime)";
cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser", DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();
System.Windows.Forms.MessageBox.Show("Security Not Allowed", "Warning",
System.Windows.Forms.MessageBoxButtons.YesNoCancel,
System.Windows.Forms.MessageBoxIcon.Error);
}
}
catch(Exception ex)
{ ///jump to the exception handling page
Response.Redirect("ErrorPage.aspx?ErrorMsg=" +
ex.Message.Replace("<br>","").Replace("\n","")
+ "&ErrorUrl=" +
Request.Url.ToString().Replace("<br>","").Replace("\n",""));
}
}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return to the parent directory
Response.Redirect("~/Default.aspx?DirID=" + nParentID.ToString());
}
protected void DiskView_RowCommand(object sender, GridViewCommandEventArgs e)
{
if(e.CommandName == "delete")
{
try
{ ///delete data
IDisk disk = new Disk();
disk.DeleteFile(Int32.Parse(e.CommandArgument.ToString()));
}
catch(Exception ex)
{ ///jump to the exception handling page, as in the other handlers
Response.Redirect("ErrorPage.aspx?ErrorMsg=" + ex.Message.Replace("<br>","").Replace("\n",""));
}
}
}
protected void DiskView_RowDataBound(object sender,GridViewRowEventArgs e)
{
ImageButton deleteBtn = (ImageButton)e.Row.FindControl("DeleteBtn");
if(deleteBtn != null)
{
deleteBtn.Attributes.Add("onclick","return confirm('Are you sure you want to delete all the selected items?');");
}
}
protected void Button1_Click(object sender, EventArgs e)
{
Response.Redirect("CloudLlogin.aspx");
}
protected void DiskView_SelectedIndexChanged(object sender, EventArgs e)
{
}
}
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;
LOGIN
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Windows.Forms;
}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return
Response.Redirect("~/Default.aspx");
}
}
VIEW
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;
using System.Text;
}
public string CreateDir(int nDirID)
{
StringBuilder dirSB = new StringBuilder();
IDisk disk = new Disk();
DataTable dataTable =
SystemTools.ConvertDataReaderToDataTable(disk.GetAllDirectoryFile());
protected void AddBtn_Click(object sender,EventArgs e)
{
string s = Disk.user;
if (DirList.SelectedItem.Text.StartsWith("/" + s + "/"))
{
try
{ ///define a Disk object
IDisk disk = new Disk();
///execute the database operation
disk.AddDirectory(Name.Text.Trim(), Int32.Parse(DirList.SelectedValue));
Response.Write("<script>alert('You have succeeded in adding the info. Safekeep your data!');</script>");
}
catch (Exception ex)
{ ///jump to the exception handling page
Response.Redirect("ErrorPage.aspx?ErrorMsg=" + ex.Message.Replace("<br>",
"").Replace("\n", "")
+ "&ErrorUrl=" + Request.Url.ToString().Replace("<br>", "").Replace("\n",
""));
}
}
else
{
con.Open();
string ins = "insert into thirdparty Values(@CreditUser, @NonCreditUser, @_DateTime)";
cmd = new SqlCommand(ins, con);
cmd.Parameters.AddWithValue("@CreditUser", s);
cmd.Parameters.AddWithValue("@NonCreditUser",DirList.SelectedItem.Text);
cmd.Parameters.AddWithValue("@_DateTime", DateTime.Now.ToString());
cmd.ExecuteNonQuery();
cmd.Dispose();
con.Close();
MessageBox.Show("Security Not Allowed", "Warning",
MessageBoxButtons.YesNoCancel, MessageBoxIcon.Error);
}
}
protected void ReturnBtn_Click(object sender,EventArgs e)
{
///return
Response.Redirect("~/Default.aspx");
}
}
6.2 SCREENSHOTS
Home:
Admin Login:
Registration:
Login:
Data Stored in the Cloud:
Cloud Registration:
Generate Homomorphic Key:
Find Out the Data:
Cloud User:
Cloud Database:
Create a Folder in Data Storage:
Logout from the Database:
Upload Files:
Admin Login:
Download from the Data:
7. CONCLUSION
Files stored in the cloud can be corrupted. To address this issue, a file recovery technique is used to recover the corrupted files successfully and to hide the access policy and the user attributes.
8. FUTURE ENHANCEMENTS
In this paper, we investigate the problem of data security in cloud data storage,
which is essentially a distributed storage system. To achieve the assurances of cloud data
integrity and availability and enforce the quality of dependable cloud storage service for
users, we propose an effective and flexible distributed scheme with explicit dynamic data
support, including block update, delete, and append. We rely on erasure-correcting code in
the file distribution preparation to provide redundancy parity vectors and guarantee the data
dependability. Considering the time, computation resources, and even the related online
burden of users, we also provide the extension of the proposed main scheme to support
third-party auditing, where users can safely delegate the integrity checking tasks to third-party auditors and be worry-free in using the cloud storage services. We further extend our
result to enable the TPA to perform audits for multiple users simultaneously and efficiently.
In location-based services, users with location-aware mobile devices are able to make
queries about their surroundings anywhere and at any time. While this ubiquitous
computing paradigm brings great convenience for information access, it also raises
concerns over potential intrusion into user location privacy. To protect location privacy, one
typical approach is to cloak user locations into spatial regions based on user-specified
privacy requirements, and to transform location-based queries into region-based queries. In
this paper, we identify and address three new issues concerning this location cloaking
approach. First, we study the representation of cloaking regions and show that a circular
region generally leads to a small result size for region based queries. Second, we develop a
mobility-aware location cloaking technique to resist trace analysis attacks. Two cloaking
algorithms, namely Malacca Cloak and Mincom Cloak, are designed based on different
performance objectives. Finally, we develop an efficient polynomial algorithm for
evaluating circular region- based ken queries. Two query processing modes, namely bulk
and progressive, are presented to return query results either all at once or in an incremental
manner. Experimental results show that our proposed mobility-aware cloaking algorithms
significantly improve the quality of location cloaking in terms of an entropy measure
without compromising much on query latency or communication cost. Moreover, the
progressive query processing mode achieves a shorter response time than the bulk mode by
parallelizing the query evaluation and result transmission.
9. BIBLIOGRAPHY
Good teachers are worth more than a thousand books; we have them in our department.
Sites Referred
http://www.sourcefordgde.com
http://www.networkcomputing.com/
http://www.ieee.org
http://www.almaden.ibm.com/software/quest/Resources/
http://www.computer.org/publications/dlib