Mobile Banking Report
ABSTRACT:
"Mobile Banking refers to the provision and availment of banking and financial services
with the help of mobile telecommunication devices. The scope of offered services may
include facilities to conduct bank and stock market transactions, to administer accounts
and to access customised information."
According to this model Mobile Banking can be said to consist of three inter-related
concepts:
• Mobile Accounting
• Mobile Brokerage
• Mobile Financial Information Services
Most services in the categories designated Accounting and Brokerage are transaction-
based. The non-transaction-based services of an informational nature are nevertheless
essential for conducting transactions; for instance, a balance enquiry might be needed
before committing a money remittance. Accounting and brokerage services are therefore
invariably offered in combination with information services. Information services, on the
other hand, may be offered as an independent module.
Project Overall Description:
Many believe that mobile users have just started to fully utilize the data capabilities in
their mobile phones. In Asian countries like India, China, Bangladesh, Indonesia and the
Philippines, where mobile infrastructure is comparatively better than the fixed-line
infrastructure, and in European countries, where mobile phone penetration is very high
(at least 80% of consumers use a mobile phone), mobile banking is likely to appeal even
more.
Mobile devices, especially smartphones, are the most promising way to reach the masses
and to create “stickiness” among current customers, owing to their ability to provide
services anytime and anywhere, their high rate of penetration and their potential to grow.
According to Gartner, smartphone shipments are growing fast and should top 20 million
units (of over 800 million phones sold) in 2006 alone.
In the last 4 years, banks across the globe have invested billions of dollars to build
sophisticated internet banking capabilities. As the trend is shifting to mobile banking,
there is a challenge for CIOs and CTOs of these banks to decide on how to leverage their
investment in internet banking and offer mobile banking, in the shortest possible time.
Mobile Banking is a web-based application developed to serve people's money-transfer
needs and to relieve customers' workload in their busy lives. It helps transfer money on
time and in a hassle-free manner, while ensuring that the money is securely transferred
to the receiving party.
Nowadays money transfer involves a lot of manual work and is a hefty job, and a difficult
task for people with busy lives. Customers are forced to wait in a queue at the bank and
to fill in the transfer details, and this is unavoidable. Even though these formalities can
be completed at multiple counters at different locations, working people and business
people find this unavoidable inconvenience especially burdensome.
To overcome these difficulties, Mobile Banking was developed. It ensures transfer of
money from the sender's account to the receiver's account, provided that the user has
supplied the proper account number, secret code and receiver's account number. The
end user can initiate transfers and check the information about every transfer and
withdrawal from the internet itself. An added advantage of this application is that it
ensures checkpoint reliability at every step, even if there are power shutdowns or system
crashes: since the transaction details are maintained at every moment, the transfer
process is guaranteed.
The customer also has the option of checking all previous transactions: whether a
transaction succeeded, its date and exact time, the number of transactions performed
on a particular date, and so on.
Existing System:
Over the last few years, the mobile and wireless market has been one of the fastest
growing markets in the world and it is still growing at a rapid pace. According to the
GSM Association and Ovum, the number of mobile subscribers exceeded 2 billion in
September 2005, and now exceeds 2.5 billion (of which more than 2 billion are GSM).
Proposed System:
With mobile banking, the customer may be sitting in any part of the world (true anytime,
anywhere banking), and hence banks need to ensure that their systems are up and running
in a true 24 x 7 fashion. As customers find mobile banking more and more useful, their
expectations of the solution will increase, and banks unable to meet the performance and
reliability expectations may lose customer confidence. There are systems, such as a
Mobile Transaction Platform, which enable quick and secure mobile enabling of various
banking services. Recently in India there has been phenomenal growth in the use of
mobile banking applications, with leading banks adopting mobile transaction platforms
and the Central Bank (RBI) publishing guidelines for mobile banking operations.
This system is developed with the objective of automating the online money transfer
process in a hassle-free manner and with complete reliability. Its main aim is to help
every customer transfer their money with confidence and to ensure, reliably, that the
amount has been transferred successfully.
The system is very secure and protects the account numbers and secret codes of
customers who transfer their money online from theft.
Software Requirements Specification:
Hardware Interfaces
Processor Type : Pentium IV
Speed : 2.4 GHz
RAM : 256 MB
Hard disk : 20 GB
Software Interfaces
1. Open the database folder and copy the log (.ldf) and data (.mdf) files to one of the
local drives
2. Go to Enterprise Manager, right-click on Databases and select Attach Database
3. Browse to the .mdf file on the local drive and click OK
4. The database will be attached successfully.
Open Microsoft Visual Studio, set the default page as the start page and run the
project.
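The same attach step can also be performed from a query window instead of Enterprise Manager. A minimal T-SQL sketch (the file paths and database name here are illustrative assumptions, not the project's actual paths):

```sql
-- Attach an existing data/log file pair as a database.
-- Paths and the database name are hypothetical examples.
CREATE DATABASE MobileBank
ON (FILENAME = 'C:\Data\MobileBank.mdf'),
   (FILENAME = 'C:\Data\MobileBank_log.ldf')
FOR ATTACH;
GO
```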
Screen Shots:
Home Page:
Transaction Page ( Entering A/c number and secret code)
Mini Statement
Finance Process Main Page:
Finance Process Analyze the Customer query Page:
Finance Process Analyze the Customer query result Page:
User access info page :
Check Book Request process page:
Table Design
Table Name: tblsavmain
accountid    nvarchar(50)
pwd          nvarchar(50)
balance      int

Table Name: tblsavtran
accountid    nvarchar(50)
pwd          nvarchar(50)
balance      numeric
TRANSFER MODULE:
[Flow diagram] The user first chooses an account type (currentacc or savingsacc) and
enters the account number; the system verifies the account number and account type.
Next the user enters the secret code, which is verified, and then the transfer account
number, which is also verified. Once every check passes, the transfer is completed.
ACCOUNT STATUS MODULE:
[Flow diagram] The user chooses an account type (currentacc or savingsacc) and enters
the account number and secret code; the system verifies both, after which the user can
view their statements.
LIST OF MODULES:
1) Mobile Application Development.
2) Generate IIS.
3) WML Creation.
4) Transfers.
5) Accounts status module.
6) Finance Enquiry module.
7) Check book request.
8) Access User details.
Module Description:
IIS Generation:
IIS (Internet Information Server) is a group of internet servers (including a Web or
Hypertext Transfer Protocol server and a File Transfer Protocol server) with additional
capabilities for Microsoft's Windows NT and Windows 2000 Server operating systems.
IIS can create pages for websites using Microsoft's FrontPage product (with its
WYSIWYG user interface). Web developers can use Microsoft's Active Server Pages
(ASP) technology, which means that applications - including ActiveX controls - can be
embedded in web pages that modify the content sent back to users. Developers can also
write programs that filter requests and return the correct web pages for different users
by using Microsoft's Internet Server Application Program Interface (ISAPI). Using IIS,
we host the mobile application on the local host.
WML:
WML pages are often called "decks". A deck contains a set of cards. A card element can
contain text, markup, links, input-fields, tasks, images and more. Cards can be related to
each other with links.
When a WML page is accessed from a mobile phone, all the cards in the page are
downloaded from the WAP server. Navigation between the cards is done by the computer
inside the phone itself, without any extra access trips to the server.
In our project we use MICROSOFT ASYNC 4.0 for the WML conversion.
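For illustration, a minimal WML deck with two linked cards might look like the following (the card names and text are invented for this sketch, not taken from the project):

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- First card: shown when the deck loads -->
  <card id="home" title="Mobile Bank">
    <p><a href="#balance">Check balance</a></p>
  </card>
  <!-- Second card: reached via the link above, with no extra server trip -->
  <card id="balance" title="Balance">
    <p>Your balance will appear here.</p>
  </card>
</wml>
```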
Transfer
The customer can transfer money to another account online, from their savings account
or from their current account. The transaction is done with full authentication and in
full security. If the user's balance is too low, they are not allowed to transfer the money.
Mini statement:
In this module the customer can view the transfer details of their savings account or
their current account, filtered by date. Here we provide security by requiring their
account id and password.
Finance Enquiry:
In this module the customer can make an online enquiry about different loan types, such
as car loans, two-wheeler loans, education loans and home loans, and then calculate the
EMI and the number of months for the corresponding finance enquiry.
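The EMI calculation behind such an enquiry is typically the standard amortisation formula EMI = P·r·(1+r)^n / ((1+r)^n − 1), where P is the principal, r the monthly interest rate and n the number of months. A small C# sketch (the class and method names here are our own, not part of the project):

```csharp
using System;

class EmiSketch
{
    // Standard EMI formula: P * r * (1+r)^n / ((1+r)^n - 1),
    // where r is the *monthly* rate and n the number of months.
    static double Emi(double principal, double annualRatePercent, int months)
    {
        double r = annualRatePercent / 12.0 / 100.0;
        double factor = Math.Pow(1.0 + r, months);
        return principal * r * factor / (factor - 1.0);
    }

    static void Main()
    {
        // Example: 100,000 borrowed at 12% p.a. repaid over 12 months.
        Console.WriteLine("{0:F2}", Emi(100000, 12, 12)); // prints 8884.88
    }
}
```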
Check book request:
In this module the user can apply for a check book for their account. Unauthorized users
(hackers) cannot submit a check book request.
Access User details:
In this module one can access the details of users who maintain a low balance in the
bank, as well as those who maintain a high balance.
HARDWARE INTERFACE:
Hardware includes any physical device that is connected to the computer and controlled
by the computer's microprocessor. This includes equipment that was connected to the
computer when it was manufactured, as well as peripheral equipment added later. Some
examples of such devices are modems, disk drives, printers and keyboards.
Hardware interfaces are the plugs, sockets, wires, and the electrical pulses travelling
through them in a particular pattern.
Every interface implies a function. At the hardware level, electronic signals activate
functions; data are read, written, transmitted, serviced, and analyzed for errors.
SOFTWARE DEVELOPMENT
The following software was used in our project. We have used ASP.NET with C# as the
front end and SQL Server as the back end.
.NET (dot-net) is the name Microsoft gives to its general vision of the future of
computing, the view being of a world in which many applications run in a distributed
manner across the Internet. We can identify a number of different motivations driving
this vision.
At the development end of the .NET vision is the .NET Framework. This contains
the Common Language Runtime, the .NET Framework Classes, and higher-level features
like ASP.NET (the next generation of Active Server Pages technologies) and Win Forms
(for developing desktop applications).
The Common Language Runtime (CLR) manages the execution of code compiled
for the .NET platform. The CLR has two interesting features. Firstly, its specification has
been opened up so that it can be ported to non-Windows platforms. Secondly, any
number of different languages can be used to manipulate the .NET framework classes,
and the CLR will support them. This has led one commentator to claim that under .NET
the language one uses is a 'lifestyle choice'.
Not all of the supported languages fit entirely neatly into the .NET framework,
however (in some cases the fit has been somewhat Procrustean). But the one language
that is guaranteed to fit in perfectly is C#. This new language, a successor to C++, has
been released in conjunction with the .NET framework, and is likely to be the language
of choice for many developers working on .NET applications.
ASP.NET
ASP.NET is a programming framework built on the common language runtime that
can be used on a server to build powerful Web applications. ASP.NET offers several
important advantages over previous Web development models:
ASP .NET has better language support, a large set of new controls and XML
based components, and better user authentication.
Language Support
ASP.NET pages can be written in any .NET language, such as C# or VB.NET.
User Authentication
ASP.NET allows for user accounts and roles, to give each user (with a given role)
access to different server code and executables. (You can still write your own custom
login page and custom user checking.)
High Scalability
Much has been done with ASP .NET to provide greater scalability. Server to server
communication has been greatly enhanced, making it possible to scale an application
over several servers. One example of this is the ability to run XML parsers, XSL
transformations and even resource hungry session objects on other servers.
Compiled Code
The first request for an ASP .NET page on the server will compile the ASP .NET code
and keep a cached copy in memory. The result of this is greatly increased performance.
Easy Configuration
Configuration files can be uploaded or changed while the application is running. No need
to restart the server. No more metabase or registry puzzle.
Easy Deployment
No more server restart to deploy or replace compiled code. ASP .NET simply redirects all
new requests to the new code.
Compatibility
ASP .NET is not fully compatible with earlier versions of ASP, so most of the old
ASP code will need some changes to run under ASP .NET.
To overcome this problem, ASP .NET uses a new file extension ".aspx". This will
make ASP .NET applications able to run side by side with standard ASP applications on
the same server.
HTML Server Controls
HTML elements in ASP.NET files are, by default, treated as text. To make these
elements programmable, add a runat="server" attribute to the HTML element. This
attribute indicates that the element should be treated as a server control.
Note: All HTML server controls must be within a <form> tag with the runat="server"
attribute!
Like HTML server controls, Web server controls are also created on the server and they
require a runat="server" attribute to work. However, Web server controls do not
necessarily map to any existing HTML elements and they may represent more complex
elements.
A Validation server control is used to validate the data of an input control. If the data
does not pass validation, it will display an error message to the user.
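As a concrete illustration, a server-side text box with a required-field validator might be declared as follows (the control IDs are invented for this sketch):

```html
<form runat="server">
  <!-- Server controls must sit inside a form with runat="server" -->
  <asp:TextBox id="txtAccount" runat="server" />
  <asp:RequiredFieldValidator id="valAccount" runat="server"
      ControlToValidate="txtAccount"
      ErrorMessage="Account number is required." />
</form>
```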
Most applications need data access at some point, making it a crucial component of
application development. Data access means making the application interact with a
database, where all the data is stored. Different applications have different requirements
for database access. VB.NET and C# use ADO.NET (ActiveX Data Objects .NET) as
their data access and manipulation technology, which also enables us to work with data
over the Internet. Let's take a look at why ADO.NET came into the picture, replacing
ADO.
Evolution of ADO.NET
The first data access model, DAO (Data Access Objects), was created for local databases
with the built-in Jet engine and had performance and functionality issues. Next came
RDO (Remote Data Objects) and ADO (ActiveX Data Objects), which were designed for
client-server architectures, and ADO soon took over from RDO. ADO was a good
architecture, but as languages change, so does the technology. With ADO, all the data is
contained in a recordset object, which caused problems when implemented over a
network and through firewalls. ADO used connected data access, which means that when
a connection to the database is established, the connection remains open until the
application is closed. Leaving the connection open for the lifetime of the application
raises concerns about database security and network traffic. Also, as databases become
increasingly important and serve more people, a connected data access model raises
questions about productivity. For example, an application with connected data access
may do well when serving two clients, do poorly when serving 10, and be unusable when
serving 100 or more. Open database connections also consume system resources
heavily, making the system less effective.
ADO.NET
To cope with some of the problems mentioned above, ADO.NET came into existence.
ADO.NET addresses them by using a disconnected database access model: when an
application interacts with the database, the connection is opened to serve the request
and is closed as soon as the request is completed. Likewise, if the database is updated,
the connection is opened just long enough to complete the update operation and is then
closed.
Also, when interacting with the database, ADO.NET can represent and transmit the data
in XML format, making database-related operations more interoperable.
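The disconnected pattern above can be sketched in C# as follows (the connection string and query are hypothetical examples; tblsavmain is the table from the design section of this report):

```csharp
using System.Data;
using System.Data.SqlClient;

class DisconnectedSketch
{
    static DataSet LoadAccounts()
    {
        // Connection string is an illustrative assumption.
        string connStr = "Server=localhost;Database=MobileBank;Integrated Security=true";
        var ds = new DataSet();
        using (var conn = new SqlConnection(connStr))
        {
            var adapter = new SqlDataAdapter(
                "SELECT accountid, balance FROM tblsavmain", conn);
            // Fill opens the connection, copies the rows into the DataSet,
            // and closes the connection again - the disconnected model.
            adapter.Fill(ds, "Accounts");
        }
        return ds; // usable with no open connection
    }
}
```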
DataSet
The DataSet is an in-memory, disconnected cache of data: it can hold one or more
DataTables, together with the relations between them, and can be worked on without an
open connection to the database.
Data Provider
The Data Provider is responsible for providing and maintaining the connection to
the database. A DataProvider is a set of related components that work together to provide
data in an efficient and performance driven manner. The .NET Framework currently
comes with two DataProviders: the SQL Data Provider which is designed only to work
with Microsoft's SQL Server 7.0 or later and the OleDb DataProvider which allows us to
connect to other types of databases like Access and Oracle. Each DataProvider consists of
the following component classes:
A connection object establishes the connection between the application and the database.
The command object provides direct execution of a command against the database. If the
command returns more than a single value, the command object returns a DataReader to
provide the data. Alternatively, a DataAdapter can be used to fill a DataSet, and the
database can then be updated using the command object or the DataAdapter.
Component classes that make up the Data Providers
The Connection Object
The Connection object creates the connection to the database. Microsoft Visual Studio
.NET provides two types of Connection classes: the SqlConnection object, which is
designed specifically to connect to Microsoft SQL Server 7.0 or later, and the
OleDbConnection object, which can provide connections to a wide range of database
types like Microsoft Access and Oracle. The Connection object contains all of the
information required to open a connection to the database.
The Data Reader object provides a forward-only, read-only, connected stream recordset
from a database. Unlike other components of the Data Provider, DataReader objects
cannot be directly instantiated. Rather, the DataReader is returned as the result of the
Command object's ExecuteReader method. The SqlCommand.ExecuteReader method
returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns
an OleDbDataReader object.
The DataReader can provide rows of data directly to application logic when you do not
need to keep the data cached in memory. Because only one row is in memory at a time,
the DataReader provides the lowest overhead in terms of system performance but
requires the exclusive use of an open Connection object for the lifetime of the
DataReader.
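A typical forward-only read with a DataReader looks like the following sketch (the connection string is an illustrative assumption; tblsavmain is the table from the design section):

```csharp
using System;
using System.Data.SqlClient;

class ReaderSketch
{
    static void PrintBalances()
    {
        // Hypothetical connection string.
        string connStr = "Server=localhost;Database=MobileBank;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open(); // the reader needs the connection kept open
            var cmd = new SqlCommand("SELECT accountid, balance FROM tblsavmain", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Forward-only, read-only: one row in memory at a time.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader["accountid"], reader["balance"]);
            }
        } // disposing the connection releases it for other callers
    }
}
```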
The DataAdapter is the class at the core of ADO.NET's disconnected data access. It is
essentially the middleman facilitating all communication between the database and a
DataSet. The DataAdapter is used to fill a DataTable or DataSet with data from the
database via its Fill method. After the memory-resident data has been manipulated, the
DataAdapter can commit the changes to the database by calling the Update method. The
DataAdapter provides four properties that represent database commands:
SelectCommand
InsertCommand
DeleteCommand
UpdateCommand
When the Update method is called, changes in the DataSet are copied back to the
database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is
executed.
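Putting Fill and Update together, a round trip might be sketched like this (the connection string is an assumption; a SqlCommandBuilder is used here to generate the insert/update/delete commands from the SELECT, which requires the table to have a primary key):

```csharp
using System.Data;
using System.Data.SqlClient;

class UpdateSketch
{
    static void CreditAccount(string accountId, int amount)
    {
        // Hypothetical connection string.
        string connStr = "Server=localhost;Database=MobileBank;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))
        {
            var adapter = new SqlDataAdapter(
                "SELECT accountid, pwd, balance FROM tblsavmain", conn);
            // Generates InsertCommand/UpdateCommand/DeleteCommand automatically.
            var builder = new SqlCommandBuilder(adapter);

            var table = new DataTable();
            adapter.Fill(table);                 // connection opened and closed here

            foreach (DataRow row in table.Rows)  // edit the in-memory copy
                if ((string)row["accountid"] == accountId)
                    row["balance"] = (int)row["balance"] + amount;

            adapter.Update(table);               // changes copied back to the database
        }
    }
}
```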
BACK END OF SOFTWARE:
SQL Introduction:
SQL stands for Structured Query Language and is used to pull information from
databases. SQL offers many features, making it a powerful and diverse language that
also offers a secure way to work with databases.
SQL alone can input, modify, and drop data from databases. In this section we use
command-line examples to show the basics of what can be accomplished. Together with
web languages such as HTML and PHP, SQL becomes an even greater tool for building
dynamic web pages.
Database:
A database by itself is nothing more than an empty shell, like a vacant warehouse: it
offers no real functionality whatsoever, other than holding a name. Tables are the next
tier of the tree, offering a wide scope of functionality. Following the warehouse example,
a SQL table would be the physical shelving inside the vacant warehouse. Each SQL table
is capable of housing 1024 columns (shelves). Depending on the situation, your goods
may require reorganization, reservation, or removal; SQL tables can be manipulated in
the same way, or in any fashion the situation calls for.
SQL Server:
Microsoft's SQL Server is steadily on the rise in the commercial world gaining popularity
slowly. This platform has a GUI "Windows" type interface and is also rich with
functionality. A free trial version can be downloaded at the Microsoft web site, however
it is only available to Windows users.
SQL Queries:
Queries are the backbone of SQL. Query is a loose term that refers to a widely available
set of SQL commands called clauses. Each clause (command) performs some sort of
function against the database. For instance, the CREATE clause creates tables and
databases, and the SELECT clause retrieves rows that have been inserted into your
tables. We will go into more detail as this tutorial continues, but for now let's take a
look at some query structure.
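A minimal illustration of the two clauses just mentioned (the columns mirror the tblsavmain design given earlier in this report; the inserted values are invented):

```sql
-- Create a table like the savings-account table described earlier.
CREATE TABLE tblsavmain (
    accountid NVARCHAR(50) PRIMARY KEY,
    pwd       NVARCHAR(50),
    balance   INT
);

-- Insert a sample row (values are hypothetical).
INSERT INTO tblsavmain (accountid, pwd, balance)
VALUES ('SAV1001', 'secret', 5000);

-- Select it back.
SELECT accountid, balance FROM tblsavmain WHERE balance > 1000;
```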
Views:
Views are nothing but saved SQL statements, and are sometimes referred to as “virtual
tables”. Keep in mind that views cannot store data (except for indexed views); rather,
they only refer to data present in tables.
There are two important options that can be used when a view is created:
SCHEMABINDING and ENCRYPTION. We shall look at both of these in detail shortly,
but first let's take a look at an example of a typical view creation statement without any
options.
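Such a statement might look like the following sketch (the view name and columns are illustrative, based on the tblsavmain table described earlier):

```sql
-- A typical view creation statement without any options.
CREATE VIEW vwAccountBalances
AS
    SELECT accountid, balance
    FROM tblsavmain;
GO
```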
Data storage
The main unit of data storage is a database, which is a collection of tables with
typed columns. SQL Server supports different data types, including primary types such as
Integer, Float, Decimal, Char (including character strings), Varchar (variable length
character strings), binary (for unstructured blobs of data), Text (for textual data) among
others. It also allows user-defined composite types (UDTs) to be defined and used. SQL
Server also makes server statistics available as virtual tables and views (called Dynamic
Management Views or DMVs). A database can also contain other objects including
views, stored procedures, indexes and constraints, in addition to tables, along with a
transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can
span multiple OS-level files with a maximum file size of 2^20 TB. The data in the
database is stored in primary data files with an .mdf extension. Secondary data files,
identified with an .ndf extension, allow the data of a single database to be spread across
more than one file. Log files are identified with the .ldf extension.
For physical storage of a table, its rows are divided into a series of partitions
(numbered 1 to n). The partition size is user defined; by default all rows are in a single
partition. A table is split into multiple partitions in order to spread a database over a
cluster. Rows in each partition are stored in either a B-tree or a heap structure. If the
table has an associated index that allows fast retrieval of rows, the rows are stored in
order according to their index values, with a B-tree providing the index: the actual data
sits in the leaf nodes, while the other nodes store the index values for the leaf data
reachable from them. If the index is non-clustered, the rows are not sorted according to
the index keys. An indexed view has the same storage structure as an indexed table, and
a table without an index is stored in an unordered heap structure. Both heaps and B-trees
can span multiple allocation units.
Buffer management
SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be
buffered in-memory, and the set of all pages currently buffered is called the buffer cache.
The amount of memory available to SQL Server decides how many pages will be cached
in memory. The buffer cache is managed by the Buffer Manager. Either reading from or
writing to any page copies it to the buffer cache. Subsequent reads or writes are
redirected to the in-memory copy, rather than the on-disc version.
The Buffer Manager writes a page back to disc only if the in-memory copy has not been
referenced for some time. When writing pages back to disc, asynchronous I/O is used,
whereby the I/O operation is done in a background thread so that other operations do not
have to wait for it to complete. Each page is written along with its checksum.
When the page is read back, its checksum is computed again and matched against the
stored version, to ensure the page has not been damaged or tampered with in the
meantime.
SQL Server ensures that any change to the data is ACID-compliant, i.e., it uses
transactions to ensure that any operation either completes totally or is undone if it fails,
but never leaves the database in an intermediate state. Using transactions, a sequence of
actions can be grouped together, with the guarantee that either all the actions will
succeed or none will. SQL Server implements transactions using a write-ahead log: any
change to a page updates the in-memory cache of the page, and simultaneously all the
operations performed are written to the log, along with the ID of the transaction the
operation was a part of.
Each log entry is identified by an increasing Log Sequence Number (LSN), which
ensures that no event overwrites another. SQL Server guarantees that the log is written
to disc before the actual page is written back. This enables SQL Server to ensure the
integrity of the data even if the system fails. If both the log and the page were written
before the failure, the entire change is on persistent storage and integrity is ensured. If
only the log was written (the page was either not written or not written completely), then
the actions can be read from the log and replayed to restore integrity.
If the log wasn't written, integrity is also maintained, although the database is then in a
state as if the transaction had never occurred. If the log was only partially written, the
actions associated with the unfinished transaction are discarded; since the log was only
partially written, the page is guaranteed not to have been written, again ensuring data
integrity. Removing the unfinished log entries effectively undoes the transaction. SQL
Server ensures consistency between the log and the data every time an instance is
restarted.
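The money-transfer scenario from earlier in this report is a natural fit for such a transaction. A T-SQL sketch (account ids and the amount are invented; tblsavmain is the table from the design section):

```sql
-- Move 500 from one account to another atomically:
-- either both updates commit, or neither does.
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE tblsavmain SET balance = balance - 500 WHERE accountid = 'SAV1001';
    UPDATE tblsavmain SET balance = balance + 500 WHERE accountid = 'SAV1002';

    COMMIT TRANSACTION;      -- make both updates durable
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;    -- undo both updates on any failure
END CATCH
GO
```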
Concurrency and locking
SQL Server allows multiple clients to use the same database concurrently. As
such, it needs to control concurrent access to shared data, to ensure data integrity - when
multiple clients update the same data, or clients attempt to read data that is in the process
of being changed by another client. SQL Server provides two modes of concurrency
control: pessimistic concurrency and optimistic concurrency. When pessimistic
concurrency control is being used, SQL Server controls concurrent access by using locks.
Locks can be either shared or exclusive. An exclusive lock grants the user exclusive
access to the data: no other user can access the data as long as the lock is held. Shared
locks are used when data is being read; multiple users can read from data locked with a
shared lock, but cannot acquire an exclusive lock, and would have to wait for all shared
locks to be released. Locks can be applied at different levels of granularity: on entire
tables, on pages, or even on a per-row basis. For indexes, a lock can cover either the
entire index or individual index leaves.
SQL Server also uses lighter-weight mutual-exclusion mechanisms, latches and
spinlocks, for DMVs and other resources that are usually not busy.
SQL Server also monitors all worker threads that acquire locks to ensure that they do not
end up in deadlocks - in case they do, SQL Server takes remedial measures, which in
many cases is to kill one of the threads entangled in a deadlock and rollback the
transaction it started. To implement locking, SQL Server contains the Lock Manager.
The Lock Manager maintains an in-memory table that manages the database
objects and locks, if any, on them along with other metadata about the lock. Access to
any shared object is mediated by the lock manager, which either grants access to the
resource or blocks it.
SQL Server also provides an optimistic concurrency control mechanism, similar to the
multiversion concurrency control used in other databases. Under this mechanism, a new
version of a row is created whenever the row is updated, instead of overwriting the row;
i.e., a row is additionally identified by the ID of the transaction that created that version
of it. Both the old and the new versions of the row are stored and maintained, though the
old versions are moved out of the database into the system database Tempdb.
While a row is being updated, other requests are not blocked (unlike with locking) but
are executed against the older version of the row. If the other request is an update
statement, it results in two different versions of the row, both stored by the database and
identified by their respective transaction IDs.
Data retrieval
The main mode of retrieving data from an SQL Server database is querying for it.
The query is expressed using a variant of SQL called T-SQL, a dialect Microsoft SQL
Server shares with Sybase SQL Server due to its legacy. The query declaratively specifies
what is to be retrieved. It is processed by the query processor, which figures out the
sequence of steps that will be necessary to retrieve the requested data.
It then decides the sequence in which to access the tables referred to in the query, the
sequence in which to execute the operations, and the access method to use for each
table. For example, if a table has an associated index, the query processor decides
whether the index should be used: if the index is on a column whose values are not
unique for most rows (low "selectivity"), it might not be worthwhile to use the index to
access the data. Finally, it decides whether to execute the query concurrently or not.
SQL Server also allows stored procedures to be defined. Stored procedures are
parameterized T-SQL queries that are stored in the server itself (and not issued by the
client application as is the case with general queries). Stored procedures can accept
values sent by the client as input parameters, and send back results as output parameters.
They can also call other stored procedures, and access to them can be selectively
granted. Unlike other queries, stored procedures have an associated name, which is used
at runtime to resolve them into the actual queries. Also, because the code need not be
sent from the client every time (it can be invoked by name), network traffic is reduced
and performance somewhat improved. Execution plans for stored procedures are also
cached as necessary.
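For instance, a balance lookup for this application could be wrapped in a stored procedure like the following sketch (the procedure and parameter names are invented; tblsavmain is the table from the design section):

```sql
-- A parameterized stored procedure: the client sends only the name
-- and parameter values, not the query text itself.
CREATE PROCEDURE usp_GetBalance
    @accountid NVARCHAR(50),   -- input parameter from the client
    @balance   INT OUTPUT      -- result sent back as an output parameter
AS
BEGIN
    SELECT @balance = balance
    FROM tblsavmain
    WHERE accountid = @accountid;
END
GO
```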
SQL CLR
Microsoft SQL Server 2005 includes a component named SQL CLR via which it
integrates with .NET Framework. Unlike most other applications that use .NET
Framework, SQL Server itself hosts the .NET Framework runtime, i.e., memory,
threading and resource management requirements of .NET Framework are satisfied by
SQLOS itself, rather than the underlying Windows operating system.
SQLOS provides deadlock detection and resolution services for .NET code as
well. With SQL CLR, stored procedures and triggers can be written in any managed
.NET language, including C# and VB.NET. Managed code can also be used to define
UDTs, which can be persisted in the database. Managed code is compiled to .NET
assemblies and, after being verified for type safety, registered with the database. After that,
it can be invoked like any other procedure. However, only a subset of the Base Class
Library is available when running code under SQL CLR; most APIs relating to user-interface
functionality are not available.
When writing code for SQL CLR, data stored in SQL Server databases can be
accessed using the ADO.NET APIs like any other managed application that accesses
SQL Server data. However, doing that creates a new database session, different from the
one in which the code is executing. To avoid this, SQL Server provides some
enhancements to the ADO.NET provider that allow the connection to be redirected to
the same session that already hosts the running code. Such connections are called
context connections, and are established by setting the context connection parameter to true
in the connection string. SQL Server also provides several other enhancements to the
ADO.NET API, including classes to work with tabular data or a single row of data as
well as classes to work with internal metadata about the data stored in the database. It
also provides access to the XML features in SQL Server, including XQuery support.
These enhancements are also available in T-SQL procedures as a consequence of the
introduction of the new XML datatype (with its query, value and nodes functions).
Services
SQL Server also includes an assortment of add-on services. While these are not
essential to the operation of the database system, they provide value-added services on
top of the core database management system. These services either run as part of some
SQL Server component or out-of-process as a Windows service, and present their own
APIs for control and interaction.
Service Broker
The Service Broker, which runs as a part of the database engine, provides a
reliable messaging and message-queuing platform for SQL Server applications. Within
an instance, it provides an asynchronous programming environment. For
cross-instance applications, Service Broker communicates over TCP/IP and allows the
different components to be synchronized via the exchange of messages.
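The decoupling that a message queue provides can be sketched in miniature with Python's standard library. This is an in-process stand-in for the pattern, not the Service Broker API itself; the message names are hypothetical:

```python
import queue, threading

# An in-process queue stands in for a Service Broker queue.
messages = queue.Queue()
processed = []

def worker():
    # The receiving service processes messages asynchronously.
    while True:
        msg = messages.get()
        if msg is None:          # sentinel: no more messages
            break
        processed.append("handled:" + msg)
        messages.task_done()

t = threading.Thread(target=worker)
t.start()

# The sending service enqueues work and continues without waiting.
messages.put("transfer-completed")
messages.put("balance-low-alert")
messages.put(None)
t.join()
print(processed)  # → ['handled:transfer-completed', 'handled:balance-low-alert']
```

The sender never blocks on the receiver; the queue absorbs the messages and the worker drains them on its own schedule, which is the asynchronous-programming benefit described above.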
Replication Services
SQL Server Replication Services are used by SQL Server to replicate and
synchronize database objects, either in entirety or a subset of the objects present, across
replication agents, which might be other database servers across the network, or database
caches on the client side. Replication follows a publisher/subscriber model, i.e., the
changes are sent out by one database server ("publisher") and are received by others
("subscribers"). SQL Server supports three different types of replication:
Transaction replication
Each transaction made to the publisher database (master database) is synced out to
subscribers, who update their databases with the transaction. Transactional
replication synchronizes databases in near real time.
Merge replication
Changes made at both the publisher and subscriber databases are tracked, and
periodically the changes are synchronized bi-directionally between the publisher
and the subscribers. If the same data has been modified differently in both the
publisher and the subscriber databases, synchronization will result in a conflict
which has to be resolved - either manually or by using pre-defined policies.
Snapshot replication
Snapshot replication publishes a copy of the entire database (a snapshot of the data at
that moment) and replicates it out to the subscribers. Further changes to the snapshot are
not tracked.
Analysis Services
SQL Server Analysis Services adds OLAP and data mining capabilities for SQL
Server databases. The OLAP engine supports MOLAP, ROLAP and HOLAP storage
modes for data. Analysis Services supports the XML for Analysis standard as the
underlying communication protocol. The cube data can be accessed using MDX queries.
Data-mining-specific functionality is exposed via the DMX query language. Analysis
Services includes various algorithms (decision trees, clustering, Naïve Bayes,
time-series analysis, sequence clustering, linear and logistic
regression, and neural networks) for use in data mining.
Reporting Services
SQL Server Reporting Services is a report-generation environment for data gathered
from SQL Server databases; reports can be designed, deployed to the report server and
then accessed over a web interface.
Notification Services
Introduced and available only with SQL Server 2005, SQL Server Notification
Services is a platform for generating notifications, which are sent to Notification Services
subscribers. A subscriber registers for a specific event or transaction (which is registered
on the database server as a trigger); when the event occurs, Notification Services uses
Service Broker to send a message to the subscriber informing about the occurrence of the
event.
Integration Services
SQL Server Integration Services is used to integrate data from different data
sources. It provides the ETL (extract, transform, load) capabilities of SQL Server for
data-warehousing needs. Integration Services includes GUI tools to build data-extraction
workflows integrating various functions, such as extracting data from various sources,
querying data, transforming data (including aggregating, de-duplicating and merging
data), loading the transformed data into other destinations, and sending e-mails detailing
the status of the operation.
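The extract-transform-load pattern that Integration Services implements can be sketched as follows; Python and SQLite serve as illustrative stand-ins, and the source data and table names are hypothetical:

```python
import csv, io, sqlite3

# Extract: read raw rows (an in-memory CSV stands in for a real source).
raw = io.StringIO("branch,amount\nNorth,100\nSouth,250\nNorth,50\n")
rows = list(csv.DictReader(raw))

# Transform: aggregate amounts per branch (a typical transformation step).
totals = {}
for row in rows:
    totals[row["branch"]] = totals.get(row["branch"], 0) + int(row["amount"])

# Load: write the transformed data into the destination store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE branch_totals (branch TEXT, total INTEGER)")
con.executemany("INSERT INTO branch_totals VALUES (?, ?)", totals.items())
print(sorted(con.execute("SELECT * FROM branch_totals").fetchall()))
# → [('North', 150), ('South', 250)]
```

Each of the three stages maps onto an SSIS workflow component: a source adapter, a transformation, and a destination adapter.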
System Analysis
Assuming that a new system is to be developed, the next phase is system analysis.
Analysis involves a detailed study of the current system, leading to the specification of a
new system. Analysis is a detailed study of the various operations performed by a system and
their relationships within and outside the system. During analysis, data are collected on
the available files, decision points and transactions handled by the present system.
Interviews, on-site observation and questionnaires are the tools used for system analysis.
All procedures and requirements must be analyzed and documented in the form of
detailed data flow diagrams (DFDs), a data dictionary, logical data structures and
mini-specifications. System analysis also includes subdividing the complex processes
involving the entire system and identifying data stores and manual processes.
FEASIBILITY STUDY
A feasibility study is the test of a system proposal according to its workability, impact on
the organization, ability to meet users' needs, and effective use of resources. The result of
the feasibility study is a formal proposal detailing the nature and scope of the proposed
solution. The main objective of a feasibility study is to test the technical, social and
economic feasibility of developing a computer system. This is done by investigating the
existing system in the area under investigation and generating ideas about a new system.
On studying the feasibility of the system, the following major considerations are dealt
with, to find whether automation of the system is justified.
TECHNICAL FEASIBILITY
A system that can be developed technically, and that will be used if installed, must
still be a good investment for the organization. The assessment of technical feasibility must
centre on the existing computer system (hardware, software, programs and procedures)
and the extent to which it can support the proposed system. The current technical resources
available in the organization are capable of handling the requirements of the proposed
system, including in the aspect of technical staff. Technical feasibility also involves
investigations such as whether the proposed system provides adequate response to inquiries
and whether it can meet the expectations of the various categories of people concerned with
it. Besides, some technical experts who also have computer knowledge are to be trained
on the project, enabling them to take care of technical problems. The system is developed
to meet the demands of the existing setup. The system is also reliable and easy to use. So it
is found that the project is technically feasible.
ECONOMIC FEASIBILITY
The technique of cost-benefit analysis is often used as a basis for assessing
economic feasibility. Economic feasibility deals with the analysis of costs against
benefits, i.e., whether the benefits to be enjoyed due to the new system are worthwhile
when compared to the costs to be spent on the system. Economic analysis is the most
frequently used technique for evaluating the cost-effectiveness of a proposed project.
More commonly known as cost/benefit analysis, the procedure is to determine the
benefits and savings expected from the project and compare them with the existing costs.
The costs, when compared to the benefits of the system, are much lower. Hence,
diverting the manpower spent on maintaining records to some other important work
becomes possible, which may be taken as an added advantage of this project. Accurate
and reliable information exchange at a reasonable cost is possible. Taking this into
consideration, the system is found to be economically feasible.
OPERATIONAL FEASIBILITY
Proposed projects are beneficial only if they can be turned into information
systems that will meet the company's operating requirements. Simply stated, this test of
feasibility asks whether the system will work when it is developed and installed.
The following aspects are considered during the feasibility study: the operational skills
that will be required for entering data, and the training to be given to the users.
TIME FEASIBILITY
The only point is: "Can the project be developed in time so that it can be used
before any new proposal comes to the company?" The software is feasible with respect to
time, as it will be developed within the estimated time limit.
RESOURCE FEASIBILITY
The issue under consideration here is: "Does the developer have enough resources to
develop such software and to succeed with it?" That is, the resources that would be
required to develop and implement the software. The resources include not only hardware,
software and technology but also money and manpower. It also takes into consideration
the resources required at the client side once the software has been installed.
BEHAVIORAL FEASIBILITY
People are inherently resistant to change, and any new system inevitably brings
change. The replacement of an existing system by a new one is a common reason for
resistance among people. So, for a project, behavioral feasibility is assessed in order to have
complete knowledge of what problems would be faced after implementing the software.
With this software, the user is unlikely to face any problems, as the software is highly
user-friendly.
Software Requirements Specification:
Hardware Interfaces
Processor Type : Pentium IV
Speed : 2.4 GHz
RAM : 256 MB
Hard disk : 20 GB
Software Interfaces
Label4.Text = DateTime.Today.ToShortDateString();
Label7.Text = DateTime.Now.ToShortTimeString();
}
}
protected void Command1_Click(object sender, EventArgs e)
{
if (TextBox1.Text == TextBox3.Text)
{
Label6.Visible = true;
Label6.Text = "The transaction account ids are the same, so the transaction cannot be performed";
}
else
{
ch();
}
}
public void from()
{
frm = "withdraw";
con.Open();
com = new SqlCommand("sp_ins41", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox1.Text.ToString());
com.Parameters.AddWithValue("@amo", TextBox2.Text.ToString());
com.Parameters.AddWithValue("@dat", Label4.Text.ToString());
com.Parameters.AddWithValue("@tim", Label7.Text.ToString());
com.Parameters.AddWithValue("@tran", frm.ToString());
com.Parameters.AddWithValue("@to", TextBox3.Text.ToString());
com.ExecuteNonQuery();
//con.Close();
}
public void to1()
{
con.Close();
to = "deposit";
con.Open();
com = new SqlCommand("sp_ins41", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox3.Text.ToString());
com.Parameters.AddWithValue("@amo", TextBox2.Text.ToString());
com.Parameters.AddWithValue("@dat", Label4.Text.ToString());
com.Parameters.AddWithValue("@tim", Label7.Text.ToString());
com.Parameters.AddWithValue("@tran", to.ToString());
com.Parameters.AddWithValue("@to", TextBox1.Text.ToString());
com.ExecuteNonQuery();
//con.Close();
}
public void upd1()
{
con.Close();
d = Convert.ToString(c);
con.Open();
com = new SqlCommand("sp_Up1", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox1.Text.ToString());
com.Parameters.AddWithValue("@q", d.ToString());
com.ExecuteNonQuery();
//con.Close();
}
public void upd2()
{
con.Close();
q = Convert.ToString(t);
con.Open();
com = new SqlCommand("sp_Up1", con);
com.CommandType = CommandType.StoredProcedure;
com.Parameters.AddWithValue("@acc", TextBox3.Text.ToString());
com.Parameters.AddWithValue("@q", q.ToString());
com.ExecuteNonQuery();
//con.Close();
}
public void fun()
{
// dr.Close();
con.Close();
from();
to1();
con.Close();
con.Open();
com = new SqlCommand("select balance from tblcurmain where accountid = @acc", con);
com.Parameters.AddWithValue("@acc", TextBox1.Text);
dr = com.ExecuteReader();
while (dr.Read())
{
a = dr[0].ToString();
}
//dr.Close();
//con.Close();
b = Convert.ToInt32(TextBox2.Text);
o = Convert.ToInt32(a.ToString());
c = o - b;
upd1();
con.Close();
con.Open();
com = new SqlCommand("select balance from tblcurmain where accountid = @acc", con);
com.Parameters.AddWithValue("@acc", TextBox3.Text);
dr = com.ExecuteReader();
while (dr.Read())
{
p = dr[0].ToString();
}
// dr.Close();
//con.Close();
r = Convert.ToInt32(TextBox2.Text);
s = Convert.ToInt32(p.ToString());
t = r + s;
upd2();
}
public void fun1()
{
fun();
}
public void check()
{
con.Close();
con.Open();
com = new SqlCommand("select * from tblcurmain where accountid = @acc", con);
com.Parameters.AddWithValue("@acc", TextBox3.Text);
dr = com.ExecuteReader();
if (dr.Read())
{
//if (TextBox3.Text.ToString() == dr[0].ToString())
//{
fun1();
}
else
{
Label6.Visible = true;
Label6.Text = "transfer-to account id is invalid";
}
dr.Close();
}
public void ch()
{
//string ca = Session["cur"].ToString();
con.Close();
con.Open();
com = new SqlCommand("select accountid,pwd from tblcurmain where accountid = @acc", con);
com.Parameters.AddWithValue("@acc", TextBox1.Text);
dr = com.ExecuteReader();
if (dr.Read())
{
if (TextBox1.Text.ToString() == dr[0].ToString() &&
TextBox4.Text.ToString() == dr[1].ToString())
{
bal();
}
else
{
Label6.Visible = true;
Label6.Text = "invalid Account Number or Password";
}
}
else
{
Label6.Visible = true;
Label6.Text = "invalid Account Number or Password";
}
dr.Close();
con.Close();
}
public void bal()
{
int u,u1;
con.Close();
con.Open();
com = new SqlCommand("select balance from tblcurmain where
accountid='" + TextBox1.Text.ToString() + "'", con);
dr = com.ExecuteReader();
if (dr.Read())
{
u = Convert.ToInt32(dr[0].ToString());
u1 = Convert.ToInt32(TextBox2.Text);
if (u < u1)
{
Label6.Visible = true;
Label6.Text = "your balance is too low for this transaction";
}
else
{
check();
Label6.Visible = true;
Label6.Text = "your transaction completed successfully";
}
}
else
{
Label6.Visible = true;
Label6.Text = "invalid account id";
}
//dr.Close();
//con.Close();
}
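The transfer flow implemented above (validate the sender, check the balance, debit one account, credit the other) can be sketched language-neutrally. This is an illustrative Python/SQLite version, not the project's code; it additionally wraps the whole transfer in a single transaction, so a failure cannot leave only half of the transfer applied, and uses parameterized queries throughout:

```python
import sqlite3

def transfer(con, from_acc, to_acc, amount):
    """Debit from_acc and credit to_acc atomically; mirrors ch()/bal()/fun() above."""
    if from_acc == to_acc:
        raise ValueError("transaction account ids are the same")
    with con:  # one transaction: both updates commit, or neither does
        row = con.execute("SELECT balance FROM tblcurmain WHERE accountid = ?",
                          (from_acc,)).fetchone()
        if row is None:
            raise ValueError("invalid account id")
        if row[0] < amount:
            raise ValueError("balance too low for this transaction")
        con.execute("UPDATE tblcurmain SET balance = balance - ? WHERE accountid = ?",
                    (amount, from_acc))
        credited = con.execute(
            "UPDATE tblcurmain SET balance = balance + ? WHERE accountid = ?",
            (amount, to_acc))
        if credited.rowcount == 0:  # exception rolls back the debit too
            raise ValueError("transfer-to account id is invalid")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblcurmain (accountid TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO tblcurmain VALUES (?, ?)", [("A1", 500), ("A2", 100)])
transfer(con, "A1", "A2", 200)
print(con.execute("SELECT accountid, balance FROM tblcurmain ORDER BY accountid").fetchall())
# → [('A1', 300), ('A2', 300)]
```

The key design point is the single transaction: in the code-behind above, the debit (upd1) and credit (upd2) run as separate statements on separate connections, so a crash between them could lose money in flight.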
View Page
using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Web;
using System.Web.Mobile;
using System.Web.SessionState;
using System.Web.UI;
using System.Web.UI.MobileControls;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;
using System.Data.SqlClient;
}
}
}
System Testing
Testing Stages
Unit Testing:
This test demonstrates that a single program, module or unit of code functions as
designed. Unit testing is normally white-box oriented, and the step can be conducted
in parallel for multiple modules.
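As a minimal sketch, a unit test in Python's unittest framework exercises one function in isolation; the balance-check function here is a hypothetical stand-in for one module of the system:

```python
import unittest

def sufficient_balance(balance, amount):
    """Unit under test: the balance check performed before a transfer."""
    return balance >= amount

class BalanceCheckTest(unittest.TestCase):
    def test_exact_balance_is_sufficient(self):
        self.assertTrue(sufficient_balance(100, 100))

    def test_low_balance_is_rejected(self):
        self.assertFalse(sufficient_balance(50, 100))

# Run the tests programmatically and report the overall result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BalanceCheckTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

Each test checks one behavior of one unit; because the unit has no dependencies on the rest of the system, such tests can run for many modules in parallel, as noted above.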
Integration Testing:
This test is done to validate that the multiple parts of the system interact according to
the system design. Each integrated portion of the system is then ready for testing with other
parts of the system. The objective is to take unit-tested modules and build the program
structure that has been dictated by the design.
System testing:
This test simulates operation of the entire system and confirms that it runs
correctly. The total system is also tested for recovery and fallback after various major
failures, to ensure that no data is lost during an emergency.
User Acceptance Testing:
Internal staff, customers, vendors or other users interact with the system to ensure
that it will function as desired according to the system requirements. An acceptance test
has the objective of selling the user on the validity and reliability of the system. It verifies
that the system's procedures operate to the system specification and that the integrity of
vital data is maintained.
Conclusion
The software was successfully developed to meet the needs of the client.
It was found to provide all the features required by the organization. The accuracy
and completeness of the software are also ensured.
The system provides benefits such as a user-friendly environment, effective
problem resolution and powerful search mechanisms. There is no limitation on the
number of concurrent users.
Apart from the above benefits, the system also holds the benefits provided by the
technologies used in the development. They are:
Flexibilities
The system is flexible in the sense that the changing requirements of the
user can easily be added to the application, thereby keeping the application current in the
future too.
Since the screens are designed using .NET technology, anyone who knows the
.NET design steps can continue the process from where anyone else has left off.
Since the system is web-based, the client can access the very same server
from anywhere in the globe.
Enhancements
All software products aim at a lesser degree of maintenance. This is quite natural,
but enhancements also pour in over time, which is unavoidable. Better
technologies, developers aiming for sophistication, and the increasing needs of customers
are all part and parcel of software.
BIBLIOGRAPHY
SQL: The Complete Reference, Second Edition, by James R. Groff & Paul N. Weinberg.
www.DyessConsulting.Com