ISSN: 2394-2630
Review Article CODEN(USA): JSERBR
Jitendra Kumawat
Abstract
Grid computing is a term referring to the combination of computer resources from multiple
administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-
interactive workloads that involve a large number of files. What distinguishes grid computing from conventional
high performance computing systems such as cluster computing is that grids tend to be more loosely coupled,
heterogeneous, and geographically dispersed. Although a grid can be dedicated to a specialized application, it is
more common that a single grid will be used for a variety of different purposes. Grids are often constructed with
the aid of general-purpose grid software libraries known as middleware.
Grid size can vary by a considerable amount. Grids are a form of distributed computing whereby a “super virtual
computer” is composed of many networked loosely coupled computers acting together to perform very large
tasks. Furthermore, “distributed” or “grid” computing, in general, is a special type of parallel computing that
relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected
to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in
contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-
speed computer bus.
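To make the contrast concrete, the sketch below illustrates the master/worker pattern such a "super virtual computer" rests on: a coordinator hands independent tasks over ordinary TCP connections to whatever complete, networked machines volunteer as workers. This is an illustrative sketch only, not drawn from any particular grid middleware; the port number and the squaring workload are assumptions made for the example.

```python
# Minimal sketch of the master/worker pattern behind a "super virtual
# computer": a coordinator hands independent tasks to whichever worker
# machines connect over an ordinary TCP network. Port and workload are
# illustrative assumptions.
import pickle
import socket

PORT = 9099  # assumed; any free port works

def coordinator(tasks):
    """Hand out one task per worker connection; collect the results."""
    results = []
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        pending = list(tasks)
        while pending:
            conn, _addr = srv.accept()  # any networked machine may connect
            with conn:
                conn.sendall(pickle.dumps(pending.pop()))
                # Small payloads assumed, so one recv suffices here.
                results.append(pickle.loads(conn.recv(4096)))
    return results

def worker(coordinator_host):
    """Run on each participating machine: fetch a task, compute, reply."""
    with socket.create_connection((coordinator_host, PORT)) as conn:
        task = pickle.loads(conn.recv(4096))
        conn.sendall(pickle.dumps(task ** 2))  # stand-in for real work

if __name__ == "__main__":
    # On the coordinator machine: print(coordinator(range(10)))
    # On each worker machine:     worker("coordinator.example.org")
    pass
```

Each worker is a complete computer reached through a conventional network interface, exactly the loose coupling that distinguishes a grid from a bus-connected supercomputer.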
Keywords Grid computing, distributed computing, middleware, parallel computing.
Introduction
The growth of the Internet, along with the availability of powerful computers and high-speed networks as low-
cost commodity components, is changing the way scientists and engineers do computing, and is also changing
how society in general manages information and information services. These new technologies have enabled the
clustering of a wide variety of geographically distributed resources, such as supercomputers, storage systems,
data sources, instruments, and special devices and services, which can then be used as unified resources.
Furthermore, they have enabled seamless access to and interaction among these distributed resources, services,
applications, and data. The new paradigm that has evolved is popularly termed “Grid” computing. Grid
Computing and the utilization of the global Grid infrastructure have presented significant challenges at all levels
including conceptual and implementation models, application formulation and development, programming
systems, infrastructures and services, resource management, networking and security, and led to the
development of a global research community.
Increased network bandwidth, more powerful computers, and the acceptance of the Internet have driven the on-
going demand for new and better ways to compute. Commercial enterprises, academic institutions, and research
organizations continue to take advantage of these advancements, and constantly seek new technologies and
practices that enable them to conduct business in new ways. However, many challenges remain. Increasing
pressure on development and research costs, faster time-to-market, greater throughput, and improved quality
and innovation are always foremost in the minds of administrators, while computational needs are outpacing
the ability of organizations to deploy sufficient resources to meet growing workload demands.
On top of these challenges is the need to handle dynamically changing workloads. The truth is, flexibility is key.
In a world with rapidly changing markets, both research institutions and enterprises need to quickly provide
compute power where it is needed most. Indeed, if systems could be dynamically created when they are needed,
teams could harness these resources to increase innovation and better achieve their objectives.
The origins of grid computing can be traced to the 1980s and early 1990s and the tremendous amount of research
being done on parallel programming and distributed systems. Parallel computers in a variety of architectures had
become commercially available, and networking hardware and software were becoming more widely deployed.
To effectively program these new parallel machines, a long list of parallel programming languages and tools was
being developed and evaluated [2]. This list included Linda, Concurrent Prolog, BSP, Occam, Program
Composition Notation (PCN), Fortran-D, Compositional C++, pC++, Mentat, Nexus, lightweight threads, and the
Parallel Virtual Machine (PVM), to name just a few.
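As a flavor of the coordination models on that list, here is a toy sketch of Linda's tuple space: processes cooperate by depositing tuples into, and withdrawing pattern-matched tuples from, a shared associative store. Real Linda implementations distribute the space across machines; this single-process miniature only illustrates the out/in operations.

```python
# Toy, in-process tuple space in the spirit of Linda: workers coordinate
# through a shared associative store rather than direct messages.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Deposit a tuple into the space."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def in_(self, pattern):
        """Withdraw (blocking) a tuple matching the pattern;
        None in the pattern matches any field."""
        with self._cond:
            while True:
                for t in self._tuples:
                    if len(t) == len(pattern) and all(
                        p is None or p == f for p, f in zip(pattern, t)
                    ):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

space = TupleSpace()
space.out(("task", 7))
print(space.in_(("task", None)))  # -> ('task', 7)
```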
“The Metacomputer is similar to an electricity grid. When you turn on your light, you don't care where the
power comes from; you just want the light to come on. The same is true for computer users. They want their
job to run on the best possible machine and they really don't care how that gets done.”
The trials and tribulations of such an arduous demonstration paid off, since it crystallized, for a much broader
segment of the scientific community, what was possible and what needed to be done [3]. In early 1996, the
Globus Project officially got under way, after being proposed to ARPA in November 1994. The process and
communication middleware system called Nexus [4] was originally built by Argonne National Laboratory
essentially to be a compiler target and to provide remote service requests across heterogeneous machines for
application codes written in a higher-level language. The goal of the Globus Project was to build a global Nexus
that would provide support for resource discovery, resource composition, data access, and authentication.
The first Globus applications were demonstrated at the Supercomputing conference.
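The remote-service-request idea can be illustrated with a short sketch. The code below uses plain XML-RPC from the Python standard library as a stand-in for Nexus itself (it is not the Nexus API); the handler name, port, and payload are assumptions made for the example.

```python
# Illustrative stand-in for a Nexus-style remote service request: the
# caller names a handler, and that handler executes on the remote
# machine that hosts it. XML-RPC substitutes for Nexus here; handler
# name and port are hypothetical.
from xmlrpc.server import SimpleXMLRPCServer

def handle_request(payload):
    """Handler registered on the remote node; runs where it is hosted."""
    return "processed %r remotely" % (payload,)

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(handle_request)
# server.serve_forever()  # run this on the remote node

# On the calling side (any machine, any architecture):
#   from xmlrpc.client import ServerProxy
#   proxy = ServerProxy("http://remote-node.example.org:8000")
#   print(proxy.handle_request({"job": 42}))
```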
The culture of many organizations may be fundamentally opposed to this kind of resource sharing. Some
organizational units may jealously guard their machines or data out of a perceived economic or security threat.
Short Term
For the short term (within the next two years), Grid is most likely to be introduced into large organizations as
internal 'Enterprise grids', i.e. built behind firewalls and used within a limited trust domain, perhaps with
controlled links to external grids. A good analogy would be the adoption into business of the Internet, where the
first step was often the roll-out of a secure internal company 'Intranet', with a gradual extension of capabilities
(and hence opportunity for misuse) towards fully ubiquitous Internet access. Centralized management is
expected to be the only way to guarantee quality of service. Typically, users of this early technology will be
expecting to achieve IT cost reduction, increased efficiency, some innovation and flexibility in business
processes. At the same time the distinction between web services and grid services is expected to disappear,
with the capabilities of one merging into the other and the interoperability between the two standards being
taken for granted.
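As a small illustration of that convergence (a sketch under assumed names, not a real grid middleware API), a grid-style job-submission service can be exposed through exactly the plain-HTTP machinery a web service would use:

```python
# Sketch of the web-service/grid-service convergence: a compute "grid
# service" reached through an ordinary HTTP endpoint. The /jobs path
# and payload shape are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobService(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/jobs":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        job = json.loads(self.rfile.read(length))
        # A real grid service would queue the job onto managed
        # resources; this sketch just echoes an acknowledgement.
        body = json.dumps({"accepted": True, "job": job}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), JobService).serve_forever()
```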
Medium Term
In the midterm (say a five year timeframe) expect to see wider adoption - largely for resource virtualization and
mass access. The technology will be particularly appropriate for applications that utilize broadband and
mobile/air interfaces, such as on-line gaming, 'visualization-on-demand' and applied industrial research. The
emphasis will move from use within a single organization to use across organizational domains and within
Virtual Organizations, requiring issues such as ownership, management and accounting to be handled within
trusted partnerships. There will be a shift in value from provision of computer power to provision of information
and knowledge. At the same time, open-standards-based tooling for building service-oriented applications is
likely to emerge, and Grid technology will start to be incorporated into 'off-the-shelf' products. This will lead to
standard consumer access to virtualized compute and data resources, enabling a whole new range of consumer
services to be delivered.
Long Term
In the longer term, Grid is likely to become a prerequisite for business success - central to business processes,
new types of service, and a central component of product development and customer solutions. A key business
change will be the establishment of trusted service providers, probably acting on a global scale and disrupting
the current supply chains and regulatory environments.
References
1. F. J. Corbató and V. A. Vyssotsky, “Introduction and Overview of the Multics System”, Proc. AFIPS
1965 FJCC, 27(1), 1965, 185-196.
2. D. Skillicorn and D. Talia, “Models and Languages for Parallel Computation”, ACM Computing
Surveys, 30(2), 1998, 123-169.
3. R. Stevens, P. Woodward, T. DeFanti and C. Catlett, “From the I-WAY to the National Technology
Grid”, Communications of the ACM, 40(11), 1997, 51-60.
4. I. Foster, C. Kesselman and S. Tuecke, “The Nexus Task-Parallel Runtime System”, in Proceedings of the
First International Workshop on Parallel Processing, 1994, 457-462.