I.
INTRODUCTION
The World Wide Web (WWW) is increasingly being used by people all over the world, and this leads to growing network traffic. Consequently, this high traffic increases server load and latency and reduces the bandwidth available to competing requests. To overcome this situation, the web caching technique is used. A web cache reduces traffic over the Internet so that users can access web content faster. The main purpose of a cache is to place a copy of an object near the client, so that web users can access the object quickly, without the request going to the web server. There are different points at which a cache can be set up, such as the browser, a proxy server, or close to the server. When a user requests a web page, it is first checked in the cache; if the requested page is available, it is sent back to the user. If the page is not found in the cache, the request is redirected to the web server, which sends the response to the client, and along the way the cache stores the requested page.
Because of the limited size of the cache memory, it is very hard to store all objects in it. To resolve this problem, page replacement algorithms are used. These algorithms evict objects from the cache to create space for new web objects.
Fig 1. Web caching architecture: consumers with local caches, an ISP proxy web cache, and the web server; hits and misses are resolved at each cache level, with misses forwarded toward the server.
Kapil Arora et al, / (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 5 (3) , 2014, 3232 - 3235
III.
PERFORMANCE METRICS
To accomplish its objective, a page replacement policy depends on several key metrics. Based on these performance metrics we can compare the performance of different algorithms, so they play a very important role in evaluating web cache performance; a replacement policy aims to optimize them. The most commonly used metrics are hit rate, byte hit rate, saved bandwidth, and delay saving ratio.
A. Hit rate
The percentage of all requested objects that are served from the cache instead of being transferred from the origin server.
B. Byte hit
The percentage of all data that is transferred directly from the cache rather than from the origin server.
Metric                 Definition
Hit Ratio              Σi hi / Σi fi
Byte Hit Ratio         Σi Si·hi / Σi Si·fi
Saved Bandwidth        Directly related to the byte hit ratio.
Delay Saving Ratio     Σi di·hi / Σi di·fi
Average download time  Σi di·(1 − hi/fi) / ||R||
Notation:
Si = size of object i.
fi = total number of requests for object i.
hi = number of hits for object i.
di = delay incurred to retrieve object i from the server.
R = set of objects which are accessed.
||R|| = size of R.
Fig 2. Performance metrics [5]
C. Bandwidth Saved
Saved bandwidth measures the reduction in the amount of bandwidth consumed.
D. Delay Saving Ratio
Measures the reduction in latency when fetching an object.
These performance metrics are the heart of web caching algorithms. They are used to evaluate replacement algorithms with respect to the objects that are requested and present in the cache memory, the bytes saved by avoiding retransmission, and the decrease in latency when retrieving a requested object.
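The metric definitions in Fig 2 can be computed directly from per-object counters. The sketch below assumes a simple list-of-dicts request summary (the sample values are illustrative, not from the paper):

```python
# Sketch: computing hit ratio, byte hit ratio and delay saving ratio
# from per-object counters, following the definitions in Fig 2.

def metrics(objects):
    """objects: list of dicts with size S, requests f, hits h, delay d."""
    hit_ratio = sum(o["h"] for o in objects) / sum(o["f"] for o in objects)
    byte_hit_ratio = (sum(o["S"] * o["h"] for o in objects) /
                      sum(o["S"] * o["f"] for o in objects))
    delay_saving = (sum(o["d"] * o["h"] for o in objects) /
                    sum(o["d"] * o["f"] for o in objects))
    return hit_ratio, byte_hit_ratio, delay_saving

# Illustrative counters for two objects.
objs = [
    {"S": 10, "f": 4, "h": 3, "d": 0.2},
    {"S": 50, "f": 2, "h": 1, "d": 0.5},
]
hr, bhr, dsr = metrics(objs)
print(hr, bhr, dsr)
```

Note how a cache can have a high hit ratio but a lower byte hit ratio when large objects miss more often, as in this sample.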
IV.
Performance metrics parameters
There are some parameters which are used to evaluate the performance.
A. Throughput
The number of requests generated by clients per second.
B. Hit ratio
The ratio of the number of objects found in the cache to the total number of objects requested.
C. Byte hit ratio
The ratio of the number of bytes served by the cache to the total number of bytes sent to clients.
D. Cache age
The time after which the cache becomes full.
E. Downtime
The time taken to recover from cache failures [1].
V.
Category of cache page replacement policy
Cache page replacement policies are classified into the following categories:
A. Recency-based
Algorithms in this category work on a time basis, i.e. the time since the last reference to an object. The representative algorithm of this category is LRU (Least Recently Used), which has been deployed in a number of proxy caching servers.
B. Size-based
Size-based cache replacement policies consider the object size as the basic parameter. The LFU-Size algorithm comes under this category.
C. Frequency-based
Frequency-based cache replacement policies work on the frequency of an object, i.e. the number of times an object is accessed. The representative algorithm of this category is LFU.
D. Function-based
Function-based algorithms are driven by a cost function involving multiple parameters related to the performance metrics used. The most representative algorithm of this category is Greedy-Dual Size [3].
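As a concrete illustration of a function-based policy, Greedy-Dual Size assigns each object a priority H = L + cost/size, evicts the object with the smallest H, and raises the "inflation" value L to that priority. The class below is a minimal sketch of that priority rule (the object names and costs are illustrative):

```python
# Minimal sketch of the Greedy-Dual Size priority function: each object
# gets H = L + cost/size; the object with the smallest H is evicted and
# L is raised to the evicted priority (an inflation clock).

class GreedyDualSize:
    def __init__(self):
        self.L = 0.0
        self.H = {}          # object -> priority

    def access(self, obj, cost, size):
        # On access or insertion, (re)compute the object's priority.
        self.H[obj] = self.L + cost / size

    def evict(self):
        victim = min(self.H, key=self.H.get)
        self.L = self.H.pop(victim)   # inflate the clock
        return victim

gds = GreedyDualSize()
gds.access("a", cost=1.0, size=4)   # H = 0.25
gds.access("b", cost=1.0, size=2)   # H = 0.5
print(gds.evict())                  # evicts "a" (smallest H)
```

The inflation of L is what lets recently inserted objects outrank long-idle ones, blending recency into a size/cost rule.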
VI.
CACHE REPLACEMENT POLICY
Cache replacement policies play a crucial part in web caching; a sophisticated caching mechanism depends on them. These policies evict objects from the cache to make space for incoming objects. Because of its limited size, a cache cannot store every requested object, so a replacement policy is used to make room for new documents: when the cache is full and a new object must be inserted, an existing object must be evicted. Several cache replacement policies play an important role in web caching. These algorithms are:
A. Least Recently Used (LRU)
The Least Recently Used page replacement policy is simple and easy to use. It works on time-stamps: when the cache exceeds its maximum size, it evicts the object that has not been used for the longest time and puts the new object in place of the evicted one. If the cache is not full, the object is simply inserted into the cache memory. For example, with a reference string of 14 pages, LRU works as follows:
Fig 3. LRU page replacement with three frames on the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1: on each fault the newly requested page replaces the least recently used frame.
Fig 4. LFU page replacement with three frames on the same reference string.
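The eviction behaviour described above can be sketched with an ordered dictionary, where the least recently used page sits at the front (the names `LRUCache`, `store`, and `access` are mine, not from the paper):

```python
from collections import OrderedDict

# Minimal LRU cache sketch: on access, a page is moved to the
# most-recently-used end; when the cache is full, the page at the
# least-recently-used end is evicted. Three frames mirror the example.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, page):
        if page in self.store:
            self.store.move_to_end(page)   # refresh recency: cache hit
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False) # evict least recently used
        self.store[page] = None
        return False                       # cache miss

cache = LRUCache(3)
for page in [7, 0, 1, 2, 0, 3, 0, 4]:
    cache.access(page)
print(list(cache.store))  # frames from least to most recently used → [3, 0, 4]
```

Replaying the first eight references of the example string leaves pages 3, 0 and 4 resident, with 3 next in line for eviction.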
VII.
COMPARISON OF LRU AND LFU
Consider the page faults incurred by Least Recently Used (LRU) and Least Frequently Used (LFU) in Fig. 3 and Fig. 4. There are 10 page faults under LRU and 9 page faults under LFU. Comparing the two, we can conclude that LFU performs better than LRU because it incurs fewer page faults.
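The fault counts above can be reproduced by replaying the reference string under both policies. The simulator below is a sketch; the tie-breaking rule for LFU (evict the least recently used among equally frequent pages) is my assumption, and the counts depend on it:

```python
# Sketch: count page faults for LRU and LFU on a reference string
# with a fixed number of frames.

def count_faults(refs, frames, policy):
    cache, faults, freq, last = [], 0, {}, {}
    for t, p in enumerate(refs):
        freq[p] = freq.get(p, 0) + 1   # access frequency
        last[p] = t                    # last access time
        if p in cache:
            continue                   # hit: nothing to do
        faults += 1
        if len(cache) >= frames:
            if policy == "LRU":
                victim = min(cache, key=lambda q: last[q])
            else:  # LFU, ties broken toward the least recently used
                victim = min(cache, key=lambda q: (freq[q], last[q]))
            cache.remove(victim)
        cache.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1]
print(count_faults(refs, 3, "LRU"), count_faults(refs, 3, "LFU"))  # → 10 9
```

With three frames this string yields 10 faults for LRU and 9 for LFU, matching the comparison above.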
VIII.
PROPOSED SYSTEM
In the proposed system, a threshold value TSD is introduced to evict historical objects that have not been used for a long time; both recency and frequency are used. If an object placed in memory has not been used for a long interval of time, it must be removed to make room for a new object.
The frequency division (F.D.) is the average frequency of the cached objects. When a new object arrives, the object with the least timestamp is located and its priority is compared with the frequency division. If its priority is greater than the frequency division, we look at the object with the second smallest timestamp. If the difference between the two timestamps is greater than TSD, we remove the first (oldest) document; otherwise we compare the second object's priority with the frequency division: if it is greater than the frequency division, we remove the first document, else the second.
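The victim-selection rule above can be sketched as follows. This is only my reading of the description; the names (`choose_victim`, the `(timestamp, frequency)` tuples) and the behaviour when the oldest object's priority is not above F.D. are assumptions, not the authors' implementation:

```python
# Rough sketch of the proposed policy: F.D. is the average frequency of
# cached objects; TSD is a timestamp-difference threshold. Assumes the
# cache holds at least two objects when eviction is needed.

def choose_victim(cache, TSD):
    """cache: dict obj -> (timestamp, frequency). Returns object to evict."""
    FD = sum(f for _, f in cache.values()) / len(cache)  # frequency division
    by_time = sorted(cache, key=lambda o: cache[o][0])   # oldest first
    first, second = by_time[0], by_time[1]
    if cache[first][1] <= FD:
        return first            # oldest object is also infrequent: evict it
    if cache[second][0] - cache[first][0] > TSD:
        return first            # oldest object is stale beyond TSD: evict it
    # otherwise decide by the second-oldest object's priority
    return first if cache[second][1] > FD else second

demo = {"a": (1, 5), "b": (3, 1), "c": (9, 2)}
print(choose_victim(demo, TSD=5))
```

In this sample, "a" is oldest but frequently used and not stale beyond TSD, so the infrequent second-oldest object "b" is evicted instead.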
IX.
CONCLUSION
A web cache increases the performance of the system by reducing the server load for requested objects and reducing latency. In this paper, we reviewed the performance metrics used to measure cache performance and the parameters meant for evaluating a web cache. We discussed the categories of page replacement algorithms, such as recency-based, frequency-based, and function-based, page replacement policies such as LRU and LFU, and applications related to web caching, and concluded that LFU performs better than Least Recently Used (LRU). Our proposed system removes the drawbacks of LRU and LFU.
Fig 5. Flowchart of the proposed replacement policy: when a new item arrives, the priority of the cache entry is checked against the frequency division (if p[n] < F.D., replace cache[n] with the new item) and the timestamp difference against the threshold (if T.S[n] − T.S[m] > TSD, replace cache[m] with the new item; otherwise end).
REFERENCES
[1] S.V. Nagaraj, Web Caching and Its Applications, Kluwer Academic Publishers, 2004.
[2] Vinit A. Kakde, Sanjay K. Mishra, Amit Sinhal, Mahendra S. Mahalle, "Survey of Effective Web Cache Algorithm", International Journal of Science and Engineering Investigations, Volume 1, Issue 1, February 2012.
[3] George Pallis, Athena Vakali, Eythimis Sidiropoulos, "FRES-CAR: An Adaptive Cache Replacement Policy", IEEE, 2005.
[4] K. Geetha, N. Ammasai Gounden, S. Monikandan, "SEMALRU: An Implementation of Modified Web Cache Replacement Algorithm", IEEE, 2009.
[5] Abdullah Balamash and Marwan Krunz, "An Overview of Web Caching Replacement Algorithms", IEEE Communications Surveys & Tutorials, Vol. 6 (2), 2004.
[6] Harshal N. Datir, Yogesh H. Gulhane, P.R. Deshmukh, "Analysis and Performance Evaluation of Web Caching Algorithms", International Journal of Engineering Science and Technology, February 2011.
[7] Naizheng Bian and Hao Chen, "A Least Page Replacement Algorithm for Web Cache Optimization", Workshop on Knowledge Discovery and Data Mining, 2008.
[8] Brad Whitehead, Chung-Horng Lung, Amogelang Tapela, Gopinath Sivarajah, "Experiments of Large File Caching and Comparisons of Cache Algorithms", IEEE International Symposium on Network Computing and Applications, 2008.