WEB BROWSERS + SEARCH ENGINES
A web browser or Internet browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web. It is also used to access information provided by web servers in private networks, or files in file systems, and some browsers can save information resources to the file system.
FEATURES:
▪ Support HTTP Secure and offer quick and easy ways to delete the web cache, cookies, and browsing history.
▪ Allow the user to open multiple information resources at the same time.
▪ Display a list of web pages that the user has bookmarked so that the user can quickly return to them.
▪ Have common user interface elements.
HISTORY OF RELEASE OF VARIOUS WEB BROWSERS:
▪ WorldWideWeb, February 26, 1991
▪ Mosaic, April 22, 1993
▪ Netscape Navigator and Netscape Communicator, October 13, 1994
▪ Internet Explorer 1, August 16, 1995
▪ Opera, 1996, see History of the Opera web browser
▪ Mozilla Navigator, June 5, 2002
▪ Safari, January 7, 2003
▪ Mozilla Firefox, November 9, 2004
▪ Google Chrome, September 2, 2008
FUNCTIONING OF WEB BROWSER:
1. Input URL.
2. The prefix of the URI determines how the URI will be interpreted.
3. The browser identifies a resource to be retrieved over the Hypertext Transfer Protocol (HTTP) (refer WWW functioning).
4. HTML is passed to the browser's layout engine to be transformed from markup to an interactive document.
RICHA GUPTA Page 1
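Step 2 of the browser flow above (the URI prefix deciding how the resource is handled) can be sketched in Python. This is a minimal illustration, not a real browser's internals; the handler descriptions are invented for the example:

```python
from urllib.parse import urlparse

def interpret(url: str) -> str:
    """Decide how a URI will be handled from its prefix (scheme),
    as in step 2 of the browser flow above."""
    scheme = urlparse(url).scheme
    if scheme in ("http", "https"):
        return "retrieve over HTTP(S), then render"  # steps 3-4
    if scheme == "file":
        return "read from the local file system"
    if scheme == "mailto":
        return "hand off to the mail client"
    return "unknown scheme"

print(interpret("https://example.com/index.html"))
print(interpret("file:///home/user/notes.html"))
```

`urlparse` splits the URL into components, of which only the scheme is needed here to dispatch to the right handler.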
SEARCH ENGINES
The search results are generally presented in a list of results and are often called hits.
A search engine operates in the following order:
1. Web crawling 2. Indexing 3. Searching
WEB CRAWLING:
A search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling.
In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages. For this, the usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
INDEXING:
An index has a single purpose: it allows information to be found as quickly as possible. There are quite a few ways for an index to be built, but one of the most effective is to build a hash table.
SEARCHING:
Once the spiders have completed the task of finding information, the search engine must store the information in a way that makes it useful. There are two key components involved in making the gathered data accessible to users:
o The information stored with the data
o The method by which the information is indexed
Searching through an index involves a user building a query and submitting it through the search engine. The query can be quite simple, a single word at minimum. Building a more complex query requires the use of Boolean operators (AND, NOT, OR, FOLLOWED BY) that allow you to refine and extend the terms of the search.
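The three stages described above (crawling by following links, indexing into a hash table, and answering a Boolean query) can be sketched end-to-end in Python. The pages, their text, and their links are toy data invented for this illustration:

```python
from collections import deque

# Toy "web": page -> (text, outgoing links). Invented for illustration.
PAGES = {
    "a.html": ("search engines index the web", ["b.html"]),
    "b.html": ("spiders crawl every link", ["a.html", "c.html"]),
    "c.html": ("an index is a hash table", []),
}

def crawl(seed):
    """Stage 1: start at a popular page and follow every link found."""
    seen, queue = set(), deque([seed])
    while queue:
        page = queue.popleft()
        if page in seen:
            continue
        seen.add(page)
        queue.extend(PAGES[page][1])
    return seen

def build_index(pages):
    """Stage 2: a hash table (a Python dict) mapping each word
    to the set of pages that contain it."""
    index = {}
    for page in pages:
        for word in PAGES[page][0].split():
            index.setdefault(word, set()).add(page)
    return index

def search_and(index, *words):
    """Stage 3: a Boolean AND query - pages containing all the words."""
    sets = [index.get(w, set()) for w in words]
    return set.intersection(*sets) if sets else set()

index = build_index(crawl("a.html"))
print(sorted(search_and(index, "index", "web")))
```

A dict is Python's built-in hash table, so each word lookup during searching takes roughly constant time, which is exactly why the indexing section favors hash tables.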
TYPES OF SEARCH ENGINES
1. Crawler-based search engines
Crawler-based search engines, such as Google (http://www.google.com), create their listings automatically. A Google query is processed in three steps:
1. The web server sends the query to the index servers. The content inside the index servers is similar to the index in the back of a book - it tells which pages contain the words that match the query.
2. The query travels to the doc servers, which actually retrieve the stored documents. Snippets are generated to describe each search result.
3. The search results are returned to the user in a fraction of a second.
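The division of labor between index servers (step 1) and doc servers (step 2) can be sketched as two hash tables. The documents and the crude snippet rule here are invented for illustration:

```python
# Index servers: word -> doc IDs (like the index in the back of a book).
INDEX = {"browser": {1, 2}, "spider": {2}}

# Doc servers: doc ID -> the stored document text.
DOCS = {
    1: "A web browser retrieves and presents information resources.",
    2: "A spider feeds crawled pages to the browser-facing index.",
}

def query(word):
    """Step 1: look up matching doc IDs in the index.
    Step 2: fetch each document and generate a snippet for it."""
    results = []
    for doc_id in sorted(INDEX.get(word, set())):
        text = DOCS[doc_id]
        results.append((doc_id, text[:30] + "..."))  # crude snippet
    return results

for doc_id, snippet in query("browser"):
    print(doc_id, snippet)
```

Keeping the index separate from the documents means the word lookup touches only small postings sets, and full document text is fetched only for the hits.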
2. Human-powered directories
A human-powered directory, such as the Open Directory Project (http://www.dmoz.org/about.html), depends on humans for its listings. (Yahoo!, which used to be a directory, now gets its information from crawlers.) A directory gets its listings from submissions, in which a short description of the entire site is provided to the directory, or from editors who write one for sites they review. A search looks for
matches only in the descriptions submitted. Changing web pages, therefore, has no
effect on how they are listed. Techniques that are useful for improving a listing with a
search engine have nothing to do with improving a listing in a directory. The only
exception is that a good site, with good content, might be more likely to get reviewed for
free than a poor site.
3. Hybrid search engines
Today, it is extremely common for crawler-type and human-powered results to be
combined when conducting a search. Usually, a hybrid search engine will favor one type of listing over another. For example, MSN Search (http://www.imagine-msn.com/search/tour/moreprecise.aspx) is more likely to present human-powered listings from LookSmart (http://search.looksmart.com/). However, it also presents crawler-based results, especially for more obscure queries.