
AN INTELLIGENT WEB-BASED VOICE CHAT BOT

ABSTRACT
Speech recognition is technology that can recognize spoken words, which can then be converted to text. A subset of speech recognition is voice recognition, the technology for identifying a person based on their voice. Facebook, Amazon, Microsoft, Google and Apple, five of the world's top tech companies, already offer this feature on various devices through services like Google Home, Amazon Echo and Siri.

Speech recognition, or speech to text, includes capturing and digitizing the sound waves, transforming them into basic linguistic units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure the correct spelling of words that sound the same. Approach: we study the possibility of designing a software system using neural networks, one of the techniques of artificial intelligence, so that the system can distinguish the voice signals of different users. The network weights are first trained on these speech patterns, after which the system produces the matching output for each pattern at high speed. The proposed neural network study is based on solutions to speech recognition tasks, detecting signals using angular modulation and detection of modulation techniques.

Most of the work done to date in the field of IVR systems has focused primarily on input mechanisms based on the keyboard or touch pad. In such cases it is tedious to provide every input command by typing text. This way of providing input to the computer system can be enhanced if we provide direct speech input instead of typing. This enables fast interaction between the system and the user, increases overall customer satisfaction, and speeds up access to information from the system.

INTRODUCTION

Using voice commands has become pretty ubiquitous nowadays, as more mobile

phone users use voice assistants such as Siri and Cortana, and as devices such as

Amazon Echo and Google Home have been invading our living rooms. These

systems are built with speech recognition software that allows their users to issue voice commands. Now, web browsers support the Web Speech API, which allows developers to integrate voice data into web apps.

With the current state of web apps, we can rely on various UI elements to interact

with users. With the Web Speech API, we can develop rich web applications with

natural user interactions and minimal visual interface, using voice commands. This

enables countless use cases for richer web applications. Moreover, the API can
make web apps accessible, helping people with physical or cognitive disabilities or

injuries. The future web will be more conversational and accessible!

This project creates an artificial intelligence (AI) voice chat interface in the

browser. The app will listen to the user’s voice and reply with a synthetic voice.

OBJECTIVES

The prime objective of the proposed project is to design and build a system that a basic user can interact with through voice commands, i.e. a system capable of recognizing isolated spoken words and processing the request to carry out the given task. The typical objectives are listed below:

• To make use of domain-specific models and algorithms in the field of speech recognition.

• To develop an interactive voice response system along with speech recognition

attribute.

• To understand the basics of speech processing.

• To get knowledge on various speech recognition approaches.

• To get insights on speech responsive application development.


LITERATURE REVIEW

Using voice input creates and caters for a more personal and convenient experience. An online chat system follows a client-server approach in which the client acquires the signal and streams it to a server. The input voice is then processed and a response is generated. This places a large demand on the server's processor and memory resources, a limitation that becomes even more evident when a large number of users must be accommodated simultaneously. Voice recognition requires a two-part process of capturing and analyzing an input signal [3]. Since the client uses the operating system's input mechanism to acquire the signal, it is possible for the client to interpret the signal as well. This offloads processing from the server and allows it to generate responses faster than when it also carries the voice processing load. Server response generation can be broken down into two categories: data retrieval and information output. The core focus of this paper is to improve the information output by generating a response that is relevant to the request, factual and personal. This requires news sources and an intelligent algorithm to generate informative, user-specific responses.


CHATBOTS CONSIST OF THREE MAJOR COMPONENTS:

The user interface, an interpreter and a database. Laven defines a chatbot as a program that attempts to simulate typed conversation, with the aim of at least temporarily fooling a human into thinking they are talking to another person. In practice, a chat-bot is a conversational agent that interacts with users on a given topic using natural language. To date, several chat-bots have been deployed on the internet for purposes such as education, customer service, guidance and entertainment. Well-known existing chat-bots include ALICE, Siri and Ok Google. AI-based chat-bots are popular because they are lightweight, easy to configure and low cost. In this paper, we build an application for college use that provides information related to the college and answers student queries. First the bot analyzes the message the user sends to the chat-bot program, then matches it against replies in the MySQL database, and the formulated answer is sent back to the user. Students select a category from a drop-down list with options such as admission, faculty details, syllabus and exams. This avoids students having to enquire at the college directly. If a new applicant enquires about admission or any particular section of the college, this bot helps answer the enquiry. Chat-bots currently live in the market use text, voice and emotion intelligence as input; in this paper, we use text as the user input.
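The category-based lookup described above can be sketched in a few lines. This is a minimal illustration only: an in-memory dict stands in for the MySQL table, and the category names and canned answers are hypothetical.

```python
# Minimal sketch of the drop-down-category reply lookup.
# A dict stands in for the MySQL table; entries are illustrative only.

FAQ_DB = {
    "admission": "Admissions open in June; apply via the college portal.",
    "syllabus": "The current syllabus is available on the department page.",
    "exams": "Semester exams are scheduled in November and April.",
}

DEFAULT_REPLY = "Sorry, I could not find an answer. Please contact the office."

def reply_for(category: str) -> str:
    """Return the canned answer for a chosen drop-down category."""
    return FAQ_DB.get(category.strip().lower(), DEFAULT_REPLY)
```

In the real system the dict lookup would be replaced by a MySQL query keyed on the selected category.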
If the present proposals are to be improved, we must provide some options. To do so, we return to basics: there is always a need to rethink the fundamental abilities on which intelligence rests.

a) Arithmetic. The power to compute is fundamental to intelligence. It covers arithmetic operations such as addition, subtraction and division. Today's machines do well on this part; they can carry out even complex calculations in no time.

b) Comparison, logic and reasoning. The scope of AI becomes wider when a system has the capability to apply logic and make assessments.

The web service processes all received queries using the response generation

module (based on the Artificial Linguistic Internet Computing Entity (ALICE)

system), which makes use of a data repository. The data repository is updated by

the content retrieval module to increase its intelligence autonomously (based on the Artificial Intelligence Markup Language, AIML).
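AIML response generation is essentially pattern-to-template matching. The sketch below imitates that idea in plain Python: a pattern with a "*" wildcard is matched against the normalized input and the captured text is substituted into the reply template. It is an illustration of the concept only, not the real ALICE/AIML engine, and the rules are invented.

```python
# Rough sketch of AIML-style wildcard pattern matching.
import re

RULES = [
    ("HELLO *", "Hello! How can I help you?"),
    ("WHAT IS *", "I will look up information about {0}."),
]

def respond(user_input: str) -> str:
    """Match the normalized input against each pattern in turn."""
    text = user_input.upper().strip(" ?.!")
    for pattern, template in RULES:
        # Turn the AIML-like pattern into a regex: "*" captures any text.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*(g.strip().lower() for g in m.groups()))
    return "I do not understand yet."
```

Real AIML adds categories, recursion (`<srai>`) and context, but the match-then-fill-template loop is the same shape.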

IBM has had a prominent role within speech recognition since its inception, releasing "Shoebox" in 1962. This machine had the ability to recognize 16

different words, advancing the initial work from Bell Labs from the 1950s.

However, IBM didn’t stop there, but continued to innovate over the years,

launching VoiceType Simply Speaking application in 1996. This speech


recognition software had a 42,000-word vocabulary, supported English and

Spanish, and included a spelling dictionary of 100,000 words. While speech

technology had a limited vocabulary in the early days, it is utilized in a wide

number of industries today, such as automotive, technology, and healthcare. Its

adoption has only continued to accelerate in recent years due to advancements in

deep learning and big data.

METHODOLOGY

Artificial Intelligence

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike

the natural intelligence displayed by humans and animals, which involves

consciousness and emotionality. The distinction between the former and the latter

categories is often revealed by the acronym chosen. 'Strong' AI is usually labelled

as AGI (Artificial General Intelligence) while attempts to emulate 'natural'

intelligence have been called ABI (Artificial Biological Intelligence). Leading AI

textbooks define the field as the study of "intelligent agents": any device that

perceives its environment and takes actions that maximize its chance of

successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is

often used to describe machines (or computers) that mimic "cognitive" functions
that humans associate with the human mind, such as "learning" and "problem

solving".

As machines become increasingly capable, tasks considered to require

"intelligence" are often removed from the definition of AI, a phenomenon known

as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done

yet." For instance, optical character recognition is frequently excluded from things

considered to be AI, having become a routine technology. Modern machine

capabilities generally classified as AI include successfully understanding human

speech, competing at the highest level in strategic game systems (such

as chess and Go), autonomously operating cars, intelligent routing in content

delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the

years since has experienced several waves of optimism, followed by

disappointment and the loss of funding (known as an "AI winter"), followed by

new approaches, success and renewed funding. After AlphaGo successfully defeated a professional Go player in 2015, artificial intelligence once again

attracted widespread global attention. For most of its history, AI research has been

divided into sub-fields that often fail to communicate with each other. These sub-

fields are based on technical considerations, such as particular goals (e.g.

"robotics" or "machine learning"), the use of particular tools ("logic" or artificial


neural networks), or deep philosophical differences. Sub-fields have also been

based on social factors (particular institutions or the work of particular

researchers). The traditional problems (or goals) of AI research include reasoning,

knowledge representation, planning, learning, natural language processing,

perception and the ability to move and manipulate objects. General intelligence is

among the field's long-term goals. Approaches include statistical methods,

computational intelligence, and traditional symbolic AI. Many tools are used in AI,

including versions of search and mathematical optimization, artificial neural

networks, and methods based on statistics, probability and economics. The AI field

draws upon computer science, information engineering, mathematics, psychology,

linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so

precisely described that a machine can be made to simulate it". This raises

philosophical arguments about the mind and the ethics of creating artificial beings

endowed with human-like intelligence. These issues have been explored by myth,

fiction and philosophy since antiquity. Some people also consider AI to be a

danger to humanity if it progresses unabated. Others believe that AI, unlike

previous technological revolutions, will create a risk of mass unemployment.

In the twenty-first century, AI techniques have experienced a resurgence following

concurrent advances in computer power, large amounts of data, and theoretical


understanding; and AI techniques have become an essential part of the technology

industry, helping to solve many challenging problems in computer science,

software engineering and operations research.

Speech Recognition

Speech recognition, also known as automatic speech recognition (ASR), computer

speech recognition, or speech-to-text, is a capability which enables a program to

process human speech into a written format. While it’s commonly confused with

voice recognition, speech recognition focuses on the translation of speech from a

verbal format to a text one whereas voice recognition just seeks to identify an

individual user’s voice.

Key features of effective speech recognition

Many speech recognition applications and devices are available, but the more

advanced solutions use AI and machine learning. They integrate grammar, syntax,

structure, and composition of audio and voice signals to understand and process

human speech. Ideally, they learn as they go — evolving responses with each

interaction.
The best systems also allow organizations to customize and adapt the

technology to their specific requirements — everything from language and nuances

of speech to brand recognition. For example:

 Language weighting: Improve precision by weighting specific words that

are spoken frequently (such as product names or industry jargon), beyond

terms already in the base vocabulary.

 Speaker labeling: Output a transcription that cites or tags each speaker’s

contributions to a multi-participant conversation.

 Acoustics training: Attend to the acoustical side of the business. Train the

system to adapt to an acoustic environment (like the ambient noise in a call

center) and speaker styles (like voice pitch, volume and pace).

 Profanity filtering: Use filters to identify certain words or phrases and

sanitize speech output.
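The "language weighting" feature above can be illustrated with a toy re-scoring step: candidate transcriptions get their scores boosted when they contain domain terms the system was told to weight. The terms, boost factors and scores below are all invented for the example.

```python
# Toy illustration of language weighting over candidate transcriptions.
# Boosted terms (e.g. product names) and all scores are made up.

BOOSTED_TERMS = {"watson": 2.0, "transcribe": 1.5}

def weighted_score(candidate: str, base_score: float) -> float:
    """Multiply the acoustic score by the boost of any weighted word."""
    score = base_score
    for word in candidate.lower().split():
        score *= BOOSTED_TERMS.get(word, 1.0)
    return score

def best_candidate(candidates):
    """candidates: list of (text, acoustic_score) pairs."""
    return max(candidates, key=lambda c: weighted_score(c[0], c[1]))[0]
```

With this weighting, "watson" can win over an acoustically more likely but out-of-domain alternative like "what sun".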

Meanwhile, speech recognition continues to advance. Companies, like IBM, are

making inroads in several areas, the better to improve human and machine

interaction.
Speech recognition algorithms

The vagaries of human speech have made development challenging. It’s

considered to be one of the most complex areas of computer science – involving

linguistics, mathematics and statistics. Speech recognizers are made up of a few

components, such as the speech input, feature extraction, feature vectors, a

decoder, and a word output. The decoder leverages acoustic models, a

pronunciation dictionary, and language models to determine the appropriate output.

Speech recognition technology is evaluated on its accuracy rate, i.e. word error rate

(WER), and speed. A number of factors can impact word error rate, such as

pronunciation, accent, pitch, volume, and background noise. Reaching human

parity – meaning an error rate on par with that of two humans speaking – has long

been the goal of speech recognition systems.
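Word error rate, as described above, is the word-level edit distance between the reference and the hypothesis, divided by the number of reference words. A standard dynamic-programming sketch:

```python
# Word error rate (WER): minimum number of substitutions, insertions
# and deletions needed to turn the hypothesis into the reference,
# normalized by the reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word in a three-word reference gives a WER of 1/3.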

Various algorithms and computation techniques are used to recognize speech into

text and improve the accuracy of transcription. Below are brief explanations of

some of the most commonly used methods:

 Natural language processing (NLP): While NLP isn’t necessarily a specific

algorithm used in speech recognition, it is the area of artificial intelligence

which focuses on the interaction between humans and machines through


language through speech and text. Many mobile devices incorporate speech

recognition into their systems to conduct voice search—e.g. Siri—or

provide more accessibility around texting.

 Hidden markov models (HMM): Hidden Markov Models build on the

Markov chain model, which stipulates that the probability of a given state

hinges on the current state, not its prior states. While a Markov chain

model is useful for observable events, such as text inputs, hidden markov

models allow us to incorporate hidden events, such as part-of-speech tags,

into a probabilistic model. They are utilized as sequence models within

speech recognition, assigning labels to each unit—i.e. words, syllables,

sentences, etc.—in the sequence. These labels create a mapping with the

provided input, allowing it to determine the most appropriate label

sequence.
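The label-sequence decoding described above is usually done with the Viterbi algorithm. Below is a tiny sketch using toy part-of-speech states; every probability table is invented for illustration, not taken from a real recognizer.

```python
# Tiny Viterbi decoding example for an HMM: infer the most likely
# sequence of hidden states (toy POS tags) for the observed words.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (best probability of reaching s at time t, best path)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), [s]) for s in states}]
    for t in range(1, len(obs)):
        col = {}
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s].get(obs[t], 0.0),
                 V[t - 1][prev][1] + [s])
                for prev in states
            )
            col[s] = (prob, path)
        V.append(col)
    return max(V[-1].values())[1]

# Invented toy model: two tags, two words.
states = ("Noun", "Verb")
start_p = {"Noun": 0.6, "Verb": 0.4}
trans_p = {"Noun": {"Noun": 0.3, "Verb": 0.7},
           "Verb": {"Noun": 0.8, "Verb": 0.2}}
emit_p = {"Noun": {"dogs": 0.6, "run": 0.1},
          "Verb": {"dogs": 0.1, "run": 0.7}}

tags = viterbi(["dogs", "run"], states, start_p, trans_p, emit_p)
# -> ['Noun', 'Verb']
```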

 N-grams: This is the simplest type of language model (LM), which assigns

probabilities to sentences or phrases. An N-gram is sequence of N-words.

For example, “order the pizza” is a trigram or 3-gram and “please order the

pizza” is a 4-gram. Grammar and the probability of certain word sequences

are used to improve recognition and accuracy.
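The N-gram idea above is easy to make concrete: extract length-N word windows from a sentence, then estimate word-sequence probabilities by counting. The corpus here is a made-up toy example.

```python
# N-gram extraction and a maximum-likelihood bigram probability.
from collections import Counter

def ngrams(words, n):
    """All contiguous length-n word tuples in order."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

trigrams = ngrams("please order the pizza".split(), 3)
# -> [('please', 'order', 'the'), ('order', 'the', 'pizza')]

def bigram_prob(corpus_words, w1, w2):
    """P(w2 | w1) estimated by counting: count(w1 w2) / count(w1)."""
    bigrams = Counter(ngrams(corpus_words, 2))
    unigrams = Counter(corpus_words)
    return bigrams[(w1, w2)] / unigrams[w1]
```

A recognizer uses such probabilities to prefer word sequences the language model has seen often.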

 Neural networks: Primarily leveraged for deep learning algorithms, neural

networks process training data by mimicking the interconnectivity of the


human brain through layers of nodes. Each node is made up of inputs,

weights, a bias (or threshold) and an output. If that output value exceeds a

given threshold, it “fires” or activates the node, passing data to the next

layer in the network. Neural networks learn this mapping function through

supervised learning, adjusting based on the loss function through the

process of gradient descent. While neural networks tend to be more

accurate and can accept more data, this comes at a performance efficiency

cost as they tend to be slower to train compared to traditional language

models.
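A single node as described, with inputs, weights, a bias and a threshold that decides whether it "fires", can be written in a few lines. The weights below are illustrative, not trained.

```python
# One neural-network node: weighted sum of inputs plus bias,
# compared against a threshold to decide whether the node fires.

def node_output(inputs, weights, bias, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0
```

With weights (0.6, 0.6) and bias -0.5, the node fires whenever at least one input is 1, behaving like a logical OR. Training (via a loss function and gradient descent, as noted above) is what would adjust these weights automatically.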

 Speaker diarization (SD): Speaker diarization algorithms identify and segment speech by speaker identity. This helps programs better distinguish individuals in a conversation and is frequently applied in call centers to distinguish customers from sales agents.

Speech recognition use cases

A wide number of industries are utilizing different applications of speech

technology today, helping businesses and consumers save time and even lives.

Some examples include:

Automotive: Speech recognizers improve driver safety by enabling voice-activated navigation systems and search capabilities in car radios.


Technology: Virtual assistants are increasingly becoming integrated within our

daily lives, particularly on our mobile devices. We use voice commands to access

them through our smartphones, such as through Google Assistant or Apple’s Siri,

for tasks, such as voice search, or through our speakers, via Amazon’s Alexa or

Microsoft’s Cortana, to play music. They’ll only continue to integrate into the

everyday products that we use, fueling the “Internet of Things” movement.

Healthcare: Doctors and nurses leverage dictation applications to capture and log

patient diagnoses and treatment notes.

Sales: Speech recognition technology has a couple of applications in sales. It can help a call center transcribe thousands of phone calls between customers and agents to identify common call patterns and issues. Cognitive bots can also talk to people via a webpage, answering common queries and solving basic requests without needing to wait for a contact center agent to be available. In both instances, speech recognition systems help reduce time to resolution for consumer issues.

Security: As technology integrates into our daily lives, security protocols are an

increasing priority. Voice-based authentication adds a viable level of security.

PYTHON
Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably through its use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library.

IMPLEMENTATION

Flask Framework is used to implement the Project.

Flask Python web app framework

A framework "is a code library that makes a developer's life easier when building reliable, scalable, and maintainable web applications" by providing reusable code or extensions for common operations. There are a number of frameworks for Python, including Flask, Tornado, Pyramid, and Django.

Flask is a micro web framework written in Python. It was developed by Armin Ronacher, who leads an international group of Python enthusiasts named Pocco. Flask is based on the Werkzeug WSGI toolkit and the Jinja2 template engine, both Pocco projects. It is classified as a microframework because it does not require particular tools or libraries. It has no database abstraction layer, form validation, or other components for which pre-existing third-party libraries provide common functions. However, Flask supports extensions that can add application features as if they were implemented in Flask itself. Extensions exist for object-relational mappers, form validation, upload handling, various open authentication technologies and several common framework-related tools. Applications that use the Flask framework include Pinterest and LinkedIn.

WSGI

Web Server Gateway Interface (WSGI) has been adopted as a standard for Python

web application development. WSGI is a specification for a universal interface

between the web server and the web applications.
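The WSGI interface described above boils down to one callable: the server invokes `app(environ, start_response)` and the application returns an iterable of byte strings. A minimal, runnable sketch using only the standard library's `wsgiref` module:

```python
# A minimal WSGI application and a direct exercise of the callable,
# the way a WSGI server would invoke it.
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # environ: dict of CGI-style request variables supplied by the server.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI"]

# Call the app directly with a synthetic request environment.
environ = {}
setup_testing_defaults(environ)

captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app(environ, start_response))
```

To actually serve it, `wsgiref.simple_server.make_server("", 8000, app)` from the standard library is enough; frameworks like Flask sit on top of exactly this interface.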

Werkzeug

It is a WSGI toolkit, which implements requests, response objects, and other

utility functions. This enables building a web framework on top of it. The Flask

framework uses Werkzeug as one of its bases.

Jinja2

Jinja2 is a popular templating engine for Python. A web templating system

combines a template with a certain data source to render dynamic web pages.
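The template-plus-data idea can be shown in miniature. Jinja2 itself uses `{{ var }}` syntax and much richer features; the standard library's `string.Template`, used here only as a stand-in for the concept, substitutes `$name` placeholders from keyword data:

```python
# Conceptual sketch of templating: combine a template with data to
# render a page. string.Template is a stdlib stand-in for Jinja2.
from string import Template

page = Template("<h1>Hello, $name!</h1><p>You have $count messages.</p>")
html = page.substitute(name="Asha", count=3)
# -> "<h1>Hello, Asha!</h1><p>You have 3 messages.</p>"
```

In Flask, the equivalent step is `render_template("page.html", name=..., count=...)`, with Jinja2 doing the substitution.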
Flask is often referred to as a micro framework. It aims to keep the core of an application simple yet extensible. Flask does not have a built-in abstraction layer for database handling, nor does it have form validation support. Instead, Flask supports extensions that add such functionality to the application.

To build the web app, we take three major steps:

1. Use the Web Speech API's SpeechRecognition interface to listen to the user's voice.

2. Apply the speech-to-text converter.

3. Generate and return the website's response according to the converted text.
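The steps above can be sketched as a small pipeline of pure functions. The browser-side capture and speech-to-text conversion are stubbed out (here `recognize` simply returns a canned transcript), and in a real deployment `respond` would sit behind a Flask route; every name here is illustrative.

```python
# Sketch of the listen -> convert -> respond pipeline with a stubbed
# recognizer. All function names and replies are illustrative.

def recognize(audio) -> str:
    """Steps 1-2 stub: pretend the Web Speech API produced this text."""
    return "what time is it"

def respond(text: str) -> str:
    """Step 3: map the transcribed text to a reply."""
    canned = {"what time is it": "Let me check the clock for you."}
    return canned.get(text, "Could you repeat that?")

reply = respond(recognize(None))
```

The reply string would finally be spoken back with the Web Speech API's synthesis interface in the browser.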

DATA FLOW DIAGRAM

Data-flow diagrams (DFDs) model a perspective of the system that is most readily

understood by users – the flow of information through the system and the activities

that process this information. Data-flow diagrams provide a graphical


representation of the system that aims to be accessible to computer specialist and

non-specialist users alike. The models enable software engineers, customers and

users to work together effectively during the analysis and specification of

requirements. Although this means that our customers are required to understand

the modeling techniques and constructs, in data-flow modeling only a limited set of

constructs are used, and the rules applied are designed to be simple and easy to

follow. These same rules and constructs apply to all data-flow diagrams (i.e., for

each of the different software process activities in which DFDs can be used).

Benefits of data-flow diagrams

Data-flow diagrams provide a very important tool for software engineering, for a

number of reasons:

• The system scope and boundaries are clearly indicated on the diagrams (more

will be described about the boundaries of systems and each DFD later in this

chapter).

• The technique of decomposition of high level data-flow diagrams to a set of more

detailed diagrams, provides an overall view of the complete system, as well as a

more detailed breakdown and description of individual activities, where this is

appropriate, for clarification and understanding.

Elements of data-flow diagrams


Four basic elements are used to construct data-flow diagrams:

• Processes

• Data-flows

• Data stores

• External entities

The rest of this section describes each of the four elements of DFDs, in terms of

their purpose, how the element is notated and the rules associated with how the

element relates to others in a diagram. Notation and software A number of

different notations exist for depicting these elements, although it is only the shape

of the symbols which vary in each case, not the underlying logic. This unit uses the

Select SSADM notation in the description and construction of data-flow diagrams.

As data-flow diagrams are not a part of the UML specification, ArgoUML and

Umbrello do not support their creation. However, Dia is free software available for

both Windows and Ubuntu which does support data-flow diagrams.

Processes

Purpose

Processes are the essential activities, carried out within the system boundary, that

use information. A process is represented in the model only where the information
which provides the input into the activity is manipulated or transformed in some

way, so that the data-flowing out of the process is changed compared to that which

flowed in. The activity may involve capturing information about something that

the organisation is interested in, such as a customer or a customer's maintenance

call. It may be concerned with recording changes to this information, a change in a

customer's address for example. It may require calculations to be carried out, such

as the quantity left in stock following the allocation of stock items to a customer's

job; or it may involve validating information, such as checking that faulty

equipment is covered by a maintenance contract.

Notation

Processes are depicted with a box, divided into three parts (figure: the notation for a process). The top left-hand box contains the process number. This is simply for identification and reference purposes, and does not in any way imply priority or sequence. The main part of the box is used to describe the process itself, giving the processing performed on the data it receives. The smaller

rectangular box at the bottom of the process is used in the Current Physical Data-
Flow Diagram to indicate the location where the processing takes place. This may

be the physical location — the Customer Services Department or the Stock Room,

for example. However, it is more often used to denote the staff role responsible for

performing the process. For example, Customer Services, Purchasing, Sales

Support, and so on.

Rules

The rules for processes are:

• Process names should be an imperative verb specific to the activity in question, followed by a pithy and meaningful description of the object of the activity, for example Create Contract or Schedule Jobs, as opposed to very general or non-specific verbs, as in Update Customer Details or Process Customer Call.

• Processes may not act as data sources or sinks. Data flowing into a process must

have some corresponding output, which is directly related to it. Similarly, data-

flowing out of a process must have some corresponding input to which it is directly

related.

• Normally only processes that transform system data are shown on data-flow

diagrams. Only where an enquiry is central to the system is it included.


• Where a process is changing data from a data store, only the changed information

flow to the data store (and not the initial retrieval from the data store) is shown on

the diagram.

• Where a process is passing information from a data store to an external entity or

another process, only the flow from the data store to the process is shown on the

diagram.

Data-flows

Purpose

A data-flow represents a package of information flowing between two objects in

the data-flow diagram. Data-flows are used to model the flow of information into

the system, out of the system, and between elements within the system.

Occasionally, a data-flow is used to illustrate information flows between two

external entities, which is, strictly speaking, outside of the system boundaries.

However, knowledge of the transfer of information between external entities can

sometimes aid understanding of the system under investigation, in which case it

should be depicted on the diagram.

Notation
A data-flow is depicted on the diagram as a directed line drawn between the

source and recipient of the data-flow, with the arrow depicting the direction of

flow (figure: notation for a data-flow). The directed line is labelled with the data-flow name,

which briefly describes the information contained in the flow. This could be a

Maintenance Contract, Service Call Details, Purchase Order, and so on. Data-flows

between external entities are depicted by dashed, rather than unbroken, lines.

Data stores

Purpose

A data store is a place where data is stored and retrieved within the system. This

may be a file, Customer Contracts file for example, a catalogue or reference list,

Options Lists for example, a log book such as the Job Book, and so on.

Notation

A data store is represented in the data-flow diagram by a long rectangle divided into two sections.


The small left-hand box is used for the identifier, which comprises a numerical

reference prefixed by a letter. The main area of the rectangle is labelled with the

name of the data store. Brief names are chosen to reflect the content of the data

store.

Rules

The rules for representing data stores are:

• One convention that could be used is to determine the letter identifying a data

store by the store's nature.

• “M” is used where a manual data store is being depicted.

• “D” is used where it is a computer based data store.

• “T” is used where a temporary data store is being represented.

• Data stores may not act as data sources or sinks. Data flowing into a data store

must have some corresponding output, and vice versa.


• Because of their function in the storage and retrieval of data, data stores often receive input data-flows from, and provide output data-flows to, a number of processes.

For the sake of clarity and to avoid crisscrossing of data-flows in the data-flow

diagram, a single data store may be included in the diagram at more than one point.

Where the depiction of a data store is repeated in this way, this is signified by

drawing a second vertical line along the left-hand edge of the rectangle for each

occurrence of the data store.
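The M/D/T lettering convention for data-store identifiers can be sketched in a few lines of code (an illustrative sketch, not from the report; the store names and the helper are invented):

```python
# Illustrative sketch: building data-store identifiers using the
# M (manual) / D (computer-based) / T (temporary) lettering convention.

STORE_LETTERS = {"manual": "M", "computer": "D", "temporary": "T"}

def store_identifier(nature, number):
    """Build an identifier such as 'D1' for a computer-based store."""
    return f"{STORE_LETTERS[nature]}{number}"

stores = [("Customer Contracts", "computer"),
          ("Job Book", "manual"),
          ("Pending Calls", "temporary")]
for n, (name, nature) in enumerate(stores, start=1):
    print(store_identifier(nature, n), name)
# D1 Customer Contracts
# M2 Job Book
# T3 Pending Calls
```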

External entities

Purpose

External entities are entities outside of the system boundary which interact with the

system, in that they send information into the system or receive information from

it. External entities may be external to the whole organisation — as in Customer

and Supplier in our running example; or just external to the application area where

users' activities are not directly supported by the system under investigation.

Accounts and Engineering are shown as external entities as they are recipients of

information from the system. Sales also provide input to the system. External

entities are often referred to as sources and sinks. All information represented within the system originates from an external entity, and data can leave the system only via an external entity.

Notation

External entities are represented on the diagram as ovals drawn outside of the

system boundary, containing the entity name and an identifier.

Names consist of a singular noun describing the role of the entity. Above the label,

a lower case letter is used as the identifier for reference purposes.

Rules

The rules associated with external entities are:

• Each external entity must communicate with the system in some way; thus there is always a data-flow between an external entity and a process within the system.

• External entities may provide data to, and receive data from, a number of processes. It may be appropriate, for the sake of clarity and to avoid crisscrossing of data-flows,

to depict the same external entity at a number of points on the diagram. Where this

is the case, a line is drawn across the left corner of the ellipse, for each occurrence
of the external entity on the diagram. Customer is duplicated in this way in our

example.
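The connectivity rule for external entities can also be checked mechanically. The sketch below (illustrative only; the entity and process names are invented, not taken from the running example's full diagram) reports any external entity with no data-flow to or from a process:

```python
# Illustrative sketch: every external entity must exchange at least one
# data-flow with a process inside the system.

def unconnected_entities(entities, processes, flows):
    """Return external entities with no flow to or from any process."""
    connected = set()
    for src, dst in flows:
        if src in entities and dst in processes:
            connected.add(src)
        elif dst in entities and src in processes:
            connected.add(dst)
    return sorted(set(entities) - connected)

entities = {"Customer", "Accounts", "Engineering"}
processes = {"Record Service Call"}
flows = [("Customer", "Record Service Call"),
         ("Record Service Call", "Accounts")]
print(unconnected_entities(entities, processes, flows))  # ['Engineering']
```

An entity flagged here either does not belong on the diagram or is missing a flow that the analyst has forgotten to draw.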
EXPERIMENTAL RESULTS

To implement the dynamic functionality, we used the Flask framework to create a website and implemented speech recognition and responses through it.
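The report does not detail the response-matching logic; a minimal keyword-based sketch of the stored-responses idea might look like the following (the Flask route and the speech-to-text step are omitted, and every keyword and response below is invented for illustration):

```python
# Hypothetical sketch of a stored-response lookup. In the real system
# this would sit behind a Flask route, with the user's speech already
# transcribed to text; both of those parts are left out here.

RESPONSES = {
    "hello": "Hello! How can I help you?",
    "contact": "You can reach us through the contact page.",
    "bye": "Goodbye, and thanks for visiting!",
}
DEFAULT = "Sorry, I did not understand that."

def reply(transcribed_text):
    """Return the first stored response whose keyword appears in the input."""
    text = transcribed_text.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT

print(reply("Hello there"))    # Hello! How can I help you?
print(reply("weather today"))  # Sorry, I did not understand that.
```

A real deployment would replace the keyword scan with better matching and feed `reply` from a speech-recognition step, but the structure of a fixed response table with a fallback answer follows the behaviour described above.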
SYSTEM REQUIREMENTS

• Operating system: Windows 10

• Memory: 4 GB RAM minimum

• Software language: Python

CONCLUSION

Thus we implemented a website-based chat-bot that attempts to improve user interaction with the website, created using the Flask framework. The chat-bot has a stored set of responses and takes dynamic user input as speech into account, and so tends to provide relevant responses.
