
MEERUT INSTITUTE OF ENGINEERING AND TECHNOLOGY, MEERUT

Session: 2022-23

MINI PROJECT REPORT


ON
“Text-to-Speech Converter”

BACHELOR OF TECHNOLOGY
(COMPUTER SCIENCE AND ENGINEERING- AI)

Submitted to:
Mr. Ajay Kumar Sah

Submitted by:
Lavnesh Kumar 2000681540029
Anant Soam 2100681540009

7th Semester
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
(Data Science)

MEERUT INSTITUTE OF ENGINEERING AND TECHNOLOGY, MEERUT


Table of Contents

Declaration
Certificate
Acknowledgement
Chapter 1: Introduction
Chapter 2: Objective and Scope
Chapter 3: System Design
Chapter 4: Methodology
Chapter 5: Flow Diagram
Chapter 6: Technology Bucket
Chapter 7: Result
Chapter 8: Conclusion & Future Work
Appendices: Implementation of Code

DECLARATION

We hereby declare that the project entitled “Text-to-Speech Converter”, which is being
submitted as a Mini Project in the Department of Computer Science and Engineering (DS) to Meerut
Institute of Engineering and Technology, Meerut (U.P.), is an authentic record of our genuine
work done under the guidance of Assistant Professor Mr. Ajay Kumar Sah of Computer Science
and Engineering (DS), Meerut Institute of Engineering and Technology, Meerut.

Date: 6/12/2024

Anant Soam (2000681540009)

Lavnesh Kumar (2000681540029)

Place: MIET, Meerut.

CERTIFICATE

This is to certify that the mini project report entitled “Text-to-Speech Converter”, submitted by
Lavnesh Kumar and Anant Soam, has been carried out under the guidance of the Mini Project
Committee of Computer Science and Engineering, Meerut Institute of Engineering and
Technology, Meerut. This project report is approved for the Mini Project (KCS752) in the 7th semester of
CSE(DS) at Meerut Institute of Engineering and Technology, Meerut.

Supervisor: Mr. Ajay Kumar Sah
Assistant Professor

Mr. Rohit Aggarwal
Head of Department, CSE(DS)
MIET, Meerut

Date: 6/12/2024

ACKNOWLEDGEMENT

We express our sincere indebtedness towards our guide, the Mini Project Committee of Computer
Science and Engineering (DS), Meerut Institute of Engineering and Technology, Meerut, for their
valuable suggestions, guidance and supervision throughout the work. Without their kind patronage
and guidance the project would not have taken shape. We would also like to express our
gratitude and sincere regards for their kind approval of the project and their time-to-time
counselling and advice.

We would also like to thank our HoD, Mr. Rohit Aggarwal, Department of Computer
Science and Engineering (DS), Meerut Institute of Engineering and Technology, Meerut, for his
expert advice and counselling from time to time.

We owe sincere thanks to all the faculty members of the Department of Computer Science and
Engineering (DS) for their kind guidance and encouragement from time to time.

Date: 6/12/2024

Students' names:
Lavnesh Kumar (2000681540029)
Anant Soam (2000681540009)

CHAPTER-1
Introduction

The Text-to-Speech (TTS) Converter project is designed to bridge the gap between
written content and auditory consumption. With the rapid growth of digital content, it
has become increasingly important to ensure that people with visual impairments,
learning disabilities like dyslexia, or those who simply prefer listening to text can
access information effortlessly. The goal of this project is to convert written text into
clear, natural-sounding speech, enhancing accessibility and making content available
to a wider audience.

An essential feature of this tool is its customizability. Users can select between
different voice types (male, female) and accents, while also adjusting the speed
and pitch of the speech to suit their personal preferences. This level of
personalization ensures that the TTS converter can cater to a wide range of users,
from those who need a slower pace to others who prefer a quicker delivery. Such
customization also enhances user satisfaction and improves the overall
experience.

The Text-to-Speech Converter has vast applications across various domains. It
can be used in education to help students with reading difficulties access learning
materials, in the workplace to improve productivity by allowing professionals to
listen to documents while multitasking, and for individuals with visual
impairments to navigate digital content. The system also fosters greater
inclusivity by allowing everyone to access digital information, regardless of their
physical or cognitive abilities.

CHAPTER-2
Objective and Scope

The primary objective of the Text-to-Speech Converter is to create a tool that
accurately converts written text into clear, natural-sounding speech. This system
is designed to enhance accessibility for individuals with visual impairments,
dyslexia, or other reading difficulties, allowing them to listen to written content
easily. By utilizing advanced speech synthesis and natural language processing
techniques, the converter ensures high-quality audio output that mimics human speech.

Another key objective is to provide users with a customizable experience.


The system allows users to adjust speech speed and tone, and to choose from a variety
of voices and accents, ensuring that the speech output meets personal
preferences. The ultimate goal is to create a user-friendly tool that empowers
individuals to consume text-based information in an efficient and engaging
way, while improving overall accessibility and inclusivity.

CHAPTER-3
System Design

The Text-to-Speech Converter is built with a modular and scalable architecture
that ensures efficiency, ease of use, and high-quality performance. The core
components of the system include the User Interface (UI), Text Preprocessing
Module, Natural Language Processing (NLP) Engine, Speech Synthesis
Engine, and Customization Options, all working together to seamlessly convert
written text into natural-sounding speech.

1. User Interface (UI)

The User Interface is the entry point for users, providing a simple and intuitive
experience. It features a text input field where users can type or paste content.
The interface includes buttons for play, pause, and stop functionality, along with
customizable options such as speech speed, voice selection, and accent
preference. The UI ensures that the system is accessible to both technical and
non-technical users.

2. Text Preprocessing Module

This module prepares the input text for accurate speech synthesis by performing
tasks such as text normalization, where abbreviations are expanded (e.g., "U.S."
to "United States"), and tokenization, which splits the text into smaller units like
words and sentences. It also handles error correction by removing any special
characters or unrecognized symbols that could impact the quality of the speech.
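
A minimal sketch of such a preprocessing step is shown below. The abbreviation table and the regular expressions are illustrative assumptions, not the project's actual rules.

import re

# Illustrative abbreviation table (hypothetical entries, not the full list).
ABBREVIATIONS = {"U.S.": "United States", "Dr.": "Doctor"}

def normalize(text):
    # Text normalization: expand known abbreviations into full forms.
    for abbr, full in ABBREVIATIONS.items():
        text = text.replace(abbr, full)
    # Error correction: strip special characters the synthesizer may mispronounce.
    return re.sub(r"[^\w\s.,!?'-]", "", text)

def tokenize(text):
    # Tokenization: split into sentences, keeping the end punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

print(tokenize(normalize("Dr. Smith visited the U.S. in 2023! Amazing trip?")))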

3. Natural Language Processing (NLP) Engine

The NLP Engine is responsible for understanding the text’s structure and
context. It analyzes sentence syntax, ensures proper pauses by interpreting
punctuation, and resolves ambiguities (e.g., homographs such as “lead” the verb and “lead” the metal).
This ensures that the synthesized speech is contextually accurate and flows
naturally.
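
As an illustration of punctuation-driven pausing, the sketch below maps sentence-final punctuation to pause lengths. The pause values and the use of NLTK's sentence tokenizer are assumptions for demonstration, not the engine's actual behaviour.

from nltk.tokenize import sent_tokenize  # requires: pip install nltk; nltk.download("punkt")

# Assumed pause lengths in milliseconds, keyed by sentence-final punctuation.
PAUSE_MS = {".": 500, "!": 600, "?": 600}

def sentences_with_pauses(text):
    for sentence in sent_tokenize(text):
        # A downstream synthesizer would insert this much silence after the sentence.
        yield sentence, PAUSE_MS.get(sentence[-1], 400)

for sent, pause in sentences_with_pauses("Wait here. Did it work? It did!"):
    print(f"{sent!r} -> pause {pause} ms")
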
4. Speech Synthesis Engine

The Speech Synthesis Engine converts the processed text into speech using
advanced models like WaveNet or Tacotron. These deep learning models
generate natural-sounding voices by analyzing patterns in human speech,
providing high-quality output with proper intonation, rhythm, and expression.
The system supports multiple languages and accents, offering a more
personalized experience for users.

5. Customization Options

To enhance user experience, the system offers customization features such as
voice selection (male or female), speech speed adjustment, and pitch control.
Users can also select different accents (e.g., American, British, Australian).
These options ensure that the TTS output meets individual preferences for a more
tailored and comfortable listening experience.
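
A minimal sketch of these options using the pyttsx3 library (an offline TTS engine) follows. Which voices are available is platform-dependent, so the voice index here is an assumption; pyttsx3 exposes rate, volume, and voice selection, while pitch control depends on the underlying engine.

import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)    # speech speed in words per minute (default is about 200)
engine.setProperty("volume", 0.9)  # volume from 0.0 to 1.0

# Voice selection: the available voices (and their accents) vary by platform.
voices = engine.getProperty("voices")
if len(voices) > 1:
    engine.setProperty("voice", voices[1].id)  # index 1 is often, not always, a female voice

engine.say("Hello! This is a customized voice.")
engine.runAndWait()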

6. Output Generation

Once the text is converted into speech, users can either listen to it immediately
via real-time playback or download it as an audio file (e.g., MP3 or WAV). The
ability to generate and save audio files enables users to access content offline and
share it with others.
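
For file output, one option is the gTTS library, which wraps Google's online TTS service; a sketch follows (the tld parameter selects a regional accent, and an internet connection is required).

from gtts import gTTS

tts = gTTS("This audio can be replayed offline.", lang="en", tld="co.uk")  # British accent
tts.save("output.mp3")  # downloadable audio file for offline listening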

CHAPTER-4
Methodology
The methodology for the Text-to-Speech Converter involves a series of steps, from text input to
speech output. The system is built using a combination of pre-processing, Natural Language
Processing (NLP), speech synthesis, and customizability options. Each step of the process is
carefully designed to ensure the production of clear, natural, and accurate speech.
1. Text Input and Preprocessing

The first step in the methodology is to collect the text input from the user. This can be done via typing or
pasting text into the system’s user interface. Once the text is received, the Text Preprocessing Module
processes it by performing essential tasks like text normalization (converting abbreviations into full forms),
tokenization (splitting the text into manageable units such as words or sentences), and error correction
(removing unnecessary characters that might affect pronunciation).

2. Natural Language Processing (NLP)

After preprocessing, the text is passed to the NLP Engine, where it is analyzed for proper context and
syntax. The NLP engine helps in handling issues like pronunciation of homophones (e.g., “lead” vs. “led”)
and determining the correct pause based on punctuation. The engine ensures that the system can handle
complex text, ensuring natural and fluid speech output.

3. Speech Synthesis

The heart of the system lies in the Speech Synthesis Engine, which converts the processed text into speech.
This engine uses deep learning models like WaveNet or Tacotron to generate human-like speech. These
models analyze speech patterns, such as tone, rhythm, and intonation, to produce natural-sounding output.
The system ensures that the synthesized speech closely mimics human voice, offering a smooth listening
experience.

4. Customization Options

The methodology includes allowing users to personalize their experience. The system offers options to
adjust speech speed, pitch, and voice selection (male or female, different accents), providing flexibility to
suit different preferences. These adjustments are applied in real time during speech synthesis.

5. Output Generation

Finally, the system generates the speech output, which can be played immediately or saved as an audio file
(e.g., MP3 or WAV). Users can listen to the content in real time or download it for offline use, ensuring
accessibility in various contexts.
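
The sketch below ties these steps together into one function, assuming pyttsx3 as the synthesis back end; it is an illustration of the methodology, not the project's exact implementation.

import pyttsx3

def text_to_speech(text, rate=150, out_file=None):
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)         # step 4: real-time customization
    if out_file:
        engine.save_to_file(text, out_file)  # step 5: save as an audio file
    else:
        engine.say(text)                     # step 5: real-time playback
    engine.runAndWait()

text_to_speech("Welcome to the Text-to-Speech converter.", out_file="speech.wav")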

CHAPTER-5
Flow Diagram

[The flow diagrams appeared as full-page figures in the original report and are not recoverable in this text version.]

CHAPTER-6
Technology Bucket

6.1 Python
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes
code readability with the use of significant indentation.
Python is dynamically-typed and garbage-collected. It supports multiple programming
paradigms, including structured (particularly procedural), object-oriented and functional
programming. It is often described as a “batteries included” language due to its comprehensive
standard library.
Python’s large standard library provides tools suited to many tasks and is commonly cited as one
of its greatest strengths. For Internet-facing applications, many standard formats and protocols
such as MIME and HTTP are supported. It includes modules for creating graphical user
interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic
with arbitrary-precision decimals, manipulating regular expressions, and unit testing.
Some parts of the standard library are covered by specifications (for example, the Web Server
Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most are specified by their
code, internal documentation, and test suites. However, because most of the
standard library is cross-platform Python code, only a few modules need altering or rewriting for
variant implementations.
Most Python implementations (including CPython) include a read–eval–print loop (REPL),
permitting them to function as a command line interpreter for which users enter statements
sequentially and receive results immediately.
Python also comes with an Integrated development environment (IDE) called IDLE, which is more
beginner-oriented.
Other shells, including IDLE and IPython, add further abilities such as improved autocompletion,
session state retention, and syntax highlighting.
As well as standard desktop integrated development environments, there are Web browser-based
IDEs, including SageMath, for developing science- and math-related programs; PythonAnywhere,
a browser-based IDE and hosting environment; and Canopy IDE, a commercial IDE
emphasizing scientific computing.

6.2 Machine Learning

Machine learning (ML) is a field of inquiry devoted to understanding and building methods that
'learn', that is, methods that leverage data to improve performance on some set of tasks. It is seen
as a part of artificial intelligence. Machine learning algorithms build a model based on sample
data, known as training data, in order to make predictions or decisions without being explicitly
programmed to do so. Machine learning algorithms are used in a wide variety of applications,
such as in medicine, email filtering, speech recognition, agriculture, and computer vision, where it is
difficult or unfeasible to develop conventional algorithms to perform the needed tasks. A subset
of machine learning is closely related to computational statistics, which focuses on making
predictions using computers, but not all machine learning is statistical learning. Data mining is a
related field of study, focusing on exploratory data analysis through unsupervised learning. Some
implementations of machine learning use data and neural networks in a way that mimics the
working of a biological brain. In its application across business problems, machine learning is
also referred to as predictive analytics.
Machine learning and data mining often employ the same methods and overlap significantly, but
while machine learning focuses on prediction, based on known properties learned from the
training data, data mining focuses on the discovery of (previously) unknown properties in the
data (this is the analysis step of knowledge discovery in databases). Data mining uses many
machine learning methods, but with different goals; on the other hand, machine learning also
employs data mining methods as "unsupervised learning" or as a preprocessing step to improve
learner accuracy. Much of the confusion between these two research communities (which do
often have separate conferences and separate journals, ECML PKDD being a major exception)
comes from the basic assumptions they work with: in machine learning, performance is usually
evaluated with respect to the ability to reproduce known knowledge, while in knowledge
discovery and data mining (KDD) the key task is the discovery of previously unknown
knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method
will easily be outperformed by other supervised methods, while in a typical KDD task,
supervised methods cannot be used due to the unavailability of training data.

6.3 Decision Tree Regressor

Decision tree learning is a supervised learning approach used in statistics, data mining and
machine learning. In this formalism, a classification or regression decision tree is used as a
predictive model to draw conclusions about a set of observations.
Tree models where the target variable can take a discrete set of values are called classification
trees; in these tree structures, leaves represent class labels and branches represent conjunctions of
features that lead to those class labels. Decision trees where the target variable can take continuous
values (typically real numbers) are called regression trees. Decision trees are among the most
popular machine learning algorithms given their intelligibility and simplicity.
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and
decision making. In data mining, a decision tree describes data (but the resulting classification tree
can be an input for decision making). Decision tree learning is a method commonly used in data
mining. The goal is to create a model that predicts the value of a target variable based on several
input variables.
A decision tree is a simple representation for classifying examples. For this section, assume that all
of the input features have finite discrete domains, and there is a single target feature called the
"classification". Each element of the domain of the classification is called a class. A decision tree
or
a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature.
The arcs coming from a node labeled with an input feature are labeled with each of the possible
values of the target feature or the arc leads to a subordinate decision node on a different input
feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes,
signifying that the data set has been classified by the tree into either a specific class, or into a
particular probability distribution (which, if the decision tree is well-constructed, is skewed
towards certain subsets of classes).
A tree is built by splitting the source set, constituting the root node of the tree, into subsets—which
constitute the successor children. The splitting is based on a set of splitting rules based on
classification features. This process is repeated on each derived subset in a recursive manner called
recursive partitioning. The recursion is completed when the subset at a node has all the same
values of the target variable, or when splitting no longer adds value to the predictions. This process
of top-down induction of decision trees (TDIDT) is an example of a greedy algorithm, and it is by
far the most common strategy for learning decision trees from data.
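
A minimal scikit-learn sketch of a regression tree on synthetic data is given below; max_depth caps the recursive partitioning so splitting stops before the tree overfits.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))            # one input feature
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)  # noisy continuous target

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)  # depth-limited recursive partitioning
print(tree.predict([[2.5]]))  # piecewise-constant prediction for a new input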

6.4 Artificial Neural Network

Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets, are
computing systems inspired by the biological neural networks that constitute animal brains.
An ANN is based on a collection of connected units or nodes called artificial neurons, which
loosely model the neurons in a biological brain. Each connection, like the synapses in a biological
brain, can transmit a signal to other neurons. An artificial neuron receives signals then processes
them and can signal neurons connected to it. The "signal" at a connection is a real number, and the
output of each neuron is computed by some non-linear function of the sum of its inputs. The
connections are called edges. Neurons and edges typically have a weight that adjusts as learning
proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons
may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold.
Typically, neurons are aggregated into layers. Different layers may perform different
transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer
(the output layer), possibly after traversing the layers multiple times. Neural networks learn (or are
trained) by processing examples, each of which contains a known "input" and "result," forming
probability-weighted associations between the two, which are stored within the data structure of
the net itself. The training of a neural network from a given example is usually conducted by
determining the difference between the processed output of the network (often a prediction) and a
target output. This difference is the error. The network then adjusts its weighted associations
according to a learning rule and using this error value. Successive adjustments will cause the
neural network to produce output which is increasingly similar to the target output. After a
sufficient number of these adjustments the training can be terminated based upon certain criteria.
This is known as supervised learning.
Such systems "learn" to perform tasks by considering examples, generally without being
programmed with task-specific rules. For example, in image recognition, they might learn to
identify images that contain cats by analyzing example images that have been manually labeled as
"cat" or
"no cat" and using the results to identify cats in other images. They do this without any prior
knowledge of cats, for example, that they have fur, tails, whiskers, and cat-like faces. Instead, they
automatically generate identifying characteristics from the examples that they process.
ANNs are composed of artificial neurons which are conceptually derived from biological neurons.
Each artificial neuron has inputs and produces a single output which can be sent to multiple other
neurons. The inputs can be the feature values of a sample of external data, such as images or
documents, or they can be the outputs of other neurons. The outputs of the final output neurons of
the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the
weights of the connections from the inputs to the neuron. We add a bias term to this sum. This
weighted sum is sometimes called the activation. This weighted sum is then passed through a
(usually nonlinear) activation function to produce the output. The initial inputs are external data,
such as images and documents. The ultimate outputs accomplish the task, such as recognizing an
object in an image.

The neurons are typically organized into multiple layers, especially in deep
learning. Neurons of one layer connect only to neurons of the immediately preceding and
immediately following layers. The layer that receives external data is the input layer. The layer
that produces the ultimate result is the output layer. In between them are zero or more hidden
layers. Single layer and unlayered networks are also used. Between two layers, multiple
connection patterns are possible. They can be 'fully connected', with every neuron in one layer
connecting to every neuron in the next layer. They can be pooling, where a group of neurons in
one layer connect to a single neuron in the next layer, thereby reducing the number of neurons in
that layer. Neurons with only such connections form a directed acyclic graph and are known as
feedforward networks. Alternatively, networks that allow connections between neurons in the
same or previous layers are known as recurrent networks.

Learning is the adaptation of the
network to better handle a task by considering sample observations. Learning involves adjusting
the weights (and optional thresholds) of the network to improve the accuracy of the result. This is
done by minimizing the observed errors. Learning is complete when examining additional
observations does not usefully reduce the error rate. Even after learning, the error rate typically
does not reach 0. If after learning, the error rate is too high, the network typically must be
redesigned. Practically this is done by defining a cost function that is evaluated periodically during
learning. As long as its output continues to decline, learning continues. The cost is frequently
defined as a statistic whose value can only be approximated. The outputs are actually numbers, so
when the error is low, the difference between the output (almost certainly a cat) and the correct
answer (cat) is small. Learning attempts to reduce the total of the differences across the
observations. Most learning models can be viewed as a straightforward application of optimization
theory and statistical estimation.
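
A worked NumPy example of the forward pass described above: the weighted sum of the inputs plus a bias (the "activation") is passed through a nonlinear activation function, here the logistic sigmoid.

import numpy as np

def neuron(inputs, weights, bias):
    activation = np.dot(weights, inputs) + bias  # weighted sum of inputs plus bias
    return 1.0 / (1.0 + np.exp(-activation))     # sigmoid squashes the result into (0, 1)

x = np.array([0.5, -1.2, 3.0])  # feature values of one sample
w = np.array([0.4, 0.1, -0.6])  # learned connection weights
print(neuron(x, w, bias=0.2))   # the neuron's single output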

6.5 NumPy

NumPy is a library for the Python programming language, adding support for large,
multidimensional arrays and matrices, along with a large collection of high-level mathematical
functions to operate on these arrays. The ancestor of NumPy, Numeric, was
originally created by Jim Hugunin with contributions from several other developers. In 2005,
Travis Oliphant created NumPy by incorporating features of the competing Numarray into
Numeric, with extensive modifications. NumPy is open-source software and has many
contributors. NumPy is a NumFOCUS fiscally sponsored project.
NumPy targets the CPython reference implementation of Python, which is a non-
optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often
run much slower than compiled equivalents due to the absence of compiler optimization. NumPy
addresses the slowness problem partly by providing multidimensional arrays and functions and
operators that operate efficiently on arrays; using these requires rewriting some code, mostly inner
loops, using NumPy.
Using NumPy in Python gives functionality comparable to MATLAB since they are both
interpreted, and they both allow the user to write fast programs as long as most operations work on
arrays or matrices instead of scalars. In comparison, MATLAB boasts a large number of additional
toolboxes, notably Simulink, whereas NumPy is intrinsically integrated with Python, a more
modern and complete programming language. Moreover, complementary Python packages are
available; SciPy is a library that adds more MATLAB-like functionality and Matplotlib is a
plotting package that provides MATLAB-like plotting functionality. Internally, both MATLAB
and NumPy rely on BLAS and LAPACK for efficient linear algebra computations.
Python bindings of the widely used computer vision library OpenCV utilize NumPy arrays to store
and operate on data. Since images with multiple channels are simply represented as
three-dimensional arrays, indexing, slicing or masking with other arrays are very efficient ways to
access specific pixels of an image. Using the NumPy array as the universal data structure in OpenCV
for images, extracted feature points, filter kernels and much more vastly simplifies the programming
workflow and debugging.
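
A short example of the array-oriented style described above, using slicing on a three-dimensional image array instead of explicit Python loops:

import numpy as np

image = np.zeros((4, 4, 3), dtype=np.uint8)  # a 3-channel image as a 3-D array
image[1:3, 1:3] = [255, 0, 0]                # slicing paints a red block in one step
print(image.mean(axis=(0, 1)))               # per-channel mean with no inner loop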

6.6 Pandas

Pandas is a software library written for the Python programming language for data manipulation
and analysis. In particular, it offers data structures and operations for manipulating numerical
tables and time series. It is free software released under the three-clause BSD license. The name
is derived from the term "panel data", an econometrics term for data sets that include observations
over multiple time periods for the same individuals. Its name is a play on the phrase "Python data
analysis" itself. Wes McKinney started building what would become pandas at AQR Capital
while he was a researcher there from 2007 to 2010.

Pandas is a Python library for data analysis. Started by Wes McKinney in 2008 out of a need for a
powerful and flexible quantitative analysis tool, pandas has grown into one of the most popular
Python libraries. It has an extremely active community of contributors.

Pandas is built on top of NumPy for mathematical operations and integrates closely with
matplotlib for data visualization. Pandas acts as a wrapper over these libraries, allowing you to access
many of matplotlib's and NumPy's methods with less code. For instance, pandas' .plot() combines
multiple matplotlib methods into a single method, enabling you to plot a chart in a few lines.
Before pandas, most analysts used Python for data munging and preparation, and then switched to
a more domain-specific language like R for the rest of their workflow. Pandas introduced two new
types of objects for storing data that make analytical tasks easier and eliminate the need to switch
tools: Series, which have a list-like structure, and DataFrames, which have a tabular structure.

Pandas is mainly used for data analysis and the associated manipulation of tabular data in
DataFrames. Pandas allows importing data from various file formats such as comma-separated values,
JSON, Parquet, SQL database tables or queries, and Microsoft Excel. Pandas allows various data
manipulation operations such as merging, reshaping, and selecting, as well as data cleaning and data
wrangling features. The development of pandas introduced into Python many comparable features
for working with DataFrames that were established in the R programming language. The pandas
library is built upon another library, NumPy, which is oriented toward efficient work with arrays
rather than the tabular features of DataFrames.
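
A short example of the two structures named above, a Series and a DataFrame:

import pandas as pd

s = pd.Series([72.5, 71.9, 73.2], name="price")            # list-like structure
df = pd.DataFrame({"price": s, "volume": [100, 90, 120]})  # tabular structure
print(df.describe())                                       # summary statistics per column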

6.7 Deep Neural Networks

Deep Neural Networks (DNNs) have become a promising way to inject AI into our daily lives,
from self-driving cars and smartphones to games and drones. In most cases, DNNs were accelerated
by servers equipped with numerous computing engines, e.g., GPUs, but recent technology advances
require energy-efficient acceleration of DNNs as modern applications move down to mobile
computing nodes. Therefore, Neural Processing Unit (NPU) architectures dedicated to
energy-efficient DNN acceleration have become essential. Although the training phase of a DNN
requires precise number representations, many researchers have shown that a smaller bit-precision
is enough for inference with low power consumption. This has led hardware architects to
investigate energy-efficient NPU architectures with diverse HW-SW co-optimization schemes for
inference. This section reviews several design examples of the latest NPU architectures
for DNNs, mainly inference engines, along with new architectural research on
neuromorphic computers and processing-in-memory architectures, and
perspectives on future research directions.

The success of deep learning comes at the cost of very high computational complexity.
Consequently, Internet of Things (IoT) edge nodes typically offload deep learning tasks to
powerful cloud servers, an inherently inefficient solution. In fact, transmitting raw data to the cloud
through wireless links incurs long latencies and high energy consumption. Moreover, pure cloud
offloading is not scalable due to network pressure and poses security concerns related to the
transmission of user data. The straightforward solution to these issues is to perform deep
learning inference at the edge. However, cost- and power-constrained embedded processors
with limited processing and memory capabilities cannot handle complex deep learning models.
Even resorting to hardware acceleration, a common approach to handle such complexity,
embedded devices are still not able to directly manage models designed for cloud servers. It
becomes then necessary to employ proper optimization strategies to enable deep learning
processing at the edge.

CHAPTER-7
Result

[The results were presented as output screenshots in the original report and are not recoverable in this text version.]

CHAPTER-8
Conclusion & Future Work

At this stage, the system is designed to predict the grade of fresh milk. An analysis of the needs of
IoT-based fresh milk grading systems was presented. The use case diagram describes the
functional requirements. System analysis is understood as an individual or
organization applying analytical methods and techniques (scientific, mathematical, statistical,
financial, political, social, cultural, et cetera) to provide meaningful data that supports informed
decision-making by mission planners, system operators, and system maintainers.

Unified Modelling Language can be used to analyze and design a smart grading system appropriately. This
modelling provides precise information on each system used. In this stage, the system is
designed to illustrate the functional requirements of the classification smart grading system. This study
agrees that Unified Modelling Language (UML) is a system development method consisting
of three stages: problem identification, system analysis, and system design. This study described
the activity used to classify the quality of fresh milk. A milk farmer sent fresh milk, which was then
tested for specific gravity at the cooperative. The IoT system monitored the temperature of
fresh milk in real time. The quality data of fresh milk was collected and classified by the ANN
model.

Smart grading using sensor tools aims to make it easier to determine the quality of
fresh milk. One advantage of a smart grading system is an information system that
determines the grade of fresh milk per farmer, which helps stakeholders set the purchase
price of fresh milk. This information system uses machine learning algorithms to make
predictions from complex input data.

IMPLEMENTATION CODE USING DECISION TREE REGRESSOR
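
[The original listing appeared as page images in the source document and did not survive text extraction.] A hedged reconstruction sketch of the named technique follows; the file name "data.csv" and the "grade" target column are hypothetical placeholders, not the project's actual code.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv("data.csv")    # hypothetical dataset
X = df.drop(columns=["grade"])  # hypothetical target column
y = df["grade"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = DecisionTreeRegressor(max_depth=5, random_state=42).fit(X_train, y_train)
print("Test MSE:", mean_squared_error(y_test, model.predict(X_test)))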

USING DNN CLASSIFIER MODEL
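
[The original listing appeared as page images in the source document and did not survive text extraction.] A hedged sketch of a small dense (DNN) classifier in Keras follows; the layer sizes and synthetic data are assumptions, not the original model.

import numpy as np
from tensorflow import keras

X = np.random.rand(500, 8).astype("float32")  # synthetic placeholder features
y = (X.sum(axis=1) > 4.0).astype("float32")   # synthetic binary labels

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training data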
