INTERNSHIP REPORT - Vivek Payla
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
By
Vivek Payla (1901920100331)
Under the Supervision of
IoTIoT.in
2022-2023
G.L. BAJAJ INSTITUTE OF TECHNOLOGY & MANAGEMENT,
GREATER NOIDA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
Certificate of Internship
ACKNOWLEDGEMENT
I would also like to acknowledge all the people who worked along with me at
IoTIoT.in, IoTIoT Innovation Lab-3 IoT C.O.E, C.O.E.P's Bhau Institute of
Entrepreneurship and Leadership, beside the COEP Boat Club, Shivajinagar,
Pune, Maharashtra 411005; with their patience and openness they created an
enjoyable working environment.
I would like to thank Mr. Megh Singhal, internship coordinator, for his
support and advice in obtaining and completing the internship at the above
organization.
Vivek Payla
(1901920100331)
ABSTRACT
In recent years, machine learning has come to play a vital role in our everyday
lives: it can help us route somewhere, find something we were not aware of, or
schedule appointments in seconds. On the other side of the coin, mobile phones
are evolving and competing in the same field. Taking an optimistic view, by
applying machine learning on our mobile devices we can make our lives better
and even move society forward.
Image classification is one of the most common and trending topics in machine
learning. Among the many types of deep learning models, quantised pre-trained
MobileNetV2 models have demonstrated high performance on image
classification; they are built from multiple processing layers that learn
representations of data at many levels of abstraction, making them among the
best AI models of recent years. Here, we use a pre-trained quantised
MobileNetV2 model, carry out image-classification experiments on the
ImageNet dataset, run inference on a development system, and finally deploy
the model on BrainyPi.
INDEX
1. Introduction
1.1 Objectives
2. Machine Learning
2.1 Supervised Learning
2.2 Unsupervised Learning
3. Deep Learning
3.1 Neural Networking
4. Tensorflow
4.1 Tensorflow Lite
4.2 Tensorflow Lite Inference
4.2.1 Load & Run model in Python
5. MobileNet
5.1 Model Comparison
6. Tools & Technology Used
6.1 Python
6.2 Google Colab
6.3 VS Code for Python
6.4 Command Prompt
6.5 Brainypi
7. Project Based On My Learning
7.1 Implementation On Brainypi
8. Conclusion
9. References
1. INTRODUCTION
1.1 OBJECTIVES
● To implement code
By learning how to code in Python, it is easy to run inference on
different images; we can make amendments as needed and check how
the accuracy differs.
2. MACHINE LEARNING
Machine learning tasks are typically classified into two broad categories,
depending on whether there is a learning "signal" or "feedback" available to the
learning system: supervised learning and unsupervised learning.
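As an illustrative sketch of the two categories (a toy example with NumPy, not taken from the report), note that the presence or absence of labels is what distinguishes them:

```python
import numpy as np

# Supervised learning: each input x comes with a label y (the "signal"),
# and the learner fits a mapping from x to y. Here a least-squares fit
# recovers the slope of the labelled data y = 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x
slope = (x @ y) / (x @ x)
print(slope)  # 2.0

# Unsupervised learning: no labels are given; the learner looks for
# structure in the inputs alone, e.g. splitting points into two groups
# around the overall mean.
points = np.array([0.1, 0.2, 5.1, 5.3])
groups = points > points.mean()
print(groups)  # [False False  True  True]
```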
3. DEEP LEARNING
Deep learning is a subset of machine learning. Usually, when people use the
term deep learning, they are referring to deep artificial neural networks, and
somewhat less frequently to deep reinforcement learning.
4. TENSORFLOW
TensorFlow is a free and open-source software library for machine learning and
artificial intelligence. It can be used across a range of tasks but has a particular
focus on the training and inference of deep neural networks.
Its flexible architecture allows for the easy deployment of computation across a
variety of platforms (CPUs, GPUs), and from desktops to clusters of servers to
mobile and edge devices.
4.1 Tensorflow Lite
TensorFlow Lite converts a trained TensorFlow model into a special, compact
format (.tflite). This special-format model can be deployed on edge devices
such as mobiles running Android or iOS, Linux-based embedded devices like
the Raspberry Pi, or microcontrollers, to make the inference at the edge.
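The conversion step can be sketched as follows. This is a minimal, self-contained illustration: the tiny Keras model here is a stand-in assumption, not the report's MobileNetV2, and the filename model.tflite is chosen for the example.

```python
import tensorflow as tf

# Illustrative stand-in model (the report would use a pre-trained
# quantised MobileNetV2 instead of this tiny dense network).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the Keras model to the compact TFLite flat-buffer format
# that edge devices consume.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # serialised flat-buffer bytes

# Write out the model file that the on-device interpreter will load.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```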
4.2 Tensorflow Lite Inference
The term inference refers to the process of executing a TensorFlow Lite model on-
device in order to make predictions based on input data. To perform an inference with
a TensorFlow Lite model, you must run it through an interpreter. The TensorFlow
Lite interpreter is designed to be lean and fast. The interpreter uses a static graph
ordering and a custom (less-dynamic) memory allocator to ensure minimal load,
initialization, and execution latency.
1. Loading a model
You must load the .tflite model into memory, which contains the model's
execution graph.
2. Transforming data
Raw input data generally does not match the input data format expected by
the model. For example, you might need to resize an image or change the
image format to be compatible with the model.
3. Running inference
This step involves using the TensorFlow Lite API to execute the model. It
involves a few steps such as building the interpreter and allocating tensors,
as described in the following section.
4. Interpreting output
When you receive results from the model inference, you must interpret the
tensors in a way that is meaningful and useful in your application.
4.2.1 Load and run a model in Python
The Python API for running an inference is provided in the tf.lite module. From
it, you mostly need only tf.lite.Interpreter to load a model and run an inference.
The following example shows how to use the Python interpreter to load a .tflite
file and run inference with random input data.
Code :
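The report's original listing is not reproduced here; the following is a minimal self-contained sketch of the same workflow. The tiny Keras model built in the first lines is an assumption made so the snippet runs without an external file; in the report the bytes would come from the pre-trained quantised MobileNetV2 .tflite file instead.

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the sketch is self-contained (not the report's model).
keras_model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(keras_model).convert()

# 1. Load the model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# 2. Transform data: shape random input to what the model expects.
input_shape = input_details[0]["shape"]
input_data = np.random.random_sample(input_shape).astype(np.float32)

# 3. Run inference.
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# 4. Interpret output: read the output tensor.
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data.shape)
```

When loading from a file instead, pass model_path="model.tflite" to tf.lite.Interpreter in place of model_content.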
5. MobileNet
MobileNets are CNNs, which means that they learn parameters in convolutional
kernels that are convolved across their inputs. This approach allows the network to
identify features that may indicate ‘person’-ness, ‘car’-ness and ‘neither’-ness. The
use of CNNs instead of fully connected networks makes the model robust to
translations of objects in images, maintains an explicit hierarchical representation of
features, and requires fewer parameters. The last point is the most important in this
use case, as it allows high performance with fewer parameters to store on the edge
device. MobileNets further improve computational efficiency by using depthwise
convolutions combined with pointwise convolutions (depthwise separable
convolutions), which sharply reduce the number of multiplications and parameters
compared to standard convolutions and so improve inference performance.
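The parameter saving from depthwise separable convolutions can be sketched with a quick back-of-the-envelope calculation (the layer sizes below are illustrative, not taken from the report):

```python
# Parameters of one convolutional layer, for a k x k kernel,
# c_in input channels and c_out output channels (bias terms ignored).
k, c_in, c_out = 3, 32, 64

standard = k * k * c_in * c_out    # dense convolution: 18432 parameters
depthwise = k * k * c_in           # one k x k filter per input channel: 288
pointwise = c_in * c_out           # 1x1 convolution mixing channels: 2048
separable = depthwise + pointwise  # 2336 parameters in total

print(standard / separable)        # roughly 8x fewer parameters
```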
5.1 Model Comparison
We also present the final sizes of the unquantised and quantised models for
MobileNetV1 and MobileNetV2. Since data augmentation does not change the
number of parameters in a model and is not used at inference time, there is no
difference in size between models that use data augmentation and their
counterparts. We also observe that data augmentation barely improves the
performance of MobileNetV1. This can be explained by observing that the model
variances are low, so data augmentation is unlikely to help, as generalisation is not
the problem. On the other hand, our MobileNetV1 results indicate a high-bias issue,
so we decided to increase the size of the model (and change architecture) to
MobileNetV2. We observe that the larger MobileNetV2 without data augmentation
gives a 1.3% improvement over the smaller MobileNetV1 without data
augmentation.
6. Tools & Technology used
Various tools and technology were used during our internship training.
Some of them are listed below:
6.1 PYTHON
Python is a high-level, interpreted, interactive and object-oriented scripting
language. Python is designed to be highly readable. It uses English keywords
frequently whereas other languages use punctuation, and it has fewer syntactical
constructions than other languages.
Python is interpreted: Python is processed at runtime by the interpreter.
You do not need to compile your program before executing it. This is
similar to Perl and PHP.
Python is Interactive: You can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
6.2 GOOGLE COLAB
Colab is a free Jupyter notebook environment that runs entirely in the cloud.
Most importantly, it does not require a setup and the notebooks that you create
can be simultaneously edited by your team members - just the way you edit
documents in Google Docs. Colab supports many popular machine learning
libraries which can be easily loaded in your notebook.
As a programmer, you can perform the following using Google Colab.
• Write and execute code in Python
• Document your code that supports mathematical equations
• Create/Upload/Share notebooks
• Import/Save notebooks from/to Google Drive
• Import/Publish notebooks from GitHub
• Import external datasets e.g. from Kaggle
• Integrate PyTorch, TensorFlow, Keras, OpenCV
• Free Cloud service with free GPU
6.3 VS CODE FOR PYTHON
Working with Python in Visual Studio Code, using the Microsoft Python
extension, is simple, fun, and productive. The extension makes VS Code an
excellent Python editor, and works on any operating system with a variety of
Python interpreters. It leverages all of VS Code's power to provide auto
complete and IntelliSense, linting, debugging, and unit testing, along with the
ability to easily switch between Python environments, including virtual and
conda environments.
6.4 COMMAND PROMPT
A command prompt is the input field in a text-based user interface screen for an
operating system (OS) or program. The prompt is designed to elicit an action.
The command prompt consists of a brief text string followed by a blinking
cursor, which is where the user types command prompt commands.
Command-line interfaces (CLI) and prompts were the standard interface for
computers from the early days of computing into the 1980s. Microsoft MS-DOS
systems and other early consumer-based computers used CLIs. Current
Windows systems offer the CLI for administrative tasks. The CLI is still an
essential part of the Linux OS.
6.5 BRAINYPI
An enterprise-grade device for AI-on-Edge and IoT needs. It is an improvement
over the Raspberry Pi; more information is not provided for privacy reasons.
7. Project Based On My Learning
VS CODE Output.
7.1 Implementation on Brainypi Server
8. CONCLUSION
9. REFERENCES
• https://www.tensorflow.org/lite/guide/inference
• https://www.tensorflow.org/lite/models/trained
• https://helloworld.co.in/article/image-classification-tensorflow-lite
• https://levelup.gitconnected.com/custom-image-classification-model-using-tensorflow-lite-model-maker-68ee4514cd45
• https://androidapps-development-blogs.medium.com/image-classification-android-app-with-tensorflow-lite-for-beginner-a793655f5a0a
• https://www.tensorflow.org/lite/models/modify/model_maker