
A COMPARATIVE STUDY ON TOUCHSCREEN AND

TOUCHLESS TOUCHSCREEN TECHNOLOGY

TECHNICAL SEMINAR REPORT

Submitted by
B.Nishitha
(19R11A1205)
in partial fulfillment for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY

GEETHANJALI COLLEGE OF ENGINEERING AND TECHNOLOGY

Cheeryal (V), Keesara (M), Medchal District, Hyderabad – 501 301


(Affiliated to Jawaharlal Nehru Technological University, Hyderabad, accredited by NAAC and NBA,
New Delhi)

2019-2023
ACKNOWLEDGEMENT

I would like to thank Geethanjali College of Engineering and Technology for giving me the opportunity to do a Technical Seminar within the organization. I would also like to thank all the people at Geethanjali College of Engineering and Technology who worked alongside me; with their patience and openness they created an enjoyable working environment. It is indeed with a great sense of pleasure and immense gratitude that I acknowledge the help of these individuals.

I am highly indebted to Principal Dr. Udaya Kumar Susarla for the facilities provided to accomplish this Technical Seminar. I would like to thank my Head of the Department, Dr. K. Srinivas, for his constructive criticism throughout my Technical Seminar.

With regards,
B.NISHITHA
(19R11A1205)

GEETHANJALI COLLEGE OF ENGINEERING AND TECHNOLOGY
Department of INFORMATION TECHNOLOGY

CERTIFICATE

This is to certify that the technical seminar report titled A Comparative Study on Touchscreen and Touchless Touchscreen Technology, being submitted by B. Nishitha, bearing roll number 19R11A1205, in partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in Information Technology, is a record of bonafide work carried out under the guidance and supervision of Geethanjali College of Engineering and Technology.

Examiner: Dr. K. Srinivas Rao


Name:
Designation:

DECLARATION

This is to certify that the project work entitled “TOUCHLESS TOUCHSCREEN TECHNOLOGY”, submitted to JNTUH in partial fulfillment of the requirement for the award of the Degree of Bachelor of Technology (B.Tech), is an original work carried out by B.NISHITHA (19R11A1205) under the guidance of Mr. P. Manohar, Associate Professor in the Department of Information Technology. The matter embodied in this project is genuine work done by the student and has not been submitted to this university or to any other university/institute for the fulfillment of the requirements of any course of study.

B.NISHITHA (19R11A1205)

ABSTRACT

It was touch screens that initially created a great furore. Gone are the days when you had to fiddle with a touch screen and end up scratching it. Touch screen displays are ubiquitous worldwide. Frequently touching a touchscreen display with a pointing device such as a finger can result in the gradual desensitization of the touchscreen to input and can ultimately lead to failure of the touchscreen. To avoid this, a simple user interface for touchless control of electrically operated equipment is being developed. Elliptic Labs' innovative technology lets you control gadgets such as computers, MP3 players, or mobile phones without touching them. Unlike other systems, which depend on the distance to the sensor or on sensor selection, this system depends on hand and finger motions: a hand wave in a certain direction, a flick of the hand in one area, holding the hand in one area, or pointing with one finger.
The device is based on optical pattern recognition using a solid-state optical matrix sensor with a lens to detect hand motions. This sensor is connected to a digital image processor, which interprets the patterns of motion and outputs the results as signals to control fixtures, appliances, machinery, or any device controllable through electrical signals.

CONTENTS

1. Introduction
2. Literature Survey
3. Methodology
   3.1. Existing System
   3.2. Proposed System
4. Motivation
5. Block Diagram
6. Algorithms Used
7. Technology Used
8. Results and Discussion
9. Applications
10. Conclusion
11. Future Enhancement
12. Bibliography

LIST OF FIGURES

FIG. NO   FIGURE NAME

3.1.1   Current in-car interfaces distract drivers
3.1.2   Workflow of multi-sensor system
3.2.1   Optical matrix sensor, matrix of pixels, photodiode
3.2.2   Hand poses along with labels
3.2.3   Dynamic poses of the image along with labels
4.1     Touchless Monitor
4.2     Touch Wall
4.3     Touchless User Interface
4.4     Touchless Software Development Kit

1. INTRODUCTION

The touchless touch screen sounds like it would be nice and easy, but on closer examination it looks like it could be quite a workout. This unique screen is made by TouchKo, White Electronics Designs, and Groupe 3D. The screen resembles the Nintendo Wii without the Wii controller. With the touchless touch screen your hand doesn't have to come in contact with the screen at all; it works by detecting your hand movements in front of it. This is a pretty unique and interesting invention, until you break out in a sweat. This technology doesn't compare to the hologram-like IO2 Technologies Heliodisplay M3, but that's for anyone who has $18,100 lying around.

You probably won't see this screen in stores any time soon. Everybody loves a touch screen, and the first experience with a touch screen gadget is really exhilarating. When the iPhone was introduced, everyone felt the same. But gradually the exhilaration started fading: using the phone with a fingertip or a stylus left the screen covered in fingerprints and scratches. Even with a screen protector, dirty marks on such a beautiful glossy screen are a strict no-no. The same thing happens with the iPod Touch.

2. LITERATURE SURVEY

 An approach for gesture detection and localization based on multi-scale and multi-modal deep learning is used by N. Neverova. Each visual modality captures spatial information at its own spatial scale, and the whole system operates at two temporal scales. Training follows a gradual procedure that exploits i) careful initialization of the individual modalities and ii) progressive fusion of the modalities, from the strongest to the weakest cross-modality structure. In hand-gesture recognition, a range of prominent image features is commonly used to discriminate gestures: appearance and spatiotemporal filter responses are extracted to separate expressions, and these features are then fed into a number of classification frameworks to recognize the gesture labels.

 P. Molchanov employs convolutional deep neural networks to fuse data originating from more than one sensor and to classify gestures. The algorithm distinguishes images captured inside and outside the car, in daylight and in darkness, and consumes less power than comparable techniques. Hand gestures used with graphical interfaces, such as those in cars, can reduce visual and cognitive distraction, and may also improve safety and comfort.

 D. Wu proposes a semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) for joint gesture segmentation and recognition, where skeletal joint information, together with depth and RGB images, forms the multimodal observations. The method learns high-level spatiotemporal representations using neural networks suited to each input modality.
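The HMM machinery behind such a framework can be illustrated with a small Viterbi decoder, which recovers the most likely sequence of hidden gesture phases from observations. This is a generic sketch with made-up probabilities, not the paper's actual model:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state path for an observation sequence.

    start: (S,) initial state probabilities
    trans: (S, S) state transition matrix
    emit:  (S, O) emission probabilities
    obs:   sequence of observation indices
    """
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans)   # (from_state, to_state)
        back.append(scores.argmax(axis=0))       # best predecessor per state
        logp = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(logp.argmax())]
    for b in reversed(back):                     # backtrack to the start
        path.append(int(b[path[-1]]))
    return path[::-1]

# Two hypothetical gesture phases ("rest", "move") and two observation types.
start = np.array([0.8, 0.2])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi([0, 0, 1, 1], start, trans, emit)
```

With these probabilities the decoder segments the sequence into a "rest" phase followed by a "move" phase, which is the kind of joint segmentation-and-recognition the HMM performs.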

3. METHODOLOGY

3.1 EXISTING SYSTEM

Current in-car interfaces are not touchless: the driver has to physically press buttons or touch the in-car touchscreen to operate it. This interferes with the driver's limited attention to driving and can lead to accidents. The existing approach therefore organizes multiple sensors for accurate and power-efficient dynamic hand-gesture recognition: a short-range radar, a color camera, and a depth camera, which in combination make the process robust against fluctuating lighting conditions. A procedure jointly calibrates the radar and the cameras.

Researchers use convolutional deep neural networks to integrate the inputs from more than one sensor and to classify the gestures; this consumes moderately less power than purely vision-based arrangements. An RGB-D image is simply an RGB image paired with a depth image. The depth image is a single-channel map in which every pixel encodes the distance between the image plane and the corresponding object in the RGB image. A Kinect can be used to capture such RGB-D images. If Kinect-like hardware is not available, the depth must instead be estimated from two or more views of the same scene, which comes down to a computer-vision problem of its own. Another option is to assemble training data consisting of RGB-D images and to apply neural-network techniques.

Fig 3.1.1: Current in-car interfaces distract drivers

WORKING:

The existing system creates an easily deployed setup in which activity is continuously monitored and attention is distributed across the subjects being observed. The arrangement uses four calibrated cameras installed in the monitored suite and a body-mounted mobile accelerometer on each individual, exploiting the characteristics of the different sensors to widen recognition confidence and to improve scalability and reliability. The algorithms on which the system relies, together with its network, are aimed at analyzing and classifying complex movements (such as walking, sitting, jumping, running, or collapsing) of possibly several people simultaneously. A preliminary application targets potentially dangerous activities of each individual in the scene: if such activity is detected and the individual "at risk" is equipped with the accelerometer, the system localizes and activates it to gather information, and then performs a more reliable fall detection with a specially trained classifier.
When the driver makes hand gestures in front of the multi-sensor system, they are captured by different sensors such as an optical camera, a time-of-flight depth sensor, and radar; this also includes the Kinect. Then, using a deep neural network, the images are classified and the output is obtained as a gesture classification. Finally, the classified gestures feed behavior understanding and interaction.

Fig 3.1.2: Workflow of multi-sensor system
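One common way to combine per-sensor outputs like these is late fusion: each sensor's classifier produces class scores, and the averaged probabilities give the final gesture. This is a minimal sketch with hypothetical scores; the actual system fuses modalities inside the network rather than at the output:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(sensor_logits):
    """Average per-sensor class probabilities and pick the best gesture.

    sensor_logits: list of 1-D score arrays, one per sensor
    (e.g. optical camera, depth, radar), all over the same gesture classes.
    """
    probs = np.mean([softmax(s) for s in sensor_logits], axis=0)
    return int(np.argmax(probs)), probs

# Hypothetical raw scores from three sensors over 4 gesture classes.
camera = np.array([2.0, 0.1, 0.3, 0.2])
depth  = np.array([1.5, 0.4, 0.2, 0.1])
radar  = np.array([0.9, 0.8, 0.1, 0.3])
label, probs = late_fusion([camera, depth, radar])
```

Here all three sensors agree, so class 0 wins; when sensors disagree, averaging probabilities lets confident sensors outvote noisy ones.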

DRAWBACKS:

The flaws in this system led to the development of the proposed system: the existing approach requires multiple sensors, and the sensors may capture blurred images when fast hand gestures are made.

3.2 PROPOSED SYSTEM:

Touchless touchscreen technology works as shown in figure 3.2.1. The process senses the gestures made in front of the user interface; in our example, the music-system interface in a car.

Fig 3.2.1: Optical matrix sensor, matrix of pixels, photodiode

Hand motions made in front of the sensors are recorded by the cameras, and these images are recognized. Light enters the optical matrix detector and hits its matrix of pixels. The sensor generates electrical signals: light-sensitive diodes placed inside the sensor transmute the light hitting the sensor into charge. The signals are then processed to provide output to the devices, and the electrical waves are transformed to generate the desired results on a particular touchless screen. The optical matrix sensor is used because it senses 3-dimensional gestures.
A matrix of pixels is present in each of the sensors used. The light-sensitive diodes are paired with charge-storage regions in the detector; this is also called the charge-reading component. The user makes gestures and hand postures, which are captured in the form of a video sequence by the sensor. A single recognizable image can be seen in figure 3.2.2.
Fig 3.2.2: Hand poses along with labels
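The charge read out of the photodiode matrix can be treated as an image, and motion in front of the sensor shows up as frame-to-frame intensity change. A minimal frame-differencing sketch of this idea (illustrative only; the actual sensor's signal processing is more involved):

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=30):
    """Flag pixels whose intensity changed by more than `threshold`.

    Frames are 2-D uint8 arrays standing in for the charge read out
    of the sensor's photodiode matrix.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    return mask, bool(mask.any())

# A 4x4 "sensor": one region brightens sharply between frames.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200  # a hand enters the field of view
mask, moved = detect_motion(prev, curr)
```

The boolean mask marks where the hand appeared; downstream stages would interpret how that region moves over successive frames.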

A single gesture or posture creates 16 samples in the database, and each image in a sample differs from the others. The sequence of the images from beginning to end lets the sensor discover the kind of action made, and the action is labeled accordingly. Once the type of gesture or posture is found, the corresponding action is carried out on the system. These sequences are shown in figure 3.2.3. For example, in an automotive system, phone calls can be answered through the music-system interface. If the driver gets a call while driving, the person would otherwise have to either stop the car and answer the call or ignore it, as it is risky to take a call while driving. To overcome such cases, gesture-recognition and hand-posture-recognition techniques can be implemented.

Fig 3.2.3: Dynamic poses of the image along with labels
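With 16 samples per gesture, one simple way to assign a single label to the whole sequence is to classify each frame and then take a majority vote. A sketch, assuming hypothetical per-frame class ids (the real system also uses the ordering of the frames):

```python
import numpy as np

def label_sequence(frame_labels):
    """Assign one gesture label to a clip by majority vote over its frames.

    frame_labels: per-frame predicted class ids (e.g. 16 samples per gesture).
    """
    values, counts = np.unique(np.asarray(frame_labels), return_counts=True)
    return int(values[np.argmax(counts)])

# 16 per-frame predictions for one hypothetical "answer call" gesture;
# a few frames are misclassified but the vote still recovers the label.
frames = [1, 1, 1, 0, 1, 1, 2, 1, 1, 1, 1, 0, 1, 1, 1, 1]
```

Calling `label_sequence(frames)` yields class 1 despite the three noisy frames, which is why per-gesture sample sets make recognition robust.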

4. MOTIVATION

The coronavirus pandemic has triggered efforts to make everyday life more adaptable to pandemic scenarios. Technologies like these will help reduce the risk of viruses spreading, particularly in public places where touchscreens are common, such as self-service kiosks, ATMs, and vending machines. Touchless technology is a form of electronic control technology that makes it possible for users to control a digital system without any form of physical contact. It enables computer systems to take inputs and instructions in the form of physical movement, facial patterns, voice, and user behavior.

5. BLOCK DIAGRAM

6. ALGORITHMS USED

1. Convolutional Neural Network (CNN): A CNN is a kind of network architecture for deep learning algorithms, used specifically for image recognition and tasks that involve processing pixel data. CNNs are particularly useful for finding patterns in images to recognize objects, classes, and categories. They can also be quite effective for classifying audio, time-series, and signal data.
2. Long-term Recurrent Convolutional Network (LRCN): The LRCN was proposed by Jeff Donahue et al. in 2015. It is a combination of CNN and RNN, end-to-end trainable and suitable for large-scale visual understanding tasks such as video description, activity recognition, and image captioning. The main idea is to use CNNs to learn visual features from video frames and LSTMs to transform the sequence of image embeddings into a class label, a sentence, probabilities, or whatever output is needed.
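The LRCN pipeline described above — per-frame CNN features feeding a recurrent layer whose final state is classified — can be sketched end to end with plain NumPy. The linear-plus-ReLU "backbone", the plain tanh recurrence, and the random weights are stand-ins for illustration, not the published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frame, W_cnn):
    """Stand-in for a CNN backbone: one linear layer + ReLU per frame."""
    return np.maximum(0.0, W_cnn @ frame.ravel())

def lrcn_classify(frames, W_cnn, W_h, W_x, W_out):
    """LRCN idea: CNN features per frame, an RNN over the sequence,
    then class scores from the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for frame in frames:
        x = frame_features(frame, W_cnn)
        h = np.tanh(W_h @ h + W_x @ x)   # simple (non-LSTM) recurrence
    return W_out @ h                     # per-class scores

# Random weights and frames, just to show the shapes flowing end to end.
frames = [rng.standard_normal((8, 8)) for _ in range(5)]
W_cnn = rng.standard_normal((16, 64)) * 0.1
W_h   = rng.standard_normal((10, 10)) * 0.1
W_x   = rng.standard_normal((10, 16)) * 0.1
W_out = rng.standard_normal((4, 10)) * 0.1
scores = lrcn_classify(frames, W_cnn, W_h, W_x, W_out)
```

A real LRCN would replace `frame_features` with a trained convolutional network and the tanh recurrence with an LSTM, but the data flow — frames in, one score vector out — is the same.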

7. TECHNOLOGY USED

 Optical pattern recognition
 Uses a solid-state optical matrix sensor
 The sensor is connected to a digital image processor
 Outputs are obtained as signals

8. RESULTS AND DISCUSSION

The figure indicates the stages of image segregation used to identify the hand gesture. The captured video contains a sequence of images of the hand pose, and the best image is taken from that sequence; to select it, the hand-posture and hand-gesture algorithms are used. The selected image is then carried to the HGR dataset, where different postures of the selected images are stored in the database, and a clear image is obtained from this dataset. A Histogram of Oriented Gradients (HOG) feature is formed from the image selected from the HGR dataset. The gesture formation is found and labeled accordingly, and the user interface responds to the user according to the gesture he or she made.
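The HOG feature mentioned above histograms gradient orientations weighted by gradient magnitude. A simplified single-cell version can be sketched as follows; real HOG additionally divides the image into cells and normalizes over blocks:

```python
import numpy as np

def hog_histogram(image, bins=9):
    """Simplified Histogram of Oriented Gradients over a whole image
    (real HOG works on cells and blocks with normalization)."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A vertical intensity ramp has gradients pointing one way, so the
# descriptor mass concentrates in a single orientation bin (around 90°).
ramp = np.tile(np.arange(10.0), (10, 1)).T
descriptor = hog_histogram(ramp)
```

A hand silhouette produces a characteristic spread of orientations, which is what makes the descriptor useful for distinguishing poses.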
9. APPLICATIONS
TOUCHLESS MONITOR:

The touchless display is intended for uses where a mouse is ineffective or where touch is problematic, such as for surgeons wearing surgical gloves. This TouchKo monitor was recently demonstrated at CeBIT by White Electronic Designs and Tactyl Services. Capacitive sensors on the display can detect movements up to 15–20 cm from the screen, and the software converts these gestures into screen commands.

Fig 4.1: Touchless Monitor

TOUCH WALL:

It is made up entirely of simple hardware. The Touch Wall's related software, Plex, is based on a regular version of Windows Vista. Touch Wall and Plex have a lot in common with Microsoft Surface, a multitouch table computer that was first introduced in 2007 and has only lately gone commercial; it is mostly available in select AT&T stores. The simple mechanical mechanism is also substantially less expensive to manufacture: three infrared lasers scan a surface, and when something breaks through the laser line, a camera records it and feeds it back to the Plex software. Earlier prototypes were created on a cardboard screen, with the Plex interface displayed on the cardboard using a projector, and the solution works perfectly. Touch Wall isn't the first multi-touch product; in addition to the Surface, there are several early prototypes in the works in this arena. What Microsoft has accomplished with a few hundred dollars' worth of gear is nothing short of amazing.

Fig 4.2: Touch Wall

It's also evident that the projector's only true limit is the surface it can cover, so an entire wall can be simply converted into a multi-touch user interface. Instead of using whiteboards in the office, any flat surface can be turned into a touch display.

TOUCHLESS USER INTERFACE:

The basic concept is straightforward: sensors arrayed around the perimeter of the device would be capable of sensing finger movements in 3-D space. The user would be able to use his or her fingers much as on a touch phone, but without having to touch the screen, which is why it's so fascinating.

Fig 4.3: Touchless User Interface

According to emerging technologies and studies in human-computer interaction, touch interaction and mouse input will not be the only widely recognized ways for users to interact with interfaces in the future. In the future, there will be less touching. These new technologies will allow companies and brands to create new forms of media and interfaces to attract their customers' attention (and imagination). They'll make it easier for people to interact with products and media in new ways, boosting brand exposure, adoption, and sales.

TOUCHLESS SOFTWARE DEVELOPMENT KIT:

The term SDK refers to a software development kit: typically a collection of software development tools. It allows developers to augment applications with extra functionality, ads, push notifications, and other features for a specific software package, software framework, hardware platform, computer system, video game, or similar development platform. The Touchless SDK for .NET applications is an open-source SDK. It allows programmers to construct multitouch applications that use a webcam for input: color-based markers defined by the user are tracked, and their data is published to SDK clients through events. "Touch without touching" is now possible.

Fig 4.4: Touchless Software Development Kit
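The color-marker tracking performed by the SDK can be sketched as a threshold-and-centroid step per frame. This is an illustrative NumPy version on a synthetic frame, not the SDK's actual .NET API:

```python
import numpy as np

def track_marker(frame, lower, upper):
    """Find the centroid of pixels whose RGB falls inside a color range.

    frame: (H, W, 3) uint8 image; lower/upper: inclusive RGB bounds.
    Returns (row, col) of the marker centroid, or None if not seen.
    """
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic 20x20 "webcam frame" with a green marker centered at (5, 12).
frame = np.zeros((20, 20, 3), dtype=np.uint8)
frame[4:7, 11:14] = (0, 220, 0)
pos = track_marker(frame, lower=(0, 200, 0), upper=(50, 255, 50))
```

Tracking the centroid from frame to frame yields the marker trajectory that the SDK publishes to clients as events.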

ADVANTAGES:

 The device would last for a long time and is simple and easy to use.
 Because the screen is never touched, a clear image is always visible.
 Because commands are accepted via sensors, such as verbal or hand gestures, the GUI requires less space. As a result, the touch area is reduced and the text quality on the screen improves.
 No screen desensitization occurs.
 Suitable for people with physical disabilities.

DISADVANTAGES:

 High-resolution (HD) cameras are required.
 The public's interaction must be monitored.
 Image processing is extremely sensitive to noise (lens aberrations).
 The initial cost is very high.
 It must be used in a sophisticated environment.
 Very high-speed image processing (software and hardware) is needed.

10. CONCLUSION

The purpose of this report is to provide an overview of touchless touch screen technology. By utilizing this technology, the user gains flexibility in how they use the system, and the maintenance of touch screen displays can be reduced. Because a bulky external sensing element is not required, it can be an extremely low-power solution. The fault rate is also quite low, making the method compact and attractive. Because the market for touchless and gesture recognition is expected to grow dramatically, implementation will be critical. Computers, cell phones, webcams, laptops, and other electronic gadgets can all benefit from touchless screen technology. Perhaps, after a few years, our bodies will serve as a virtual mouse, virtual keyboard, or other input device. While the gadget has potential, it appears that the API that supports it is not yet able to comprehend the entire spectrum of sign language. The controller may now be used to recognize basic signs with some effort, but it is not suitable for complex signs, particularly those that require extensive face or body contact. Because of the significant rotation and line-of-sight obstruction of the digits during conversation, signs become erroneous and indistinguishable, rendering the controller (at this time) unsuitable for communication. When dealing with signs as single entities, however, there is the possibility of training them in artificial neural networks.

11. FUTURE ENHANCEMENT

A. New technology to check virus spread

Through a printing approach, Bengaluru scientists have reported an affordable solution for developing a low-cost touch-cum-proximity sensor. The researchers stated that their work was motivated by a desire to limit the risk of infections spreading, particularly in public settings where touchscreens on self-service kiosks, ATMs, and vending machines are unavoidable.

B. Glamos

Glamos is a unique touchless technology that can transform almost any screen into a touchless one. It detects movement and provides a signal to the gadget, instructing it how to respond. The user does not need to touch the device; instead, the user touches the air within a 180-degree field in front of it. Used with an existing touchscreen, this technology lets you interact with the screen from a distance: you can swipe the air instead of the device, keeping it clean and, more importantly, keeping yourself safe. A smart television can be controlled without a remote by simply tapping the air at the position corresponding to the location on the screen. This can also be used for multitasking: if you don't want to touch your laptop, tablet, or smartphone because your hands are unclean, simply air-touch the screen. Glamos can detect motion within a three-foot radius, and a revolving mirror provides additional coverage. The gadget is compatible with all smart televisions, as well as Android, iOS, and all PCs running Windows 7 and higher.

12. BIBLIOGRAPHY

1. D. Wu, L. Pigou, P. J. Kindermans, N. D. H. Le, L. Shao, J. Dambre, and J. M. Odobez, “Deep dynamic neural networks for multimodal gesture segmentation and recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 8, pp. 1583–1597, 2016.

2. V. John, A. Boyali, S. Mita, M. Imanishi, and N. Sanma, “Deep-learning-based fast hand gesture recognition using representative frames,” in DICTA, 2016.

3. N. Neverova, C. Wolf, G. W. Taylor, and F. Nebout, “Multi-scale deep learning for gesture detection and localization,” in ECCV 2014 Workshops, 2015.

4. P. Molchanov, S. Gupta, K. Kim, and K. Pulli, “Multi-sensor system for driver's hand-gesture recognition,” in FGR, 2015.

5. J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in CVPR, 2015.

