ICDSMLA 2023 Vol 1 Survey Paper (With Conference Title)
Amit Kumar
Vinit Kumar Gunjan
Sabrina Senatore
Yu-Chen Hu Editors
Proceedings of the 5th International Conference on Data Science, Machine Learning and Applications; Volume 1
ICDSMLA 2023, 15–16 December,
Hyderabad, India
Lecture Notes in Electrical Engineering 1273
Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of
Napoli Federico II, Napoli, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México,
Coyoacán, Mexico
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, München,
Germany
Shanben Chen, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai,
China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore,
Singapore, Singapore
Rüdiger Dillmann, University of Karlsruhe (TH) IAIM, Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Dipartimento di Ingegneria dell’Informazione, Sede Scientifica Università degli Studi di
Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid,
Madrid, Spain
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing,
China
Janusz Kacprzyk, Intelligent Systems Laboratory, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Alaa Khamis, Department of Mechatronics Engineering, German University in Egypt El Tagamoa El
Khames, New Cairo City, Egypt
Torsten Kroeger, Intrinsic Innovation, Mountain View, USA
Yong Li, College of Electrical and Information Engineering, Hunan University, Changsha, China
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra,
Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, USA
Subhas Mukhopadhyay, School of Engineering, Macquarie University, Sydney, Australia
Cun-Zheng Ning, Department of Electrical Engineering, Arizona State University, Tempe, USA
Toyoaki Nishida, Department of Intelligence Science and Technology, Kyoto University, Kyoto, Japan
Luca Oneto, Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of
Genova, Genova, Italy
Bijaya Ketan Panigrahi, Department of Electrical Engineering, Indian Institute of Technology Delhi,
New Delhi, India
Federica Pascucci, Department di Ingegneria, Università degli Studi Roma Tre, Roma, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing,
China
Gan Woon Seng, School of Electrical and Electronic Engineering, Nanyang Technological University,
Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, University of Stuttgart, Stuttgart, Germany
Germano Veiga, FEUP Campus, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Haidian District Beijing, China
Walter Zamboni, Department of Computer Engineering, Electrical Engineering and Applied Mathematics,
DIEM—Università degli studi di Salerno, Fisciano, Italy
Kay Chen Tan, Department of Computing, Hong Kong Polytechnic University, Kowloon Tong, Hong Kong
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the
latest developments in Electrical Engineering—quickly, informally and in high
quality. While original research reported in proceedings and monographs has
traditionally formed the core of LNEE, we also encourage authors to submit books
devoted to supporting student education and professional training in the various fields
and applications areas of electrical engineering. The series covers both classical and emerging topics in the field.
For general information about this book series, comments or suggestions, please contact
leontina.dicecco@springer.com.
To submit a proposal or request further information, please contact the
Publishing Editor in your country:
China
Jasmine Dou, Editor (jasmine.dou@springer.com)
India, Japan, Rest of Asia
Swati Meherishi, Editorial Director (Swati.Meherishi@springer.com)
Southeast Asia, Australia, New Zealand
Ramesh Nath Premnath, Editor (ramesh.premnath@springernature.com)
USA, Canada
Michael Luby, Senior Editor (michael.luby@springer.com)
All other Countries
Leontina Di Cecco, Senior Editor (leontina.dicecco@springer.com)
** This series is indexed by EI Compendex and Scopus databases. **
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Singapore Pte Ltd. 2025
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors
or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Power Saving Mechanism for Street Lights System Using IoT . . . . . . . . . . . . . . 164
K. V. Dhanalakshmi, G. Naga Mallika, E. Sai Sruthi, V. Prathika,
R. Susmitha, and G. Bhargavi
Spatial and Temporal Analysis of Land Use and Land Cover (LU/LC)
Analysis by Supervised Classification of Landsat Data . . . . . . . . . . . . . . . . . . . . . 290
Yedla Suneetha and M. Anji Reddy
Predicting the Need for Mental Treatment Across Various Age Groups
Using Machine Learning Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
Surya Kant Pal, Himani Rawat, Rita Roy, Ameer Lareb Khan,
Nisha Kumari, and Prashasya Srivastava
Clustering Mixed Data: Bridging the Gap with Deep Learning . . . . . . . . . . . . . . 695
Harini Yerra, Siddartha Kommu, B. Vijay Kumar, and Rachana Sudam
Detecting Credit Card Theft with Various Machine Learning Methods . . . . . . . . 721
G. Sravani, Ganesh B. Regulwar, G. Sairam, M. Nikitha,
Ch. Sowmya, and Bhaskerreddy Kethireddy
IoT Enabled Smart Street Light and Air Quality Control . . . . . . . . . . . . . . . . . . . 843
Aby K. Thomas, Himanshu Shekhar, L. Bhagyalakshmi,
Sanjay Kumar Suman, R. Sreelakshmy, and Amudala Bhasha
Motor Control in Smart Home Using Raspberry Pi and Node Red . . . . . . . . . . . 854
M. Prasanna and I. V. Subba Reddy
IoT Grounded Anti-Theft Flooring Security System Using Raspberry Pi . . . . . . . 1200
J. R. V. Jeny, K. Nikhil Goud, Divya Sai Leela Amrutha,
and Kishore Azmira
1 Introduction
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025
A. Kumar et al. (Eds.): ICDSMLA 2023, LNEE 1273, pp. 801–810, 2025.
https://doi.org/10.1007/978-981-97-8031-0_85
802 B. Prashanthi et al.
computer vision and machine learning, this project aims to explore the development of a
system that converts eye blink movements into plain text—a transformative concept with
far-reaching implications for assistive technology [13] and human-computer interaction.
The advent of eye-tracking technology has paved the way for precise and unobtrusive
monitoring of eye movements. This technology allows for the capture of subtle blink
patterns, which, when properly analyzed [1], can be translated into meaningful com-
mands or text. By recognizing the unique signatures of blinks, such as their duration,
frequency, and intensity, we can create a system that not only interprets these patterns but
also enables users to communicate fluently through their natural eye movements. This
project seeks to address a critical need within the field of assistive technology. We will
outline the comprehensive pipeline of our computer vision project, encompassing data
acquisition through state-of-the-art eye-tracking hardware, preprocessing of eye move-
ment data [4], machine learning model development, blink pattern recognition, and the
mapping of these patterns to plain text. Furthermore, we will emphasize the importance
of user-friendliness, adaptability, and the practicality of our system, ensuring that it can
cater to the unique needs and preferences of its users.
2 Literature Survey
Eye blink detection and its application in human-computer interaction have garnered
significant interest in recent years. Researchers and engineers have explored various
techniques and methodologies to harness the potential of eye blink movements for com-
munication and control. In this section, we review key studies and advancements in the
field, providing insights into the evolution of eye blink-based systems.
Lou Gehrig’s disease, commonly known as amyotrophic lateral sclerosis (ALS) [10],
is a fatal condition in which motor neurons are selectively damaged. As organ malfunc-
tion develops, patients eventually lose their movement. Patients eventually experience
difficulty performing even tiny movements and making simple sounds. Researchers have
concentrated on eye movements to interact with quadriplegic individuals because the eye
is the sole moving organ for ALS patients [10]. They have looked into ways to detect eye
blinks using either cameras or brainwaves, as well as various techniques for choosing
letters on a screen based on eye movements tracked by eye-tracking cameras. The iden-
tification of intent is frequently erroneous when using brainwave-based approaches,
which look at electrical signals from eye movements to infer a patient’s intent. It’s also
possible to employ a camera-based technique that integrates letter selection to find mov-
ing eye feature points; this technique makes it straightforward to ascertain a patient’s
[10] intentions using a predetermined decision-making process. However, it takes a
while to process and is prone to errors in either the sequential selection processes or
the implementation of Morse code that is distributed across all alphabets. We developed
iMouse-sMc, a streamlined Morse code-based user interface [15] paradigm using an eye
mouse, to make communication with these individuals quicker and easier. Additionally,
we enhanced the eye mouse's detection capabilities by utilizing image contrast methods,
allowing communication with patients even at night. The work in [16] demonstrates
that eye blinks are among the most reliable signals in contemporary human-computer
interaction systems, and it proposes a novel method for recognizing
Morse Code Encryption: Securing Visual Information 803
eye blinks based on template matching and correlation measurements. Before the eye
template is extracted, face detection is employed to prevent the false detections that
would otherwise result from the shifting configuration of the video frame. Because the
correlation score typically changes whenever a blink occurs, eye blink recognition is
driven by that score. The method achieves an overall accuracy of 92% and a precision
of 99% with a 1% false positive rate under a variety of test conditions. In recent years,
human-computer interaction has seen a significant increase in the use of eye blink
localization methods.
For people with motor neuron diseases, eye-based communication languages like
Blink-To-Speak [11] are essential for conveying their needs and feelings. The majority
of developed eye-based monitoring technologies are expensive and complicated for low-
income nations. For patients with speech problems, Blink-To-Live is an eye-tracking
device based on a modified Blink-To-Speak language and computer vision. Real-time
video frames from a mobile phone camera are sent to computer vision modules for eye
recognition and tracking, as well as for detecting face landmarks [5]. The Blink-To-Live
eye-based communication language has four distinct key alphabets: Left, Right, Up, and
Blink. These eye motions represent more than 60 common commands by alternating
between three different eye movement states. Once the eye-motion-encoded sentences
are formed, the translation module shows the phrases in the patient's native language
on the phone screen, and a synthesized voice can be heard. The Blink-To-Live system's
prototype [14] was tested with typical use cases across various demographics. Blink-To-Live
is simpler, more adaptable, and less expensive than previous sensor-based eye-tracking
systems, and it is not dependent on any particular software or hardware.
Timely recognition and continuous health monitoring can save up to 70% of human
lives [11]. The tool was created with the express purpose of tracking the patient's vital
health parameters over the prescribed period. The use of GSM and IoT to track the
patient's condition or health [11] has become more and more practical. This recommended
approach comprises several sensors, such as temperature, heartbeat, eye blink, and SpO2
sensors, for evaluating the patient's internal body temperature, heart rate, eye movement,
and oxygen saturation level. The microcontroller for this system, which also uses cloud
connectivity, is the Arduino Uno board.
The video-based interfaces [11] discussed in this study provide alternative selection
mechanisms that can replace the mouse in applications that only need decision commands.
Results demonstrate BlinkLink's ability to clearly distinguish voluntary from involuntary
blinks. EyebrowClicker tests demonstrate that eyes and eyebrows can be identified and
tracked, making it possible to detect eyebrow raises with high accuracy. Neither prior
knowledge of the facial region or skin tone nor special illumination is required. The two
systems operate reliably and in real time, which makes them well suited to applications
that must respond to facial cues or prompts [11]. The two interfaces can be used
simultaneously with different applications on modern PCs thanks to the low computing
resources required. Both EyebrowClicker and BlinkLink could evolve to integrate with
other assistive technologies to improve communication options for individuals with
disabilities. They might also be used to provide interfaces for plain-language viewing
of both signed and spoken language. One important semantic device in American Sign
Language (ASL) for expressing a question is an eyebrow raise.
3 Research Gaps
Morse code transformation using eye blinks is an intriguing and potentially useful area
of research, especially for individuals with disabilities or in situations where voice com-
munication is not possible or practical. While there has been some work in this field,
there are still several research gaps that need to be addressed. Some of these research
gaps include:
The accuracy and speed of Morse code generation and decoding using eye blinks can
be improved; research should focus on developing more efficient algorithms and machine
learning models for recognizing and interpreting eye blinks accurately. Designing
user-friendly interfaces for Morse code communication with eye blinks is also crucial;
research should explore hardware and software interfaces, including eye-tracking devices,
headsets, and applications, that can be easily used by individuals with different levels of
motor control. Such systems should adapt to changes in blinking speed and accuracy over
time and accommodate users with various eye conditions or impairments. Research should
further address noise and interference in eye blink-based Morse code communication.
Privacy and security concerns must be addressed, especially if this technology is used
for sensitive or confidential communication. Finally, the hardware and software used for
eye blink-based Morse code communication must be reliable and durable in the long
term, and research should investigate how these systems can be updated and maintained
over time.
4 Methodology
In this approach [2], eye blink durations are represented by the widely recognized Morse
codes shown in Fig. 1, which map sequences of eye blinks to strings [3] for communication.
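To make the mapping concrete, the sketch below decodes a sequence of intentional blink durations into a letter using the standard international Morse table. The 800 ms dot/dash cutoff is an assumption borrowed from the intentional-blink duration discussed later in this section; the paper's own simplified code table (Fig. 1) is not reproduced here.

```python
# Sketch: decoding intentional eye-blink durations into a letter via
# Morse code. The 800 ms cutoff between dot and dash is an assumed
# value for illustration, not the paper's exact table.

MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def blink_to_symbol(duration_ms: float) -> str:
    """A shorter intentional blink becomes a dot, a longer one a dash."""
    return "." if duration_ms < 800 else "-"

def decode_blinks(durations_ms) -> str:
    """Decode one letter from a sequence of blink durations."""
    code = "".join(blink_to_symbol(d) for d in durations_ms)
    return MORSE_TO_CHAR.get(code, "?")
```

For example, a 500 ms blink followed by a 900 ms blink yields ".-", which decodes to "A".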
Face detection and facial landmarks prediction are the first two sub-modules of
the facial landmarks detection module. The objective of this module is to identify the
patient’s face and extract the positions of 68 facial coordinates that correspond to various
facial features, including the eyes, mouth, nose, and other features. The words are shown
in the patient's native language, such as Arabic, German, etc., thanks to a translation
module.
Important facial landmarks including the nose, eyes, brows, lips, and others are found
on the face. The patient’s eyes are the most crucial face feature in our system. In our
proposed framework, the facial landmarks module consists of two fundamental steps:
identifying the face from the pictures captured from video frames and then accurately
localizing the significant facial features on the region of interest on the face. However,
before an eye can be identified, a face must first be recognized. The facial landmark
Fig. 2. Positions of the 68 facial landmarks, facial landmark detection, open eyes with landmarks,
and closed eyes with landmarks are shown in (a), (b), (c), and (d) of the figure, respectively.
detector then extracts the face’s feature points, connects the outline feature points of the
eye, and calculates the eye position to complete eye identification. The iBUG 300-W
dataset was used to train the model, which is used to estimate the x and y coordinates of
68 face landmarks. The model is implemented in the dlib package. In order to identify
face landmarks on real-time pictures retrieved from video frames, our suggested solution
uses the dlib pre-trained model. The second phase involves determining whether the eye
has blinked for a brief or prolonged amount of time utilizing the size and location of
the pupil, the position and angle of the eyelids, etc., within the detected eye area. Then,
by comparing the length of the eyeblinks to a predetermined, streamlined Morse code
combination, the appropriate letter is deduced. The typo-correcting SymSpell algorithm
effectively suggests alternatives when a phrase is inadvertently misspelled.
Using a trained facial landmark detector from the dlib package, we first computed the
coordinates of 68 feature points on the face to detect eyes. The landmark detector creates
feature vectors based on histograms of oriented gradients (HOG), computed from the
gradient direction at each pixel position in the picture. A support vector machine
classifier then identifies the 68 facial landmarks from the resulting vector,
which may be used to define the contour of the face as well as the eyes, nose, ears, and
mouth. As a consequence, it numbers each facial feature point as shown in Fig. 2a, and
applies it to the face picture to identify landmarks that surround feature portions of the
face, such as the eyes and nose, as illustrated in Fig. 2b. The right eye in the illustration
corresponds to feature points 43–48, whereas the left eye is represented by feature points
37–42. The appropriate feature points [8] are extracted to identify blinking, but they are
extracted with a size big enough to encompass the region around the landmark—roughly
1.2 times the area of the landmark—to reduce detection error. The results of extracting
eyes while the eyes are open and closed, respectively, are shown in Fig. 2c,d. Each eye
is identified by six feature points (p1…p6).
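In code, picking out the two eyes from the detector's output is a matter of index arithmetic. A minimal sketch, assuming the landmarks arrive as a plain 68-element list of (x, y) pairs in dlib's 0-based order (so the paper's 1-based points 37–42 and 43–48 become indices 36–41 and 42–47):

```python
# Sketch: slicing the six per-eye landmarks out of dlib's 68-point
# layout. Indices are 0-based, whereas the paper numbers points 1-68.
LEFT_EYE = slice(36, 42)    # paper's points 37-42
RIGHT_EYE = slice(42, 48)   # paper's points 43-48

def eye_points(landmarks):
    """Split a 68-point landmark list into (left_eye, right_eye),
    each a list of six (x, y) pairs."""
    assert len(landmarks) == 68, "expected dlib's 68-point model output"
    return landmarks[LEFT_EYE], landmarks[RIGHT_EYE]
```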
With the use of the facial landmarks module, each eye is located using six coordinates,
and the relationship between eye height and width can be represented by the EAR [6],
which is calculated using the formula:
EAR = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)   (1)
where p1, p2, …, p6 are the coordinates of the eye's landmarks and ‖·‖ denotes the Euclidean distance.
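Equation (1) translates directly into a few lines of code; this sketch assumes each landmark is an (x, y) tuple and uses Euclidean distance for the norms:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR per Eq. (1): (|p2 - p6| + |p3 - p5|) / (2 |p1 - p4|).

    p1 and p4 are the horizontal eye corners; p2/p3 and p6/p5 sit on
    the upper and lower eyelids, so the ratio falls toward zero as
    the eye closes.
    """
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

For a wide-open eye the value sits well above the 0.2 threshold used below; for a closed eye it approaches zero.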
When the eye blinks, the EAR is roughly equal to zero, so the ratio can tell whether the
patient is blinking. We compare it against t, a predetermined threshold value (0.2 in our
application). A single blink is recognized, and blinks can be counted, whenever the EAR
value drops below 0.2 and then rises above 0.2. Based on comparing the EAR with the
threshold t, the following equation shows how the eye's open and closed states are
identified. Compared to the intentional blink, which lasts about 800 ms, a normal blink,
which lasts between 100 and 400 ms, is very brief. To differentiate between a typical
blink and a patient's deliberate blink, which serves as an alphabet in the Blink-To-Live
[7] eye-based language, we check the EAR value over the course of 13 video frames; if
the EAR remains below 0.2 for all of them, the blink is treated as an intentional language
blink. Given that the camera captures 25 frames per second, a closure shorter than 13
frames is considered quick and is treated as a normal blink.
Eye closed: EAR ≤ t;  Eye open: EAR > t   (2)
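The frame-counting rule can be sketched as a single pass over per-frame EAR values. The 0.2 threshold, the 13-frame minimum, and the 25 fps rate come from the text; the function name and labels are our own illustration:

```python
EAR_THRESHOLD = 0.2    # t in Eq. (2)
MIN_CODE_FRAMES = 13   # about 520 ms at 25 fps, per the text

def classify_blinks(ear_series):
    """Label each closed-eye run (EAR <= t) as a 'normal' blink when it
    is shorter than 13 frames, or an intentional 'code' blink when it
    lasts 13 frames or more."""
    blinks, run = [], 0
    for ear in ear_series:
        if ear <= EAR_THRESHOLD:
            run += 1
        elif run:
            blinks.append("code" if run >= MIN_CODE_FRAMES else "normal")
            run = 0
    if run:  # series ended while the eye was still closed
        blinks.append("code" if run >= MIN_CODE_FRAMES else "normal")
    return blinks
```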
The red mouse pointer with a circle around it started at the top left corner of a quadrant
display like the one in Fig. 3 with quadrant navigation, and it automatically traveled
clockwise to the next quadrant every second. Seven alphabetic or control letters—A to
G, H to N, O to U, V to Z, Space, and Delete—were allocated to each quadrant, and
they remained in place until the patient chose the proper quadrant with two brief blinks.
The user had to memorize fewer Morse codes as a result, which made learning easier
and allowed for improved accuracy and quicker processing. The user inputs four brief
blinks to return to the beginning screen and chooses the alphabet present in the other
quadrants once the desired letter is output. After fully keying in the chosen letter, the
user moved to the typo correction process with one long blink and two short blinks [9].
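The quadrant rotation described above can be sketched as follows; the letter groupings follow the text, while the data structure and function names are hypothetical illustrations:

```python
# Hypothetical sketch of the rotating quadrant display. The cursor
# advances clockwise to the next quadrant once per second until the
# user selects one with two short blinks.
QUADRANTS = [
    list("ABCDEFG"),                      # A to G
    list("HIJKLMN"),                      # H to N
    list("OPQRSTU"),                      # O to U
    list("VWXYZ") + ["Space", "Delete"],  # V to Z plus controls
]

def active_quadrant(elapsed_seconds: int):
    """Return the quadrant the pointer highlights after a given
    number of whole seconds."""
    return QUADRANTS[elapsed_seconds % len(QUADRANTS)]
```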
The SymSpell algorithm was used to check for mistakes and, if required, repair
them after all the desired alphabets had been output on the screen. It is an effective open-
source library that is frequently used for natural language processing tasks like text
completion and spell checking. It performs best when using the Damerau-Levenshtein
distance metric and N-gram, and because of its quick search times, it may be used for
real-time searches. It is very good at processing enormous volumes of data. It begins by
compiling a lookup table, or lexicon, from a sizable collection of datasets so that it can
identify the correct term if a mistake occurs in an input word. Additionally, it keeps track
of the word’s frequency for use in mistake correction. The input words are then divided
into N-grams using the generated dictionary, and a Trie data structure that identifies the
word to which each N-gram belongs is produced. Based on this, it corrects the input
word’s typos by first looking for words that contain all of the word’s N-grams to create a
list of candidate words, scoring those words using frequency and Damerau-Levenshtein
distance, and then returning the word with the highest score as the corrected word (Figs. 4
and 5).
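To make the scoring step concrete, here is a toy sketch of frequency-plus-edit-distance correction. It is deliberately not the SymSpell library itself: it uses plain Levenshtein distance and a brute-force scan of the lexicon, whereas SymSpell also counts transpositions (Damerau-Levenshtein) and accelerates lookup with a precomputed deletion index and N-gram filtering.

```python
def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, lexicon, max_distance=2):
    """Return the lexicon entry with the smallest edit distance,
    breaking ties by corpus frequency; leave the word unchanged if
    nothing is close enough."""
    best = min(
        ((edit_distance(word, cand), -freq, cand)
         for cand, freq in lexicon.items()),
        default=None,
    )
    if best is not None and best[0] <= max_distance:
        return best[2]
    return word
```

With a lexicon such as {"hello": 100, "help": 50}, the misspelling "helo" corrects to "hello" (distance 1), while an unrecognizable input is returned unchanged.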
5 Conclusions
Different eye movements are translated into a set of everyday instructions that patients
use to convey their feelings and wants using a collection of computer vision modules and
a modified version of the Blink-To-Speak language. Patients and carers need only phones
with compatible cameras to monitor patients' eye movements. We have demonstrated
that the Eye Aspect Ratio is a very efficient tool for identifying blinks in addition to
traditional methods. This device may also be useful for spies or military personnel who
need to deliver a coded message without the adversary's knowledge; with the necessary
hardware and software upgrades, it could support discreet communication in a war zone.
Additionally, with a slight modification to the design, it can be applied to smart home
apps that let users interact with household equipment through simple eye blinks.
References
1. Chen, S.-C., Wu, C.-M., Su, S.-B.: Image Morse code text input system. Southern Taiwan
University and Kun Shan University, Tainan, Taiwan
2. Li, R., Nguyen, M., Yan, W.Q.: Morse codes enter using finger gesture recognition. Auckland
University of Technology, Auckland, New Zealand
3. John, S.J., Sharmila, S.T.: Real time blink recognition from various head pose using single
eye. Multimed. Tools Appl. Springer Science (2018)
4. Rosebrock, A.: Eye blink detection with OpenCV, Python, and dlib. https://www.pyimagesearch.com/2017/04/24/eye-blink-detection-opencv-python-dlib/
5. Kazemi, V., Sullivan, J.: One-millisecond face alignment with an ensemble of regression
trees. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1867–1874
(2014). https://doi.org/10.1109/CVPR.2014.241
6. Dewi, C., Chen, R.-C., Chang, C.-W., Wu, S.-H., Jiang, X., Yu, H.: Eye aspect ratio for
real-time drowsiness detection to improve driver safety. Electronics 11, 3183 (2022)
7. Srividhya, G., Murali, S., Keerthana, A., Rubi, J.: Alternative voice communication device
using eye blink detection for people with speech disorders
8. Pauly, L., Sankar, D.: A novel method for eye tracking and blink detection in video frames.
Cochin University of Science and Technology
9. Malik, K., Smolka, B.: Eye blink detection using local binary patterns. Silesian University
of Technology, Gliwice, Poland
10. Kazuhiro, T., Akihiko, U., Yoshiki, M., Taiji, S., Kanya, T., Shigeru, U.: A Communication
system for ALS patients using eye blink. Int. J. Appl. Electromagn. Mech. 18, 3–10 (2003)
11. Awais, M., Badruddin, N., Drieberg, M.: Automated eye blink detection and tracking using
template matching. IEEE (2013)
12. Torii, I., Takami, S., Ohtani, K., Ishii, N.: Development of Communication Support
Application with Blinks. IEEE (2014)
13. Goyal, K., Agarwal, K., Kumar, R.: Face detection and tracking: using OpenCV. In: 2017 Inter-
national Conference of Electronics, Communication and Aerospace Technology (ICECA)
(2017). https://doi.org/10.1109/iceca.2017.8203730
14. Yuli Cristanti, R., Sigit, R., Harsono, T., Adelina, D. C., Nabilah, A., Anggraeni, N. P.: Eye gaze
tracking to operate an Android-based communication helper application. In: 2017 Interna-
tional Electronics Symposium on Knowledge Creation and Intelligent Computing (IESKCIC)
(2017). https://doi.org/10.1109/kcic.2017.8228569
15. Chen, S.-C., Wu, C.-M., Su, S.-B.: Image Morse code text input system. Southern Taiwan
University, Tainan and Kun Shan University, Tainan, Taiwan
16. Caligari, M., Godi, M., Guglielmetti, S., Franchignoni, F., Nardone, A.: Eye tracking com-
munication devices in amyotrophic lateral sclerosis: impact on disability and quality of life.
Amyotrop. Lateral Sclerosis Frontotemp. Degen. 14, 546–552 (2013)