
Proceedings Track 1

International Conference on Advances in Computational Intelligence in Communication, CIC 2016
 
 
 
19th and 20th October 2016 
Pondicherry Engineering College, 
Puducherry, India 
 
 
 

 
Published & Edited by
International Journal of Computer Science and
Information Security (IJCSIS)
Vol. 14 Special Issue CIC 2016
ISSN 1947-5500

© IJCSIS PUBLICATION 2016 
Pennsylvania, USA 
Indexed and technically co-sponsored by: [indexing partner logos]
Editorial
Message from Editorial Board
It is our great pleasure to present the CIC 2016 Special Issue (Volume 14, Tracks 1-6) of the International Journal of
Computer Science and Information Security (IJCSIS). High-quality research, survey and review articles are contributed
by experts in the field, promoting insight into and understanding of the state of the art and trends in computer science
and technology. The issue especially provides a platform for high-caliber academics, practitioners and PhD/Doctoral
graduates to publish completed work and their latest research outcomes. According to Google Scholar, papers published
in IJCSIS have so far been cited over 6818 times, and that number is growing quickly. These statistics show that IJCSIS
has taken the first step towards becoming an international and prestigious journal in the field of Computer Science and
Information Security. There have been many improvements to the processing and indexing of papers; we have also
witnessed significant growth in interest through a higher number of submissions as well as through the breadth and
quality of those submissions. IJCSIS is indexed in major academic/scientific databases and important repositories, such
as Google Scholar, Thomson Reuters, ArXiv, CiteSeerX, Cornell University Library, Ei Compendex, ISI, Scopus, DBLP,
DOAJ, ProQuest, ResearchGate, Academia.edu and EBSCO, among others.

On behalf of IJCSIS community and the sponsors, we congratulate the authors, the reviewers
and thank the committees of International Conference On Advances In Computational
Intelligence In Communication (CIC 2016) for their outstanding efforts to review and
recommend high quality papers for publication. In particular, we would like to thank the
international academia and researchers for continued support by citing papers published in
IJCSIS. Without their sustained and unselfish commitments, IJCSIS would not have achieved its
current premier status.

“We support researchers to succeed by providing high visibility & impact value, prestige and
excellence in research publication.” For further questions or other suggestions please do not
hesitate to contact us at ijcsiseditor@gmail.com.

A complete list of journals can be found at:


http://sites.google.com/site/ijcsis/
IJCSIS Vol. 14, Special Issue CIC 2016 Edition
ISSN 1947-5500 © IJCSIS, USA.
Journal Indexed by (among others):

Open Access This Journal is distributed under the terms of the Creative Commons Attribution 4.0 International License
(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium,
provided you give appropriate credit to the original author(s) and the source.
Bibliographic Information
ISSN: 1947-5500
Monthly publication (Regular Special Issues)
Commenced Publication since May 2009

Editorial / Paper Submissions:


IJCSIS Managing Editor
(ijcsiseditor@gmail.com)
Pennsylvania, USA
Tel: +1 412 390 5159
INTERNATIONAL CONFERENCE ON
ADVANCES IN COMPUTATIONAL
INTELLIGENCE IN COMMUNICATION
CONFERENCE DATE:
19 Oct 2016 to 20 Oct 2016
WEBSITE:
http://cic2016.pec.edu/
EMAIL ID:
cic16@pec.edu
DEADLINE FOR PAPER SUBMISSION:
30 Jul 2016
ORGANIZED BY:
Dept. of Electronics & Communication Engineering, Pondicherry Engineering College
VENUE:
Hotel Accord, Puducherry, India
CITY:
Puducherry, India
CONFERENCE KEYWORDS:
Computational Intelligence, Wireless communication, Applications and Methodologies
ABOUT EVENT:
CIC 2016 aims to bring out the contemporary developments and evolving theories, methods and
applications of computational intelligence in the design of Mobile and Wireless Communication
networks. The main objective of CIC 2016 is to provide a lively forum for the scientific community
and industry across the world to present their research findings and to explore new directions in
computational intelligence and in probabilistic and statistical models to solve the ever-growing
challenges in Wireless Communication.
CIC 2016 lays emphasis on computational intelligence techniques such as neural
networks, fuzzy systems, evolutionary algorithms, hybrid intelligent systems, uncertain
reasoning techniques, and other machine learning methods and how they could be
applied for decision making and problem solving in mobile and wireless communication
networks. The conference aims to provide an opportunity for researchers to highlight
recent developments, share insightful experiences and interactions in the areas coming
under the scope of the conference.
INTERNATIONAL EDITORIAL BOARD

CIC 2016 Special Issue Guest Editors
Dr. P. Dananjayan, Principal, Pondicherry Engineering College, Puducherry, India (Chairperson, CIC 2016)
Dr. Gnanou Florence Sudha, Professor, Pondicherry Engineering College (Publication Chair, CIC 2016)
Dr. R. Gunasundari, Professor, Pondicherry Engineering College (Publication Chair, CIC 2016)

IJCSIS Editorial Board
Dr. Shimon K. Modi, Director of Research, BSPA Labs, Purdue University, USA
Professor Ying Yang, PhD, Computer Science Department, Yale University, USA
Professor Hamid Reza Naji, PhD, Department of Computer Engineering, Shahid Beheshti University, Tehran, Iran
Professor Yong Li, PhD, School of Electronic and Information Engineering, Beijing Jiaotong University, P. R. China
Professor Mokhtar Beldjehem, PhD, Sainte-Anne University, Halifax, NS, Canada
Professor Yousef Farhaoui, PhD, Department of Computer Science, Moulay Ismail University, Morocco
Dr. Alex Pappachen James, Queensland Micro-nanotechnology Center, Griffith University, Australia
Professor Sanjay Jasola, Gautam Buddha University
Dr. Siddhivinayak Kulkarni, University of Ballarat, Ballarat, Victoria, Australia
Dr. Reza Ebrahimi Atani, University of Guilan, Iran
Dr. Dong Zhang, University of Central Florida, USA
Dr. Vahid Esmaeelzadeh, Iran University of Science and Technology
Dr. Jiliang Zhang, Northeastern University, China
Dr. Jacek M. Czerniak, Casimir the Great University in Bydgoszcz, Poland
Dr. Binh P. Nguyen, National University of Singapore
Professor Seifeidne Kadry, American University of the Middle East, Kuwait
Dr. Riccardo Colella, University of Salento, Italy
Dr. Sedat Akleylek, Ondokuz Mayis University, Turkey
Dr. Basit Shahzad, King Saud University, Riyadh, Saudi Arabia
Dr. Sherzod Turaev, International Islamic University Malaysia
Dr. Kai Cong, Intel Corporation & Computer Science Department, Portland State University, USA
Dr. Omar A. Alzubi, Al-Balqa Applied University (BAU), Jordan
Dr. Jorge A. Ruiz-Vanoye, Universidad Autónoma del Estado de Morelos, Mexico
Prof. Ning Xu, Wuhan University of Technology, China
Dr. Bilal Alatas, Department of Software Engineering, Firat University, Turkey
Dr. Ioannis V. Koskosas, University of Western Macedonia, Greece
Dr. Venu Kuthadi, University of Johannesburg, Johannesburg, RSA
Dr. Zhihan Lv, Chinese Academy of Science, China
Prof. Ghulam Qasim, University of Engineering and Technology, Peshawar, Pakistan
Prof. Dr. Maqbool Uddin Shaikh, Preston University, Islamabad, Pakistan
Dr. Musa Peker, Faculty of Technology, Mugla Sitki Kocman University, Turkey
Dr. Wencan Luo, University of Pittsburgh, US
Dr. Ijaz Ali Shoukat, King Saud University, Saudi Arabia
Dr. Yilun Shang, Tongji University, Shanghai, China
Dr. Sachin Kumar, Indian Institute of Technology (IIT) Roorkee
Dr. Riktesh Srivastava, Associate Professor, Information Systems, Skyline University College, Sharjah, PO 1797, UAE
Dr. Jianguo Ding, Norwegian University of Science and Technology (NTNU), Norway

ISSN 1947 5500 Copyright © IJCSIS, USA.


 

CIC 2016 Committees

Chief Patron: Dr. P. Dananjayan, Principal, Pondicherry Engineering College

Program Chairs: Dr. Xavier Fernando, Director, Ryerson Communications Lab, Ryerson University, Canada;
Dr. P. Dananjayan, Professor & Dean, Pondicherry Engineering College, India

General Chair: Dr. M. Tamilarasi, Professor & Head, Dept. of ECE, Pondicherry Engineering College, India

Technical Chairs: Dr. E. Srinivasan, Dr. K. Vivekanandan, Dr. N. Sreenath, Dr. G. Nagarajan
(Professors, Pondicherry Engineering College)

Editorial Board: Dr. P. Dananjayan, Dr. Gnanou Florence Sudha, Dr. R. Gunasundari
(Professors, Pondicherry Engineering College)
 
CIC 2016 TECHNICAL PROGRAM COMMITTEE
Dr. Victor Govindaswamy Concordia University, Chicago
Dr. Victor Sreeram University of Western Australia, Australia
Dr. Deepak Selvanathan Intel, USA
Dr. Jianfei Cai Nanyang Technological University, Singapore
Prof. Vallavaraj Adhinarayanan Calledonian College of Engineering, Oman.
Dr. Srinivasan Rajavelu Oracle, Dubai, UAE
Dr. Suresh Shanmugasundram Botho University, Gaborone
Dr. Sanjiv Tokekar Director, IET, DAVV, Indore, India
Dr. Vinayak M.Shed Government Engineering College, Goa.
Dr. Moinuddin Jamia Millia Islamia University, New Delhi,
Dr. M.S.Sultaone College of Engineering ,Pune, India
Dr. D.Sriram Kumar NIT, Tiruchirrapalli, India
Dr. P.Palanisamy NIT, Tiruchirrapalli, India
Dr. S.Shanmugavel Principal, National Engineering College, India
Dr. Maya Ingle IET, DAVV , Indore, India
Dr. Vaidehi V IT Dept., MIT, Chennai
Dr. K.Chandrasekaran NIT Suratkal, India
Dr. K.Gunavathi PSG tech, Coimbatore, India
Dr. G.Aghila NIT, Karaikal, India
Dr. Ravibabu Mulaveesala IIT, Ropar, India
Dr. C.Satish Kumar RGIT,Kerala, India
Dr. M. Sasikala Anna University, Chennai
Dr. A.Kavitha SSN College of Engineering, Chennai.
Dr. A.Rajeswari CIT,Coimbatore
Dr. A.K. Jayanthy SRM University.
Dr. B.Surendiran National Institute of Technology , Karaikal
Dr. C. Lakshmi Deepika PSG College of Technology,Coimbatore.
Dr.C. Malathy SRM University, Chennai.
Dr.P.Indumathi MIT, Anna University,
Dr.S.Malarkkan MVIT, Puducherry
Dr. Manjesh Bangalore University, Bangalore
Dr. V. Nagarajan Adhiparasakthi Engineering College, TN
Dr. L. Nalini Joseph Anand Institute of Higher Technology, TN
Dr. Rangaiah Leburu Raja Rajeswari College of Engg. ,Bangalore
Dr. S. Malarvizhi SRM University,Chennai
Dr.G.F.Ali Ahammed VTU, Mysuru
Dr. G.K.Rajini VIT University, Vellore
Dr. J Dhurgadevi CEG, Anna University, Chennai
Dr.K.Santhi Guru Nanak Institutions ,Hyderabad.
Dr. K. Venkatalakshmi UCE, Tindivanam.
Dr. K. Vivekanandan Pondicherry Engineering College,Puducherry
Dr. N. Sreenath Pondicherry Engineering College,Puducherry
Dr.S. Kanmani Pondicherry Engineering College,Puducherry
Dr.M. Ezhilarasan Pondicherry Engineering College,Puducherry
Dr. S. Saraswathi Pondicherry Engineering College,Puducherry
Dr. Ka. Selvaradjou Pondicherry Engineering College,Puducherry
Dr. M. Manikandan MIT,Anna University, Chennai
Dr. M. A. Bhagyaveni CEG,Anna University, Chennai
Dr. M.Malleswaran Anna University, Kanchipuram.
Dr. M. Anburajan SRM University, Chennai
Dr. J.Martin Leo Manickam St.Joseph's College of Engineering,Chennai
Dr.Diwakar R . Marur SRM University
Dr.N. Venkateswaran SSN,Chennai
Dr.Alamelu Nachiappan Pondicherry Engineering College,Puducherry
Dr.C.Christober Asir Rajan Pondicherry Engineering College,Puducherry
Dr.S.Lakshmana Pandian Pondicherry Engineering College,Puducherry
Dr.K. Saruladha Pondicherry Engineering College,Puducherry
Dr. P.V. Rao Raja Rajeswari College of Engg. Bangalore
Dr. P.T. Vanathi PSG College of Technology,Coimbatore.
Dr. Shanthi Prince SRM University,Chennai.
Dr.C.Gomathi SRM University,Chennai.
Dr. A. Lakshmi Devi SV University College of Engineering, Tirupati
Dr. K. Murugan Anna University, Chennai
Dr.R.Nakkeeran SET, Pondicherry University
Dr. T. Shanmuganantham SET, Pondicherry University
Dr. P. Samundiswary SET, Pondicherry University
Dr. R. Periyasamy NIT, Raipur.
Dr.R.Valli MVIT, Puducherry.
Dr.S.Uma Maheswari CIT, Coimbatore
Dr.J.Beatrice Seventline GITAM University, Vishakapatanam.
Dr. S. Siva Sathya Computer Science Dept. , PU
Dr. T. Chithralekha Computer Science Dept. , PU
Dr. Ravi Subban Computer Science Dept. , PU
Dr.T.Shankar SENSE,VIT University, Vellore
Dr.T.S.Indumathi VTU, Bangalore
Prof. T. Ramashri SV University College of Engineering, Tirupati
Dr. V. Jeyalakshmi CEG, Anna University, Chennai
Dr.V.P. Harigovindan NIT, Karaikal
Dr. Revathi Venkataraman SRM University, Chennai
Dr.V.R.Vijaykumar Anna University, Coimbatore
CIC 2016 Organising chairs
Dr. Gnanou Florence Sudha, Pondicherry Engineering College, India
Dr. R.Gunasundari, Pondicherry Engineering College, India

CIC 2016 SESSION CHAIRS


Dr. V. Jagadeesh Kumar Professor, IIT Madras, Chennai, India

Dr. Vitawat Sittakul Professor, King Mongkut's University of Technology, Thailand

Dr. T.G. Palanivelu Former Principal, PEC & Professor, SMVEC, Puducherry

Dr. V. Prithviraj Former Principal, PEC & Professor, REC, Chennai

Dr. M.A. Bhagyaveni Professor, CEG, Anna University, Chennai

Dr. K. Murugan Professor, CEG, Anna University, Chennai

Dr. V. Saminadan Professor, Pondicherry Engineering College, Puducherry

Dr. D. Saraswady Professor, Pondicherry Engineering College, Puducherry

Dr. S. Batmavady Professor, Pondicherry Engineering College, Puducherry

Dr. K. Kumar Professor, Pondicherry Engineering College, Puducherry

Dr. G. Sivaradje Professor, Pondicherry Engineering College, Puducherry

Dr. L.Nithyanandan Professor, Pondicherry Engineering College, Puducherry

Dr. K.Jayanthi Professor, Pondicherry Engineering College, Puducherry

Dr. V.Vijayalakshmi Associate Professor, Pondicherry Engineering College

Dr. S.Tamilselvan Associate Professor, Pondicherry Engineering College

Dr. M.Thachayani Assistant Professor, Pondicherry Engineering College

Dr. R. Sandanalakshmi Assistant Professor, Pondicherry Engineering College

Dr. A.V.Ananthalakshmi Assistant Professor, Pondicherry Engineering College


List of Papers CIC 2016 
TRACK 1 Computational Intelligence in Signal & Image Processing
CIC2016_paper 15: Analytical Framework for Identification of Outliers for Unscripted Video
Madhu Chandra G, Research Scholar, Dept.of ECE, MS Engineering College, Bangalore, VTU,
Belagavi, India
Sreerama Reddy G.M, Professor & HOD, Dept. of ECE, CBIT, Kolar, India

CIC2016_paper 24: Image Steganography With Huffman Encoding


V. Navya, Dept. of ECE, SVEC Tirupati, A.P
T. V. S. Gowtham Prasad, Dept. of ECE, SVEC Tirupati, A.P
C. Maheswari, Dept. of ECE, SVEC Tirupati, A.P

CIC2016_paper 27: Secure Image Transmission Based On Pixel Integration Technique


A. D. Senthil Kumar, Department of Instrumentation Engineering, Annamalai University, Chidambaram,
India
T. S. Anandhi, Department of Instrumentation Engineering, Annamalai University, Chidambaram, India

CIC2016_paper 60: Recognition of Gait in Arbitrary Views using Model Free Methods
M. Shafiya Banu, Department of Information science and Technology, Anna University, Chennai.
M. Sivarathinabala, Department of Information science and Technology, Anna University, Chennai.
S. Abirami, Department of Information science and Technology, Anna University, Chennai.

CIC2016_paper 67: Study on Watermarking Effect on Different Sub Bands in Joint DWT-DCT
based Watermarking Scheme
Mohiul Islam, Department of Electronics & Communication Engineering National Institute of
Technology Silchar, Assam, India
Amarjit Roy, Department of Electronics & Communication Engineering National Institute of Technology
Silchar, Assam, India
Rabul Hussain Laskar, Department of Electronics & Communication Engineering National Institute of
Technology Silchar, Assam, India

CIC2016_paper 70: Image Denoising using Hybrid of Bilateral Filter and Histogram Based Multi-
Thresholding With Optimization Technique for WSN
H. Rekha, Research Scholar, Department of Electronics engineering, Pondicherry University,
Pondicherry, India
P. Samundiswary, Assistant Professor, Department of Electronics Engineering, Pondicherry University
Pondicherry, India

CIC2016_paper 86: Chaos Based Study on Association of Color with Music in the Perspective of
Cross-Modal Bias of the Brain
Chandrima Roy, Department of Electronics &Communication Engineering, Heritage Institute of
Technology, Kolkata, India
Souparno Roy (1), Dipak Ghosh (2)
(1) Researcher, (2) Professor Emeritus, Sir C.V. Raman Centre for Physics & Music, Kolkata, India

CIC2016_paper 99: Estimation of Visual Focus of Attention from Head Orientations in a Single
Top-View Image
Viswanath K. Reddy, Assistant Professor, Department of Electronic and Communication Engineering,
M. S. Ramaiah University of Applied Sciences, Bangalore, India

CIC2016_paper 107: Face Recognition Under Varying Blur, Illumination and Expression in an
Unconstrained Environment
Anubha Pearline.S, M.Tech, Information Technology, Madras Institute of Technology, Chennai, India
Hemalatha.M, Assistant Professor, Information Technology, Madras Institute of Technology, Chennai,
India

CIC2016_paper 112: Segmentation based Security Enhancement for Medical Images


G. Vallathan, Department of Electronics and Communication Engineering, Pondicherry Engineering
College, Pondicherry, India.
K. Balachandran, Department of Electronics and Communication Engineering, Pondicherry
Engineering College, Pondicherry, India.
K. Jayanthi, Department of Electronics and Communication Engineering, Pondicherry Engineering
College, Pondicherry, India.

CIC2016_paper 117: Efficient Stereoscopic 3D Video Transmission over Multiple Network Paths
Vishwa kiran S, Thriveni J, Venugopal K R, Dept. Computer Science and Engineering
University Visvesvaraya College of Engineering, Bangalore, India
Raghuram S, Pushkala Technologies Pvt. Ltd., Bangalore, India

CIC2016_paper 130: Speaker Dependent Speech Feature Based Performance Evaluation of


Emotional Speech for Indian Native Language
Shiva Prasad K M., Research Scholar, Electronics Engg, Jain University, Bengaluru., India
G. N. Kodanda Ramaiah, Professor, HOD and Dean R & D, Dept of ECE. K.E.C., Kuppam., India
M. B. Manjunatha, Principal, A.I.T., Tumkur.,India

CIC2016_paper 131: Formant Frequency Based Analysis of English vowels for various Indian
Speakers at different conditions using LPC & default AR modeling
Anil Kumar C., Research Scholar, Electronics Engg, Jain University, Bengaluru., India.
M. B. Manjunatha, Principal, A.I.T., Tumkur.,India
G. N. Kodanda Ramaiah, Professor, HOD and Dean R & D, Dept of ECE. K.E.C., Kuppam., India

CIC2016_paper 139: A study of various approaches for enhancement of foggy/hazy images


Nandini B.M, Mohanesh B.M
The National Institute of Engineering, Mysuru, Karnataka, India.
Narasimha Kaulgud, The National Institute of Engineering, Mysuru, Karnataka, India.

   
TRACK 2 Computational Intelligence in Wireless Communication Networks
CIC2016_paper 17: Design of Cooperative Spectrum Sensing based spectrum access in CR
networks using game theory
Lavanya Shanmugavel, Fenila Janet, M. A. Bhagyaveni
Dept. of ECE, CEG, Anna University, Chennai, INDIA

CIC2016_paper 22: An Intelligent Cognitive Radio Receiver for Future Trend Wireless
Applications
M. Venkata Subbarao, Research Scholar, Department of EE, School of Engineering & Technology,
Pondicherry University, Pondicherry, India.
P. Samundiswary, Assistant Professor, Department of EE, School of Engineering & Technology,
Pondicherry University, Pondicherry, India.

CIC2016_paper 31: Vehicular Ad Hoc Networks: New Challenges in Carpooling and Parking
Services
Amit Kumar Tyagi, Research Scholar, Department of CS&E, Pondicherry Engineering College,
Puducherry-605014, India.
Sreenath Niladhuri, Professor, Department of CS&E, Pondicherry Engineering College, Puducherry-
605014, India.

CIC2016_paper 36: Efficient Energy Utilisation in Zigbee WDSN using Clustering protocol and
RSSI Algorithm
Maria Brigitta.R, Department of Electronics Engineering, School of Engineering and Technology
Pondicherry University, Puducherry-605 014, India
Samundiswary.P, Department of Electronics Engineering, School of Engineering and Technology
Pondicherry University, Puducherry-605 014, India

CIC2016_paper 37: Prediction of Spacecraft Position by Particle Filter based GPS/INS integrated
system
Vijayanandh R [1], Raj Kumar G [2]
[1], [2] – Assistant Professor, Department of Aeronautical Engineering, Kumaraguru College of
Technology, Coimbatore, Tamil Nadu, India
Senthil Kumar M [3], Samyuktha S [4]
[3] – Assistant Professor (SRG), [4] – BE Student, Department of Aeronautical Engineering,
Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India

CIC2016_paper 68: Dynamic Application Centric Resource Provisioning Algorithm for Wireless
Broadband Interworking Network
S. Kokila, Department of Electronics and Communication Engineering, Pondicherry Engineering
College Puducherry, India
G. Sivaradje, Department of Electronics and Communication Engineering, Pondicherry Engineering
College, Puducherry, India

CIC2016_paper 69: RSSI based Tree Climbing mechanism for dynamic path planning in WSN
Thilagavathi P, Research scholar, Department of Information Technology, Jerusalem college of
Engineering, Chennai 600100, India
Martin Leo Manickam J, Professor, Electronics and Communication Engineering, St. Joseph’s college
of Engineering, Chennai 600119, India

CIC2016_paper 73: Conductor Backed CPW Fed Slot Antenna for LTE application
M. Saranya, S. Robinson,Gulfa Rani
Department of ECE, Mount Zion College of Engineering and Technology, Pudukkottai, India

CIC2016_paper 74: Power Optimized and Low Noise Tunable BPF using CMOS Active Inductors
for RF Applications
A. Narayana Kiran, Assistant Professor, Department of ECE, Shri Vishnu Engineering College for
Women, Bhimavaram, India,
P. Akhendra Kumar, Research Scholar, Department of ECE, National Institute of Technology,
Warangal, Warangal, India

CIC2016_paper 76: Butterfly Shaped Microstrip Patch Antenna with Probe Feed for Space
Applications
Deepanshu Kaushal, PG student, Department of Electronics Engineering, Pondicherry University
Pondicherry, India
T. Shanmuganatham, Assistant Professor, Department of Electronics Engineering, Pondicherry
University, Pondicherry, India

CIC2016_paper 78: Primary User Emulation Attack Analysis in Filter Bank Based Spectrum
Sensing Cognitive Radio Networks
Sabiq P.V. & D. Saraswady
Dept. of ECE, Pondicherry Engineering College, Puducherry, India

CIC2016_paper 85: Ant Colony Multicast Routing for Delay Tolerant Networks
E. Haripriya, Assistant Professor of Computer Science, J.K.K.Nataraja College of Arts & Science,
Namakkal, TamilNadu, India
K. R. Valluvan, Professor and Head of ECE, Velalar College of Engineering & Technology, Erode,
TamilNadu, India

CIC2016_paper 87: An Energy-Efficient Key Management Scheme using Trust Model for
Wireless Sensor Network
P. Raja, Associate Professor, Department of ECE, Sri Manakula Vinayagar Engineering College,
Pondicherry, India
E. Karthikeyan, Department of ECE, Sri Manakula Vinayagar Engineering College, Pondicherry, India

CIC2016_paper 88: Power Efficiency analysis of Four State Markov Model based DRX
mechanism with OTSC ratio for Long Term Evolution User Equipment
R. Vassoudevan, Research Scholar, Department of Electronics Engineering, Pondicherry University,
Puducherry, India
P. Samundiswary, Assistant Professor, Department of Electronics Engineering, Pondicherry University
Puducherry, India

   
TRACK 3 Computational Intelligence in Wireless Communication Networks
CIC2016_paper 32: Ensuring Trust and Privacy in Large Carpooling Problems
Amit Kumar Tyagi, Research Scholar, Department of CS&E, Pondicherry Engineering College,
Puducherry-605014, India.
Sreenath Niladhuri, Professor, Department of CS&E, Pondicherry Engineering College, Puducherry-
605014, India.

CIC2016_paper 90: Comparison of Direct Contact Feeding Techniques for Rectangular


Microstrip Patch Antenna for X-Band Applications
R. Kiruthika, II M.Tech.(ECE), Department of Electronics Engineering, Pondicherry University,
Pondicherry
Dr. T. Shanmuganantham, Assistant Professor, Department of Electronics Engineering, Pondicherry
University, Pondicherry

CIC2016_paper 92: New Joint Non Linear Companding and Selective Mapping Method for PAPR
Reduction in OFDM System
Sandeep Dwivedi, M.Tech. student, Department of Electronics Engineering, School of Engineering and
Technology, Pondicherry University, Puducherry-605014
P. Samundiswary, Assistant Professor, Department of Electronics Engineering, School of Engineering
and Technology, Pondicherry University, Puducherry-605014

CIC2016_paper 96: Collaborative Location Based Sleep Scheduling With Load Balancing In
Sensor-Cloud
N. Mahendran, Assistant Professor, Dept of ECE, M. Kumarasamy College of Engineering, Karur,
Tamilnadu

CIC2016_paper 102: A Review on Routing Protocols of Underwater Wireless Sensor Networks


Venkateswarulu Balajivijayan, Assistant Professor, Computer Science and Engineering, Aalim
Muhammed Salegh College of Engineering, Chennai,Tamilnadu, India
Subbu Neduncheliyan, Professor, Computer Science and Engineering, Jaya College of Engineering
and Technology, Chennai, Tamilnadu, India
Ramadass Suguna, Professor, Computer Science and Engineering, SKR Engineering College,
Chennai, Tamilnadu, India

CIC2016_paper 105: A layer based survey on the security issues of cognitive radio networks
Tephillah.S, J.Martin Leo Manickam
ECE, St.Joseph’s College of Engineering, Chennai, India

CIC2016_paper 106: TCOR- Energy Efficient and Power Saving Routing Architecture for Mobile
AD HOC Networks
S. Sargunavathi Associate professor, ECE, Sriram Engineering College Chennai, India
Dr. J.Martin Leo Manickam, Professor, ECE, St Joseph’s Engineering College Chennai-119, India

CIC2016_paper 113: Cluster Head Selection in Cognitive Radio Networks using Fruit Fly
Algorithm
Umadevi K.S., School of Computing Science and Engineering, VIT University, Vellore, India.

CIC2016_paper 116: Reputation Based IDS for Gray hole Attack in MANETs
K. Ganesh Reddy, K. Radharani, K. V. Sravani, K. Mounika, K. Poojitha,
Dept. of Computer Science and Engineering, Shri Vishnu Engineering College for Women,
Bhimavaram, India
P. Santhi Thilagam, Dept. of Computer Science and Engineering, NITK Surathkal, Mangalore, India

CIC2016_paper 120: Customer friendly Fast and Dynamic Handover in Heterogeneous Network
Environment
T. Senthil Kumar, M. A. Bhagyaveni
Department of Electronics and Communication Engineering, College of Engineering, Guindy, Anna
University, Chennai, India

CIC2016_paper 121: Influence of Road side units on routing information in VANET


V. Devarajan, Department of Electronics and Communication Engineering, Pondicherry Engineering
College, Pondicherry, India
Dr. R. Gunasundari, Department of Electronics and Communication Engineering, Pondicherry
Engineering College, Pondicherry, India

CIC2016_paper 138: Type-2 Fuzzy based GPS Satellite Selection algorithm for better
Geometrical Dilution of Precision
Arul Elango G, ECE Department, Pondicherry Engineering College, Puducherry, India
Murukesh C and Rajeswari K, EIE Department, Velammal Engineering College, Chennai, India

CIC2016_paper 146: Dual Microphone Speech Enhancement Utilizing General Kalman Filter in
Mobile Communication
Vijay Kiran Battula, Department of ECE, University College of Engineering Vizianagaram, JNTUK,
Vizianagaram, INDIA
Appala Naidu Gottapu, Department of ECE, University College of Engineering Vizianagaram, JNTUK,
Vizianagaram, INDIA

CIC2016_paper 147: Artificial Bee Colony Algorithm Based Trustworthy Energy Efficient
Routing Protocol
D. Sathian, Department of Computer Science, Pondicherry University
M. Gunashanthi, Department of Computer Science, Pondicherry University
P. Dhavachelvan, Department of Computer Science, Pondicherry University

   
TRACK 4 Computational Methods in Biosignal Processing for Telemedicine
CIC2016_paper 04: Modified Local Gradient Pattern Based Computation Analysis for the
Classification of Mammogram
Narain Ponraj (1), Poongodi (2), Merlin Mercy (3)
Dept of ECE (1,2) Dept of CSE (3) Karunya University (1) KCE (2) SKCT (3) India

CIC2016_paper 09: Study on the use of Multi frequency Bioelectrical Impedance for
Classification of Risk of Dengue fever in Indian Children
Neelamegam Devarasu (1) and Gnanou Florence Sudha (2)
(1,2) Department of Electronics and Communication Engineering,
Pondicherry Engineering College, Puducherry, India.

CIC2016_paper 14: Automatic Assessment of Non-proliferative Diabetic Retinopathy using


Modified ABC Algorithm with Feed Forward Neural Network
Vaishnavi J, Ravi Subban, Anousouya M and Punitha Stephen,
Department of Computer Science, Pondicherry University, India

CIC2016_paper 21: An Effective Liver Cancer Diagnosis through Multi – Temporal Fusion and
Decorrelation Stretching Techniques
B. Lakshmi Priya, S. Joshi Adaikalamarie, K. Jayanthi
Department of ECE, Pondicherry Engineering College, Puducherry

CIC2016_paper 26: Detection of Microcalcifications in Digital Mammograms using Fuzzy Euler


Graph Segmentation method
D. Saraswathi, Department of ECE, Manakula Vinayagar Institute of Technology, Madagadipet,
Puducherry, India
E. Srinivasan, Department of ECE, Pondicherry Engineering College, Puducherry, India

CIC2016_paper 56: Non-Invasive Measurement of Cholesterol Levels Using Eye Image Analysis
S.V. Mahesh Kumar (a,*), R. Gunasundari (a) and N. Ezhilvathani (b)
(a) Department of Electronics and Communication Engineering, Pondicherry Engineering College,
Puducherry, India.
(b) Department of Ophthalmology, Indira Gandhi Medical College and Research Institute, Puducherry,
India.

CIC2016_paper 83: Neural Based Non-Invasive Diagnosis and Classification of Sepsis


R. Sandanalakshmi, Rajagopalan.P, AjaiKaran.B, Rajarajan.G
Dept. of Electronics and Communication Engg., Pondicherry Engineering College, India

CIC2016_paper 93: An Efficient Noise Cancellation Approach suitable for Respiratory Sound
Signals
Prashanth B.S., Department of Electronics & Communication Engineering, Pondicherry Engineering
College, Pillaichavadi, Puducherry,
Jayanthi K., Department of Electronics & Communication Engineering, Pondicherry Engineering
College, Pillaichavadi, Puducherry,

CIC2016_paper 108: Chaotic Cuckoo search and Kapur/Tsallis approach in segmentation of


T.Cruzi from blood smear images
V. Shanjita Lakshmi, Shiffani. G. Tebby, D. Shriranjani, V. Rajinikanth
Department of Electronics and Instrumentation Engineering, St. Joseph’s College of Engineering
OMR, Chennai 600 119, Tamil Nadu, India.

CIC2016_paper 109: Evaluation of Hypotension using Wavelet and Time Frequency Analysis of
Photoplethysmography (PPG) Signal
Remya Raj, Research scholar, Department of ECE, SRM University, Kattankulathur, Chennai, Tamil
Nadu-603203, India
Dr. J. Selvakumar, Asst. Professor(SG), Department of ECE, SRM University, Kattankulathur, Chennai,
Tamil Nadu-603203, India
Dr. M. Anburajan, Dept of Biomedical Engineering, SRM University, Kancheepuram, Tamil Nadu, India

CIC2016_paper 110: Non-invasive cuffless diagnosis of hypertension using dynamic thermal


imaging features
Dr. T. Jayanthi, Dept of Biomedical Engineering, SRM University, Kancheepuram, Tamil Nadu, India
Dr. M. Anburajan, Dept of Biomedical Engineering, SRM University, Kancheepuram, Tamil Nadu, India

CIC2016_paper 134: Classification of abnormal breast neoplasm from mammogram images


Angeline SP Kirubha, Anburajan M
Biomedical Engineering Department, SRM University, Kattankulathur- 603 203, India

CIC2016_paper 140: Detection of Diabetic Retinopathy based on Classification Algorithms


Vaishnavi J, Ravi Subban, Anousouya M and Punitha Stephen,
Department of Computer Science, Pondicherry University, India

CIC2016_paper 33: 3D Ultrasound Imaging For Automated Kidney Stone Detector On FPGA
K. Viswanath, Research Scholar, Member IEEE, Department of ECE, Pondicherry Engineering College
Pondicherry, India
Dr. R. Gunsundari, Professor, Department of ECE, Pondicherry Engineering College, Pondicherry,
India

   
TRACK 5 Computational Intelligence Methodologies
CIC2016_paper 44: Context Aware Web Service Discovery Optimization By Chameleon Inspired
Algorithm
(1) A. Amirthasaravanan, (2) Paul Rodrigues, (3) R. Sudhesh
(1) Department of Information Technology, University College of Engineering, Villupuram, Tamilnadu,
India
(2) DMI Engineering College, Chennai, Tamilnadu, India
(3) Department of Mathematics, BIT Campus, Tiruchirappalli, Tamilnadu, India

CIC2016_paper 77: OntoMD: Ontology based Multidimensional Schema Design Approach


M. Thenmozhi, Assistant Professor, Dept. of CSE, Pondicherry Engineering College, Puducherry, India
P. Ezhilarasi, Assistant Professor, Dept. of CSE, Raak College of Engineering & Technology,
Puducherry, India

CIC2016_paper 95: Various Computing models in Hadoop Eco System along with the
Perspective of Analytics using R and Machine learning
Uma Pavan Kumar Kethavarapu, Research Scholar, Department of Computer Science and
Engineering, Pondicherry Engineering College
Dr. Lakshma Reddy Bhavanam, Principal, BCC College, Bangalore  

CIC2016_paper 114: CRNN: CAPTCHA Recognition using Neural Network


Umadevi K.S., School of Computing Science and Engineering, VIT University, Vellore, India.
Dharmendra Singh Chandel, School of Computing Science and Engineering, VIT University, Vellore,
India.

CIC2016_paper 115: Using Semantic Fields For Generating Research Paper Summaries
A. L. Agasta Adline, Department of Information Technology, Easwari Engineering College, Chennai
600089, TamilNadu, India
Harish M, Department of Computer Science and Engineering, College of Engineering Guindy, Anna
University, Chennai 600025, TamilNadu, India
G.S. Mahalakshmi, Department of Computer Science and Engineering, College of Engineering Guindy,
Anna University, Chennai 600025, TamilNadu, India
S. Sendhilkumar, Department of Computer Science and Engineering, College of Engineering Guindy,
Anna University, Chennai 600025, TamilNadu, India

CIC2016_paper 119: Emotion Recognition from Poems by Maximum Posterior Probability


Sreeja. P.S, Department of Computer Science, CEG, Anna University, Chennai, India
G.S. Mahalakshmi, Department of Computer Science, CEG, Anna University, Chennai, India

CIC2016_paper 133: Fast And Enhanced Algorithms For Dynamic Dataset


R. Kavitha Kumar, Department of Computer science and Engineering, Pondicherry Engineering
College, Pondicherry
J. Jayabharathy, Department of Computer science and Engineering, Pondicherry Engineering College,
Pondicherry

CIC2016_paper 136: Effective OE Position-Wise Mutation Technique for Permutation Encoded


Genetic Algorithm to Solve School Bus Routing Problem: mTSP Approach
R. Lakshmi, Assistant Professor, Department of Computer Science, Pondicherry University,
Puducherry, India

CIC2016_paper 142: Two Run Morphological Analysis for POS Tagging of Untagged Words
Betina Antony J, Dept of Computer Science and Engineering, College of Engineering Guindy, Anna
University, Chennai 600025, Tamil Nadu, India.
G. S. Mahalakshmi, Dept of Computer Science and Engineering, College of Engineering Guindy, Anna
University, Chennai 600025, Tamil Nadu, India
CIC2016_paper 143: Bio-Inspired Schedulers for Public Cloud Environment
Vaithianathan Geetha, Department of Information Technology, Pondicherry Engineering College,
Puducherry-605014.

CIC2016_paper 148: A Scrutiny and Appraisal of Various Optimization Algorithm to Solve Multi-
Objective Nurse Scheduling Problem
M. Rajeswari, Research Scholar, Department of Computer Science, Pondicherry University
S. Jaiganesh, Research Scholar, Department of CS, R&D Centre, Bharathiar University, Coimbatore.
P. Sujatha, Assistant Professor, Department of Computer Science, Pondicherry University
T. Vengattaraman, Assistant Professor, Department of Computer Science, Pondicherry University
P. Dhavachelvan, Professor, Department of Computer Science, Pondicherry University,

   
TRACK 6 Computational Intelligence Applications
CIC2016_paper 2: Information Detection System using 4T Dual Port CAM
V. Bharathi, Associate Professor, ECE Department, Sri Manakula Vinayagar Engineering College,
Puducherry, India
A. Ragasaratha Preethee, ECE Department, Sri Manakula Vinayagar Engineering College,
Puducherry, India

CIC2016_paper 16: VLSI Implementation of Reverse Converter via Parallel Prefix Adder for
Signed Integers
(1) P. Rajagopalan , Puducherry – 605 014, India
(2) A. V. Ananthalakshmi, Assistant Professor, Pondicherry Engineering College, Puducherry – 605
014, India.

CIC2016_paper 63: Optimal Tuning of Coordinated Controller using BBO Algorithm for Stability
Enhancement in Power System
Gowrishankar Kasilingam (1*), Jagadeesh Pasipuleti (2)
(1*) Research Scholar, Department of Electrical Power, Universiti Tenaga Nasional (UNITEN),
Malaysia
(1*) Associate Professor, Department of ECE, Rajiv Gandhi College of Engg. & Tech.,Pondicherry,
INDIA
(2) Associate Professor, Department of Electrical Power, Universiti Tenaga Nasional (UNITEN),
Malaysia

CIC2016_paper 80: Smart Phone Based Speed Breaker Early Warning System
Viswanath K. Reddy (1), and Nagesh B. S (2)
(1) Assistant Professor in the Department of Electronic and Communication Engineering in M.S.
Ramaiah University of Applied Sciences, Bangalore,
(2) Robert Bosch Engineering and Services India Pvt.Ltd, Bangalore

CIC2016_paper 82: B+ Indexing for Biometric Authentication using Fused Multimodal Biometric
Jagadiswary. D, Electronics and Communication Engineering, Pondicherry Engineering College,
Puducherry, India
Dr. D. Saraswady, Electronics and Communication Engineering, Pondicherry Engineering College,
Puducherry, India

CIC2016_paper 118: Smart Logistics for Pharmaceutical Industry based on Internet of Things
(IoT)
M. Pachayappan (1), Nelavala Rajesh (2), G. Saravanan (3)
(1) Assistant Professor, Department of International Business, School of Management, Pondicherry
University , Puducherry – 605 014, India
(2) Assistant Professor, Department of Electronic and Communication, Arunai College of Engineering,
Tiruvannamalai – 606601, India
(3) Assistant Professor, Department of Electronic and Communication, Valliammai Engineering
College, Kattankulathur - 603 203, India

CIC2016_paper 123: Determination of Photovoltaic Cell Model Parameters from One-diode


Model using Firefly Algorithm coded in Python
G. Kanimozhi, Department of Physics, Pondicherry Engineering College, Pondicherry, India
R. Rajathy, Department of EEE, Pondicherry Engineering College, Pondicherry, India
Harish Kumar, Department of Physics, Pondicherry Engineering College, Pondicherry, India.

CIC2016_paper 124: Estimation of maximum power point in photovoltaic cell based on


parameters identification approach by Ant Lion Optimizer implemented in IPython
G. Kanimozhi, Department of Physics, Pondicherry Engineering College, Pondicherry, India
R. Rajathy, Department of EEE, Pondicherry Engineering College, Pondicherry, India
Harish Kumar, Department of Physics, Pondicherry Engineering College, Pondicherry, India.

CIC2016_paper 126: Smart Phone Keylogger Detection Technique Using Support Vector
Machine
S. Geetha, Research Scholar, Dept. of Banking Technology, Pondicherry University
G. Shanmugasundaram, Assistant Professor, Department of IT, SMVEC
BharathKumar V., UG Students, Department of IT, SMVEC
V. Prasanna Venkatesan, Associate Professor, Dept. of Banking Technology, Pondicherry University

CIC2016_paper 145: Standby Mode Subthreshold Leakage Power Analysis in Digital Circuits
with Variations in Temperature
Amuthavalli. G, Department of ECE, Pondicherry Engineering College, Puducherry, India
Gunasundari. R, Department of ECE, Pondicherry Engineering College, Puducherry, India

 
 
 
 
 
TRACK 1 
Computational Intelligence in 
Signal & Image Processing 
 
 
 
 
 

Analytical Framework for Identification of Outliers for Unscripted Video

Madhu Chandra G, Research Scholar, Dept. of ECE, MS Engineering College, Bangalore, VTU, Belagavi, India (madhu.guru1984@gmail.com)
Sreerama Reddy G.M, Professor & HOD, Dept. of ECE, CBIT, Kolar, India

Abstract— The area of video analytics has recently gained pace with the advancement of distributed mining
methodologies. For a given scene there are various forms of events, which are quite unpredictable in nature, especially
for unscripted data analysis. Such unpredictable events with respect to context are termed outliers. After reviewing the
recent research contributions in this regard, we find that existing video analytics are still at an early stage in
understanding such outliers and applying mining operations to them. The presented manuscript introduces a technique
which performs precise detection of outliers with better computational performance. The technique extracts features
using a dictionary-based approach and extracts contextual data of an object using an unsupervised learning technique.
The outcome of the study was compared with an existing system and shows better performance with respect to the
accuracy factor and reduced computational cost.

Keywords- Video Analytics; Video Mining; Data Mining; Outlier Detection; Multimedia Analytics

I. INTRODUCTION
A typical video consists of multiple forms of information which are of interest for any data mining operation. There are
three types of information within a video: i) low-level information (e.g. texture, color, shape), ii) semantic information
(spatial factors such as characters, location and objects, and temporal factors such as video sequences and the movement
of objects), and iii) syntactic information (salient objects, their relative positions, timing attributes, etc.). Another
significant factor that plays a crucial role in designing video analytics is modality. There are three types of video
modalities: i) auditory (music, speech, surrounding sounds), ii) visual (the scene of the video), and iii) textual (text
content of the video). The presence of outliers is the most challenging aspect to identify owing to the lack of predefined
knowledge about the scene. This paper presents a technique for developing a new video analytical model that can
perform cost-effective mining to solve the problem of outlier detection in an input video. Section II discusses the related
work, followed by a brief discussion of problem identification in Section III. The proposed system is discussed in
Section IV, followed by the research methodology in Section V. Finally, the result discussion is given in Section VI,
followed by the conclusion in Section VII.

II. RELATED WORK
Significant work on video analytics is found in the study of Aradhya and Pavithra [1], who introduced a technique that
recognizes text in multi-language video. The procedure uses k-means clustering, the wavelet transform, and Gabor
filters in order to perform video indexing. Although the study is not directly an application of video analytics, its
approach is highly supportive of video analysis. A recent study on video analysis was carried out by Angelov et al. [2],
aimed at real-time analysis of video. The system uses iterative density estimation and local clustering average
estimation, along with the Lucas-Kanade and RANSAC mechanisms, to improve object detection from an airborne
vehicle. Ayed et al. [3] used the MapReduce model in order to perform mining on a video data set. Aryanfar et al. [4]
applied conventional classifiers (support vector machine and naive Bayes) in order to recognize human activity. The
outcome of the study was compared with existing approaches and achieved around a 90% recognition rate; the results
were validated using visual outcomes, model time, and the processing time taken by MapReduce. Cai et al. [5]
discussed video analysis in order to gauge the engagement of users, using a statistical approach to perform video
prediction. Xu et al. [6] introduced a strategy that enables mining operations on 3D video data over a cloud
environment. The method additionally applies cryptographic processing over the video data, and the outcome was
assessed in terms of the time required for encrypted and original video coding. Chen et al. [7] presented a model called
PeakVizor, which aims to extract patterns from massive open online courses. The study uses a glyph representation
strategy in order to perform data abstraction as well as peak detection from MOOC (Massive Open Online Courses)
recordings, and it also helps in establishing relationships among different online users. A study on video surveillance
was likewise carried out by Mao et al. [8]. The researchers presented a novel traceability mechanism built over a video
surveillance framework.


The method also builds a model using trajectory generation for the purpose of traceability in a food system. Kim et al.
[9] developed a method that can identify outlying behavior of users. The analytical module designed by the researchers
takes the movements of users captured from a webcam as input and adopts a multimodal data-collection feature to
gather data, followed by data refinement. Hence, the method can be effective for outlier classification based on facial
expressions and log data. Shao and Fu [10] discussed video analysis that uses self-taught and multi-view learning
mechanisms; the study uses three distinct models, i.e. a self-learning model, a learning model, and a feature-extraction
model. The work of Madhu and Sreerama [11] gives the concepts and the research direction for video analysis.

Hence, it is seen that multiple video mining techniques have been introduced by researchers. However, all the
above-mentioned techniques have associated limitations as well, e.g. limited support of existing systems for future
distributed video mining, little work towards outlier detection, little standardized work with benchmarked outcomes,
and little research on event mining.

III. PROBLEM IDENTIFICATION
An automatic video surveillance system can achieve good identification in all situations, but the classification of
different behavioral events remains an open issue. Typically, there are different types of visual sensors that capture
event behavior in a monitoring unit. The normal signal can be taken for feature extraction, which can then be used for
identifying abnormal behavior. Applications of automatic identification of anomalous events are very helpful for
minimizing the amount of superfluous video data to be processed. The drawback, however, is that there are no existing
works concentrating on in-depth video abnormality detection and its classification.

IV. PROPOSED SYSTEM
The significant objective of the proposal is to find the video outliers. The schematic representation of the proposed
stages is given in Fig. 1. In the figure, two video outliers of an event are present, i.e. i) primary and ii) secondary
outliers. The primary outlier is an important event of the video that is particularly different from its spatial and temporal
neighbor events. The secondary outlier arises where multiple events exist. A review of existing video analysis shows
that the greater part of the research is centered on addressing primary outliers and very little on secondary outliers. An
example of a primary outlier is a video containing an object with abnormal velocity or structure, while a traffic system
is an example of a secondary outlier. Consequently, a tertiary outlier is the worst case, i.e. investigating events that have
an insignificant feasibility of occurrence in a video.

Figure 1: Schema adopted in outlier detection

V. RESEARCH METHODOLOGY
The design and development of the proposed system is carried out using an analytical modeling approach. The system
defines the term outlier as any unexpected event captured in video frames. The developed architecture of the proposed
system is shown in Fig. 2.

Figure 2: Schematic architecture of the proposed system

The main technique used here is aimed at identifying the abnormality in the given video. In the beginning, a histogram
is developed with the goal of extracting the features that have more complexity in their pattern. In the next step,
probability theory is applied to evaluate the abnormality in the data. Hence, the identification of abnormality integrates
both spatial and temporal data factors. With this methodology, better identification of abnormality can be achieved (see
the illustrative sketch after the list below). To achieve this, the following steps need to be followed.
• A mathematical model needs to be developed for the extraction of video patterns.


• The framework will also utilize a bag-of-visual-words model for video pattern extraction.
• An empirical model needs to be developed to minimize the errors during vector-sparsity-based image reconstruction.
• An identification model needs to be developed to enhance the characterization of video abnormalities or outliers.
• Different scenarios need to be considered for performance analysis of the method, with both real-time and other video data sets.
• The effectiveness of the proposed method needs to be verified as true or false.
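The following is a minimal sketch, not the authors' implementation, of the kind of pipeline the methodology above
describes: patch-wise histogram features are computed over space and time, a simple probability model is fitted on
normal training frames, and low-likelihood patches of a test frame are flagged as outliers. All function and parameter
names (patch_histograms, fit_normal_model, outlier_mask, the patch size and the likelihood threshold) are illustrative
assumptions, not taken from the paper.

```python
import numpy as np

def patch_histograms(frame, patch=16, bins=16):
    """Split a grayscale frame into non-overlapping patches and return
    one normalized intensity histogram per patch (rows = patches)."""
    h, w = frame.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = frame[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=bins, range=(0, 255))
            feats.append(hist / max(hist.sum(), 1))
    return np.array(feats)

def fit_normal_model(training_frames, patch=16, bins=16):
    """Fit an independent Gaussian per histogram bin using only
    'normal' training frames (spatial and temporal pooling)."""
    feats = np.vstack([patch_histograms(f, patch, bins) for f in training_frames])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def outlier_mask(test_frame, mean, std, patch=16, bins=16, thresh=-40.0):
    """Log-likelihood of each patch under the normal model;
    patches below the threshold are flagged as outliers."""
    feats = patch_histograms(test_frame, patch, bins)
    loglik = -0.5 * (((feats - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2)).sum(axis=1)
    return loglik < thresh

# Hypothetical usage with random data standing in for video frames.
rng = np.random.default_rng(0)
train = [rng.integers(0, 255, (128, 128)).astype(float) for _ in range(10)]
mean, std = fit_normal_model(train)
flags = outlier_mask(rng.integers(0, 255, (128, 128)).astype(float), mean, std)
print(f"{flags.sum()} of {flags.size} patches flagged as outliers")
```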
detection while the approach used can be said to be a cost
VI. RESULT DISCUSSION
The proposed technique was evaluated on the UCSD pedestrian dataset. Our analysis was done in two parts, i.e.
training on a set of videos and then, given a query consisting of a new test video, identifying the outlier objects in
motion. We discuss the result analysis with respect to both visual and numerical outcomes.

A. Visual Outcomes Analysis
Fig. 3 shows the three visual outcomes. The first visual outcome is for all objects with a similar context, i.e.
pedestrians. The second and third visual outcomes highlight the outlier object, which is basically a non-pedestrian. We
have tested other test images and observed similar behavior of the proposed mechanism.

Figure 3: Visual outcomes of the study

B. Numerical Outcome Analysis
For the purpose of better benchmarking, we compare the numerical outcomes of the proposed system with the work of
Cong et al. [12] with respect to standard performance metrics, e.g. recall rate, precision rate, specificity, and F1-score
(Table 1).

Table 1: Comparative numerical outcome

Technique   | Recall  | Precision | Specificity | F1-Score
Proposed    | 0.767   | 0.99862   | 0.99925     | 0.76282
Cong [12]   | 0.34816 | 0.91765   | 0.86722     | 0.5162

The similarity between the proposed system and Cong et al. [12] is the use of a similar dataset, a similar goal, and the
adoption of context in outlier detection. However, Cong et al. [12] use a method called dynamic patch grouping, in
which the authors use k-means clustering, so that local patterns are not considered. This degrades the performance
metrics to a significant level. Moreover, Cong et al. [12] have not incorporated an optimization technique as we have;
the proposed system uses a simple optimization technique discussed by Li and Ngom [13]. The processing time for
identification of an outlier is approximately 0.0265 seconds on a Core i3 processor with 4 GB RAM, and the proposed
system does not store any values at run time, leading to very low time complexity. Hence, the outcome of the proposed
study is good for outlier detection, and the approach used can be said to be a cost-effective one with respect to the
computational complexity factor.
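As a companion to Table 1, the short sketch below shows how the four reported metrics are conventionally computed
from confusion counts (per patch or per frame). This is a generic illustration of the metric definitions, not code from the
paper, and the example counts are made up.

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion counts."""
    recall = tp / (tp + fn)          # true positive rate
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)     # true negative rate
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, specificity, f1

# Hypothetical counts purely for illustration.
r, p, s, f1 = detection_metrics(tp=76, fp=2, tn=900, fn=22)
print(f"recall={r:.3f} precision={p:.3f} specificity={s:.3f} F1={f1:.3f}")
```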

VII. CONCLUSION
This model gives genuine abnormality identification for the desired video. With the probability mechanism used in the
proposed study, the objectives achieved are correct and accurate. The proposed framework has also achieved an
accuracy of 95% in video abnormality detection in different scenarios.

REFERENCES
[1] V. N. M. Aradhya and M. S. Pavithra, "A comprehensive of transforms, Gabor filter and k-means clustering for text detection in images and video", Applied Computing and Informatics, 2014.
[2] P. Angelov, P. S-Tehran, and C. Clarke, "AURORA: autonomous real-time on-board video analytics", Neural Computing & Applications, 2016.
[3] A. B. Ayed, M. B. Halima, and A. M. Alimi, "MapReduce Based Text Detection in Big Data Natural Scene Videos", Procedia Computer Science, Vol. 53, pp. 216-223, 2015.
[4] A. Aryanfar, R. Yaakob, A. A. Halin, N. Sulaiman, K. A. Kasmiran, and L. Mohammadpour, "Multi-View Human Action Recognition Using Wavelet Data Reduction and Multi-Class Classification", Procedia Computer Science, Vol. 62, pp. 585-592, 2015.
[5] P. Vollucci, B. Le, J. Lai, W. Cai, Y. Ye, G. Necula, and D. Wroblewski, "Online Video Data Analytics", 2015.
[6] T. Xu, W. Xiang, Q. Guo, and L. Mo, "Mining cloud 3D video data for interactive video services", Mobile Networks and Applications, Vol. 20, No. 3, pp. 320-327, 2015.
[7] Q. Chen, Y. Chen, D. Liu, C. Shi, Y. Wu, and H. Qu, "PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses", 2015.
[8] B. Mao, J. He, J. Cao, S. W. Bigger, and T. Vasiljevic, "A framework for food traceability information extraction based on a video surveillance system", Procedia Computer Science, Vol. 55, pp. 1285-1292, 2015.
[9] Y. B. Kim, S. J. Kang, Sang Hyeok Lee, Jang Young Jung, Hyeong Ryeol Kam, Jung Lee, Young Sun Kim, Joonsoo Lee, and Chang Hun Kim, "Efficiently detecting outlying behavior in video-game players", PeerJ 3 (2015): e1502.
[10] M. Shao and Y. Fu, "Deeply Self-Taught Multi-View Video Analytics Machine for Situation Awareness", AFA Cyber Workshop, White Paper, 2015.


[11] Madhu Chandra G. and Sreerama Reddy G. M., "Insights to Video Analytic Modelling Approach with Future Line of Research", International Journal of Computer Applications, 147(7):15-24, August 2016.
[12] Y. Cong, J. Yuan, and Y. Tang, "Video Anomaly Search in Crowded Scenes via Spatio-temporal Motion Context", IEEE Transactions on Information Forensics and Security, Vol. 8, Iss. 10, 2013.
[13] Y. Li and A. Ngom, "The non-negative matrix factorization toolbox for biological data mining", Source Code for Biology and Medicine, Vol. 8, Iss. 10, 2013.


Image Steganography With Huffman Encoding
V. Navya, Dept. of ECE, SVEC Tirupati, A.P, vuppalapatinavya@gmail.com
T.V.S. Gowtham Prasad, Dept. of ECE, SVEC Tirupati, A.P, tvsgowtham@gmail.com
C. Maheswari, Dept. of ECE, SVEC Tirupati, A.P, maheswari937@gmail.com

Abstract--Steganography is the technique used for concealing messages inside appropriate cover files such as image, video and audio files. The main objective of steganography is to share information securely, in such a way that the required information is not visible to the viewer. The information to be transmitted over the envelop image is converted into a stream of 0's and 1's by Huffman coding. This converted code is embedded inside the envelop image by varying the Least Significant Bit (LSB) of each of the pixel values of the envelop image. The information can be decrypted using the Huffman Table, which is embedded in the envelop image itself, so that the stego image appears as impartial information to the viewer. The algorithm has elevated capacity and superior invisibility through Huffman Encoding. Furthermore, the Peak Signal to Noise Ratio (PSNR) of the stego image with respect to the envelop image shows better results, and agreeable protection is maintained since the covert information cannot be extracted without knowing the decoding rules and the Huffman Table.

Keywords--Steganography; Huffman Encoding; LSB

I. INTRODUCTION
Steganography is mainly used to share information securely such that the true message is not perceptible to the viewer. Hiding covert information in an image is called image steganography.

Figure1: Image Steganography

Consider a colour image having pixel intensities in the range 0-255. The image proportions, i.e. height and width, are to be considered. The information which needs to be sent is to be concealed within that image. To hide the covert information, the pixels of the image are misrepresented so as to be unviewable to others, and the changes applied to the image are indefinable. The image used for embedding the information is called the envelop image, and it becomes the stego image after trouncing the secret information [2].

Compression is applied to the covert information to enhance the hiding capability and protection. The general standard of information compression algorithms on content files is to transform a sequence of characters into a new sequence which contains the same information but with a length as small as possible. A proficient information compression algorithm is elected according to scales such as compression size, compression ratio, processing time or speed, and entropy [4].

Steganography techniques can be used in the spatial domain or in the frequency domain. Spatial domain techniques are easier than frequency domain techniques, and many steganographic methods have been proposed in the spatial domain. The basic steganography technique in the spatial domain is LSB replacement, where the least significant bit in each pixel of the image is replaced with a covert information bit [6].

The projected approach uses Huffman coding for information compression, and the compressed information is finally hidden within an image using the basic steganography technique, i.e. LSB Replacement.

II. PROPOSED METHOD

A. Embedding:
A spatial domain steganographic method based on Huffman encoding is used for hiding an outsized amount of information with elevated protection, superior invisibility and no loss of covert information [1,3]. The main purpose here is to develop a method which will provide improved protection to the secret information without compromising on the quality of the stego image. Our algorithm has three main parts, as shown in the figure.
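As a concrete illustration of the Huffman coding step described above, the following is a minimal Python sketch (not the authors' implementation) of how a Huffman table and the corresponding bit stream for the covert text might be built using the standard heapq module; the message text is only a placeholder.

import heapq
from collections import Counter

def build_huffman_table(text):
    """Build a Huffman code table (symbol -> bit string) for the given text."""
    freq = Counter(text)
    # Each heap entry: (frequency, tie-breaker, [(symbol, code), ...])
    heap = [(f, i, [(sym, "")]) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: single distinct symbol
        return {heap[0][2][0][0]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # prefix '0' for the left subtree and '1' for the right subtree
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return dict(heap[0][2])

def huffman_encode(text, table):
    """Concatenate the code words to obtain the embeddable bit stream."""
    return "".join(table[ch] for ch in text)

if __name__ == "__main__":
    msg = "secret message to hide"
    table = build_huffman_table(msg)
    bits = huffman_encode(msg, table)
    print(len(msg) * 8, "raw bits ->", len(bits), "Huffman-coded bits")

The same table would later be embedded alongside the bit stream so that the receiver can decode the message, as described in the embedding and extraction algorithms below.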


Figure2: Text embedding in Envelop image

First, it embeds the Huffman encoded bit stream of the secret information into the envelop image. Second, it embeds the size of the encoded bit stream into the envelop image. Third, it also embeds the Huffman table corresponding to the covert information into the envelop image [7]. For embedding the covert text, the LSB Substitution Method is used because the alteration in an image is not visible to the uncovered eye.

B. Embedding Algorithm:
Input: An M X N Envelop image and covert information.
Output: An M X N stego-image.
Step-1: Read both the Envelop image and the Covert information.
Step-2: Compute the size of the Covert information. The size of the Covert information multiplied by 8 (for 8 bit images) should be less than the size of the Envelop image.
Step-3: Obtain the Huffman table of the covert information/image.
Step-4: Find the Huffman encoded binary bit stream of the secret message by applying the Huffman encoding technique using the Huffman table obtained in Step-3.
Step-5: Calculate the size of the Huffman encoded bit stream.
Step-6: Store the size found in Step-5 and represent it in binary form.
Step-7: Store it in the Least Significant Bit (LSB) of the binary form of the first 16 pixels of the envelop image.
Step-8: Write the new Stego Image to the disk.
Step-9: End.

C. Extraction:
The content can be extracted from the stego image by taking the least significant bit from each pixel of the stego image. The binary numbers are then given for Huffman decoding; the decoded binary bits form the text information.

Figure2.2: Extraction of text

D. Extraction Algorithm:
Input: An M X N stego-image.
Output: The covert information.
Step-1: Read the Stego Image and extract the size of the Huffman encoded bit stream from the first 16 pixels.
Step-2: Using the size found in Step-1, extract the Huffman encoded bit stream by extracting the LSB of pixels from the 17th pixel onwards.
Step-3: Construct the Huffman table by extracting the LSB of pixels.
Step-4: Decode it by using the Huffman table obtained in Step-3 to extract the covert information from the stego image.
Step-5: End.

III. RESULTS AND DISCUSSION
Image Steganography can be performed by considering envelop files as images. Let us consider the envelop images Lenna, Cameraman, Boat and Baby [5]. The data can be hidden inside those envelop images, and the obtained image is the 'Stego Image'. The data can be extracted by taking the stego image as input.

Table I: Comparison between original and stego images with histograms (original images and their histograms versus stego images and their histograms)
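A minimal sketch of the LSB replacement used in the embedding and extraction algorithms above is shown here (assuming numpy, a grayscale envelop image, and the simplification that only a 16-bit length header and the bit stream are embedded; the Huffman-table embedding of Step-3 is omitted). The bit string could be the output of the Huffman sketch given earlier.

import numpy as np

def embed_bits(cover, bits):
    """Replace the LSBs of successive pixels with the given bit string.

    The bit-stream length is stored first as a 16-bit header (first 16 pixels),
    mirroring Step-6/Step-7 of the embedding algorithm above.
    """
    flat = cover.flatten().astype(np.uint8)
    header = format(len(bits), "016b")           # 16-bit length field
    payload = header + bits
    if len(payload) > flat.size:
        raise ValueError("cover image too small for this message")
    for i, b in enumerate(payload):
        flat[i] = (flat[i] & 0xFE) | int(b)      # overwrite the least significant bit
    return flat.reshape(cover.shape)

def extract_bits(stego):
    """Read the 16-bit length header, then recover that many LSBs."""
    flat = stego.flatten()
    header = "".join(str(p & 1) for p in flat[:16])
    n = int(header, 2)
    return "".join(str(p & 1) for p in flat[16:16 + n])

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in envelop image
    bits = "1011001110001111"                                    # e.g. Huffman-coded stream
    stego = embed_bits(cover, bits)
    assert extract_bits(stego) == bits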


The PSNR can be compared for different texts using Huffman Coding and Arithmetic Coding. Here, the different envelop images Lenna, Cameraman and Boat are considered.

Lenna
         Huffman Coding                          Arithmetic Coding
Size     Encoded Bit Size   MSE      PSNR        Size     Encoded Bit Size   MSE      PSNR
4.02KB   16910              0.0093   68.4892     4.02KB   19760              0.0112   67.6366
10KB     25835              0.0139   66.7415     10KB     49758              0.0245   63.3225
50KB     210087             0.0812   59.0713     50KB     245654             0.0844   58.9003

Cameraman
         Huffman Coding                          Arithmetic Coding
Size     Encoded Bit Size   MSE      PSNR        Size     Encoded Bit Size   MSE      PSNR
4.02KB   16910              0.0115   67.5424     4.02KB   19760              0.0140   66.6794
10KB     25835              0.0172   68.4442     10KB     49758              0.0377   62.4064
50KB     210087             0.0791   59.1837     50KB     245654             0.0816   59.0474

Boat
         Huffman Coding                          Arithmetic Coding
Size     Encoded Bit Size   MSE      PSNR        Size     Encoded Bit Size   MSE      PSNR
4.02KB   16910              0.0093   68.4442     4.02KB   19760              0.0114   67.5858
10KB     25835              0.0141   66.6768     10KB     49758              0.0309   63.2591
50KB     210087             0.0813   59.0642     50KB     245654             0.0843   58.9081
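The MSE and PSNR figures reported in the tables above are conventionally computed as follows for 8-bit cover/stego pairs; this is a generic sketch, not the authors' code.

import numpy as np

def mse(cover, stego):
    """Mean squared error between the cover and stego images."""
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(cover, stego, peak=255.0):
    """Peak Signal to Noise Ratio in dB for 8-bit images."""
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)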
A. Plot Representation
The graph is plotted for Huffman Coding and Arithmetic Coding on PSNR and text size for the different images. We consider text size on the X-axis and PSNR on the Y-axis. The resultant plots are as follows:
-----: Huffman Coding
-----: Arithmetic Coding
Lenna: (plot of PSNR versus text size)
Cameraman: (plot of PSNR versus text size)
Boat: (plot of PSNR versus text size)

IV. CONCLUSION
The proposed image steganography is based on Huffman Encoding and Arithmetic Coding. The Huffman algorithm improves the security and quality of the stego image and is better in comparison with other existing algorithms. According to the results, the stego images are identical to the cover images and it is difficult to differentiate between them. We have achieved 100% recovery of the secret image, which means the original and extracted secret images are identical. The stego image also contains the size of the text which is embedded using the compression algorithms, so that the stego image itself is standalone information to the decoder. The decoder only needs to know the extraction algorithm to extract the secret message.

REFERENCES

[1] Rig Das, Themrichon Tuithung, "A Novel Steganography Method for Image Based on Huffman Encoding", ISSN: 978-1-4577-0748-3, Vol. 3, June 2012.
[2] Atallah M. Al-Shatnawi, "A New Method in Image Steganography with Improved Image Quality", Applied Mathematical Sciences, Vol. 6, 2012.


[3] A. Nag, S. Biswas, D. Sarkar, P. P. Sarkar, "A Novel Technique for Image Steganography Based on Huffman Encoding", International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010.
[4] Abbas Cheddad, Joan Condell, Kevin Curran, Paul McKevitt, "Digital Image Steganography: Survey and Analysis of Current Methods", ELSEVIER Journal on Signal Processing 90 (2010) 727-752.
[5] Gonzalez, R.C. and Woods, R.E., Digital Image Processing using MATLAB, Pearson Education, India, 2006.
[6] Alkhrais Habes, "4 Least Significant Bits Information Hiding Implementation and Analysis", ICGST Int. Conf. on Graphics, Vision and Image Processing (GVIP-05), Cairo, Egypt, 2005.
[7] Neil F. Johnson, Sushil Jajodia, "Exploring Steganography: Seeing the Unseen", IEEE paper of February 1998.

AUTHORS PROFILE:

Ms. V. Navya, Assistant Professor, Dept of ECE, Sree Vidyanikethan Engineering College, A. Rangampet, Tirupati, received B.Tech in Electronics and Communication Engineering from YITS, Tirupati. Interesting Areas: Digital Signal Processing, Image Processing, Embedded Systems, Digital Communications.

Mr. T V S Gowtham Prasad, Assistant Professor, Dept of ECE, Sree Vidyanikethan Engineering College, A. Rangampet, Tirupati, received B.Tech in Electronics and Communication Engineering from SVEC, A. Rangampet, Tirupati and M.Tech from S V University College of Engineering, Tirupati. He is pursuing a Ph.D from JNTU, Anantapur in the field of Image Processing as ECE faculty. Interesting Areas are Digital Signal Processing, Array Signal Processing, Image Processing, Video Surveillance, Embedded Systems, Digital Communications.

C. Maheswari, Assistant Professor, Dept of ECE, Sree Vidyanikethan Engineering College, A. Rangampet, Tirupati, received M.Tech in DECS from JNTUA College of Engineering, Pulivendula, and B.Tech in Electronics and Communication Engineering from YSR Engineering College of Yogi Vemana University, Proddatur, Kadapa district. Interesting Areas: Image Processing, Digital Signal Processing, ERTS.


Secure Image Transmission Based On Pixel Integration Technique
A.D. Senthil Kumar, Department of Instrumentation Engineering, Annamalai University, Chidambaram, India, e-mail: dpsendil@gmail.com
T.S. Anandhi, Department of Instrumentation Engineering, Annamalai University, Chidambaram, India, e-mail: ans.instrus@gmail.com

Abstract-- A new image encryption algorithm based on pixel integration is proposed in this paper. This paper presents the implementation of security for multiple images, enabling the selection of one of several images displayed simultaneously with a unique security key for each image. The process involves encrypting the images with Elliptic Curve Cryptography, followed by applying block based interleaving and integrating the image matrix using the pixel based integration technique. Decrypting an image is done with the key specifically generated for that image from the multiple images. The proposed method provides a high level of security for interactive information requirements in the fields of military, confidential, aerospace, financial and economic, national security and so on. The algorithm is evaluated by calculating the entropy and correlation values.

Keywords—Image Encryption; Elliptic Curve Cryptography; Pixel Grouping; Entropy; Correlation; Pixel Integration; Security; Digital Communication

I. INTRODUCTION
Image security [1] is the main concern of this paper. Encryption is a common technique to increase the security of images. Image and video encryption have vast applications. With the emerging growth in digital technologies and the improvement of computer networks, a large amount of digital data is being transferred. A large part of the digital data transferred over networks is either private or confidential information. In today's digital communication [2]-[3], the exchange of data over networks presents certain risk factors, which requires the existence of appropriate security measures. For example, images are transmitted and can be copied or saved during their transmission without loss of image quality. During an exchange, images can be hacked at the time of digital information storage and reproduced illegally. It is therefore necessary to develop software for effective protection of transferred data against arbitrary interference. Data encryption with image merging is very often the only effective way to meet these requirements.

Most of the security algorithms specifically designed to encrypt digital data were proposed in the mid-1990s. Encryption and decryption of images can be done by different encryption algorithms. Most of the traditional [4] public key standard algorithms, such as Rivest Shamir Adleman (RSA), the private key encryption standard (DES), the family of elliptic-curve-based encryption (ECC), as well as the international data encryption algorithm (IDEA), can be classified into several categories such as value transformation, pixel position permutation, and chaotic systems [6]. There are two groups of cryptographic [7] image encryption algorithms: (a) chaos-based selective [8] and (b) non-chaos selective or non-selective methods. Admittedly, no particular encryption algorithm satisfies all image type requirements. An encryption algorithm should be strong against all types of attacks, including statistical and brute force attacks. Different security algorithms [5] have been used to provide the required protection, and many encryption algorithms have been proposed to enhance image security.

In this paper, the Elliptic Curve Cryptography [9], [10] algorithm is applied to a number of input images to generate encrypted images, combined with interleaving and pixel based integration to enhance security. This is considered a complex process, in particular because the information is two-dimensional, large in size, and redundant in nature. In our proposed method, a set of nine input images is resized to 64*64, divided into sub-blocks of size 32*32, and applied to the encryption algorithm. The decryption process is done by selection of the key to retrieve the original image. An improved security level of the encrypted images can be achieved by increasing the image entropy value and decreasing the high correlation among pixels. The results for entropy and correlation of the encrypted images are calculated and evaluated for various image inputs.

II. RELATED WORK
Guiliang Zhu et al. [11] proposed an image encryption algorithm based on pixels. First, the image pixels are scrambled; then, through a watermarking method, the difficulty of decoding is increased. At last, a camouflaged image is chosen to mask the pixels of the true image, getting the final encrypted image. The key parameters are encrypted by Elliptic Curve Cryptography (ECC).

Laiphrakpam Dolendro Singh et al. [12] proposed image encryption using Elliptic Curve Cryptography based on pixel grouping to reduce the number of computations. The groups of pixels are transformed into big integers, keeping in mind that they should not exceed the 'p' value, which is one of the parameters of the elliptic curve equation over a finite field. These big integer values are paired and given as input, denoted by 'Pm', to the ECC operation. This operation allows the mapping operation to be ignored and removes the need to share a mapping table between sender and receiver.


Ahmed Bashir Abugharsa et al. [13] proposed an encryption algorithm based on the rotation of the faces of a Magic Cube. This process involves dividing the original image into six sub-images; these sub-images are further divided into small blocks and attached to the faces of magic cubes.

Rogelio Hasimoto Beltran et al. [14] proposed an interleaving scheme where the de-correlation process is applied not only at a block level, but also at a coefficient or pixel level in the compressed domain.

Frank Dellaert et al. [15] proposed an image-based tracking algorithm, which relies on the selective integration of a small subset of pixels that contain a lot of information about the state variables to be estimated.

III. PROPOSED METHOD

Figure. 1. ECC Algorithm with Pixel Integration

A. Elliptic Curve Cryptography Algorithm
Elliptic curve cryptography (ECC) [3] is a public key encryption technique based on elliptic curve theory that can be used to create efficient cryptographic keys which are smaller and faster. The keys are generated through the properties of the elliptic curve equation instead of the traditional method of generation as the product of very large prime numbers.

B. Image Encryption
The encryption procedure is based on encrypting the image intensity and thus converting it into a new intensity. This new intensity is decrypted to obtain the original intensity.
1. Read the image and find the intensity I from the image intensity matrix.
2. Convert the intensity of the image I into an elliptic curve point E using Mapping-1.
3. The elliptic curve point from Mapping-1 is encrypted to a new point (E').
4. The new point (E') is converted to a corresponding integer M, using Mapping-2.
5. This integer M is used to calculate the new encrypted intensity I'.

C. Image Decryption
1. The decryption is done by the reverse process of encryption.
2. The encrypted image intensity I' is read from the received files.
3. The intensity I' parameter is used to calculate the integer M.
4. The integer M is converted to the encrypted elliptic curve point E', using reverse Mapping-2.
5. The encrypted elliptic curve point E' is decrypted to get the original point E.
6. By reverse Mapping-1, the original intensity I is obtained.

D. Image Interleaving
The encrypted image from ECC is divided into 4*4 sub-blocks; for better security the block [16] size should be small. These sub-block images are interleaved column wise. The number of sub-block pixel values in each block is fixed for a given interleaver [17]. The interleaver operation on a set of image pixel values is independent of its operation on all other sets of symbols. Block based interleaving is applied by selecting the initial location randomly.

In the 2D type of interleaver [16], [17], the idea of extending the 1D prime interleaver into two dimensions is utilized. The concept of the proposed 2D prime block based interleaver is as follows: consider the two dimensional interleaving of an nr by nc matrix. Firstly, we divide the interleaving scheme into column-wise interleaving and row-wise interleaving. Secondly, we assign seed values as the column-wise seed and the row-wise seed to the column-wise and row-wise interleavers respectively. Therefore, the location of bits after interleaving will be as follows.
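The interleaving formula itself is not reproduced in this extraction, but a minimal sketch of a seed-driven 2D permutation in the spirit of the 1D prime interleaver extension described above might look as follows; prow and pcol are hypothetical seed values and must be coprime with the block dimensions so that the mapping is a permutation.

import numpy as np

def prime_interleave(block, prow, pcol):
    """Permute rows and columns of a 2-D block with seed-based index mappings.

    new_row = (prow * r) mod nr and new_col = (pcol * c) mod nc, i.e. a simple
    1-D prime interleaver applied independently to rows and columns.
    """
    nr, nc = block.shape
    out = np.empty_like(block)
    for r in range(nr):
        for c in range(nc):
            out[(prow * r) % nr, (pcol * c) % nc] = block[r, c]
    return out

def prime_deinterleave(block, prow, pcol):
    """Invert the permutation above."""
    nr, nc = block.shape
    out = np.empty_like(block)
    for r in range(nr):
        for c in range(nc):
            out[r, c] = block[(prow * r) % nr, (pcol * c) % nc]
    return out

if __name__ == "__main__":
    blk = np.arange(16, dtype=np.uint8).reshape(4, 4)
    shuffled = prime_interleave(blk, prow=3, pcol=3)   # 3 is coprime with 4
    assert np.array_equal(prime_deinterleave(shuffled, 3, 3), blk)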


Figure. 2. (a) Input sequence block interleaver (b) Arrangement of proposed block interleaver

where prow and pcol are the row-wise and column-wise seeds. With the new locations of bits after interleaving both row-wise and column-wise, the new locations are mapped back into the 2D interleaver to get the resulting interleaved bits in 2D.

Figure. 4. (a) Image pixel values for a 4*4 image size (b) Sub-block image pixel values for (a)

E. Color Representation
16-bit Colour: Each pixel is represented [3] using 2 bytes or 16 bits. The bits are divided among red, green and blue, i.e. 5 bits for red, 6 bits for green, and 5 bits for blue.

Figure. 3. 16-bit Color Representation

True Colour (24 bit): each pixel color is represented using 3 bytes, one for R, one for G, and one for B.
True Colour (32 bit): an extra byte is used and the rest is the same as 24-bit true color; the extra byte, usually referred to as the alpha component, is used to specify transparency.

F. Pixel Based Integration Technique
The input images are represented as pixel values ranging from 0-255 in an m x m matrix. A pixel integration table is created with the pixel values forming the columns (values 1-256) and the input images taken along the rows. We consider the colour depth of the images as 16-bit, thus dividing the image matrix into sub-blocks of 4x4.

For instance, considering a 64x64 image and dividing it into 4x4 blocks will produce 16 blocks, each of 4x4 size. The pixel integration table is created by assigning the pixel index to the corresponding pixel value for the first block of every image, then the second block of every image, and so on till the last block. In case of multiple indices with the same pixel value in a block, the value is calculated by representing them in the 16-bit colour RGB palette and finding their corresponding value.

Figure. 5. Pixel Integration values
Figure. 6. RGB values for multiple values in a location

This process is carried on for all the blocks of the image and the results are noted. Image Integration is finally done by summing up the pixel index values for each pixel value for all the images taken together.
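As a small illustration of the 16-bit 5-6-5 colour representation of Section E, which is also used above to resolve multiple indices with the same pixel value, the following generic sketch packs and unpacks an RGB565 pixel; it is standard bit manipulation, not the authors' code.

def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B components into one 16-bit value (5-6-5 layout)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(value):
    """Recover approximate 8-bit components from a 16-bit 5-6-5 value."""
    r = (value >> 11) & 0x1F
    g = (value >> 5) & 0x3F
    b = value & 0x1F
    # scale the 5/6-bit fields back to the 0-255 range
    return (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)

if __name__ == "__main__":
    v = pack_rgb565(200, 100, 50)
    print(hex(v), unpack_rgb565(v))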


IV. EXPERIMENTAL DETAILS AND RESULTS
MATLAB 8.6 is used to implement the proposed algorithm in a Windows environment on a system with an Intel i7 (sixth generation) processor and 8 GB RAM. The algorithm is tested with various image types. The important factors that evaluate the efficiency of the algorithms are an encryption standard that should be strong against all types of attack and the time required for the overall process. Some experiments are given in this section to demonstrate the efficiency of the proposed technique.

A. Correlation Co-Efficient
The correlation [18] calculated between the input image and the encrypted image is called the correlation coefficient. The correlation coefficient value ranges from -1 to +1. The encrypted image and the original image are totally different if the correlation value of the encrypted image is equal to zero or very near to zero. The encrypted image is a reverse of the original image if the correlation is equal to -1. Correlation coefficients were calculated using equations (1), (2) and (3):

r_xy = cov(x, y) / ( sqrt(D(x)) * sqrt(D(y)) )                                (1)

where x and y are the values of two adjacent pixels in the input image and the encrypted image. In numerical computation, the following formulas were used:

cov(x, y) = (1/N) * SUM(i=1..N) (x_i - E(x)) * (y_i - E(y))                   (2)

E(x) = (1/N) * SUM(i=1..N) x_i,   D(x) = (1/N) * SUM(i=1..N) (x_i - E(x))^2   (3)

The obtained correlation coefficients for the encrypted images are shown in Tables I and II.

B. Information Entropy
Entropy [19] is defined to express the degree of uncertainty in a system. A secure encryption of an image should not provide any information about the original image. Higher entropy images, such as an image of the heavily cratered surface of the moon, have a great deal of contrast from one pixel to the next and consequently cannot be compressed as much as low entropy images. Entropy H indicates that each symbol has an equal probability. The information entropy of an encrypted image is calculated using equation (4):

H = - SUM(i) P_i * log2(P_i)                                                  (4)

where H is the image entropy, N is the gray level range of the input image (0-255), and P_i is the probability of the occurrence of symbol i.

Figure. 7. Texture Input Image
Figure. 8. Encrypted Image
Figure. 9. Decrypted Image with Key No. 2
Figure. 10. Decrypted Image with Key No. 8
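A short numpy sketch of the correlation-coefficient and entropy measures defined by equations (1)-(4) is given below; these are the standard definitions, shown only for illustration.

import numpy as np

def correlation_coefficient(x, y):
    """Correlation between pixel samples of two images, as in equation (1)."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    ex, ey = x.mean(), y.mean()
    cov = np.mean((x - ex) * (y - ey))
    return cov / (np.sqrt(np.mean((x - ex) ** 2)) * np.sqrt(np.mean((y - ey) ** 2)))

def image_entropy(img):
    """Shannon entropy of an 8-bit image, as in equation (4)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))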


TABLE I. ENTROPY AND CORRELATION VALUES FOR MOSAIC IMAGE TYPE

IMAGE NO   ENTROPY   CORRELATION
1          0.1950    -0.0006
2          0.1859     0.1818
3          0.1862    -0.1305
4          0.1857     0.0489
5          0.1861    -0.0089
6          0.1859    -0.0861
7          0.1893    -0.0131
8          0.1858    -0.0269
9          0.1866    -0.0145

IMAGE            ENTROPY   CORRELATION
Bee              0.1730    -0.0333
Children         0.1730    -0.0263
Eye              0.1702     0.1425
House            0.1700    -0.0513
Jesus            0.1703     0.0458
Marilyn Monroe   0.1700    -0.0787
Michael Phelps   0.1705    -0.1292
Obama            0.1705     0.0860
Shades           0.1776     0.0803

Figure. 11. Mosaic Input Image
Figure. 12. Encrypted Image

TABLE II. ENTROPY AND CORRELATION VALUES FOR TEXTURE IMAGE TYPE

IMAGE           ENTROPY   CORRELATION
Cameraman       0.1808     0.0340
Chrysanthemum   0.1713     0.0192
Hydrangeas      0.2151    -0.0556
Jellyfish       0.1780     0.0128
Koala           0.1950    -0.0159
Lena            0.1711    -0.0393
Lighthouse      0.1792     0.0073
Penguins        0.2120     0.0808
Tulips          0.1708     0.0649

Figure. 13. Decrypted Image with Key No. 5
Figure. 14. Decrypted Image with Key No. 6

C. Result Analysis
High entropy values and low correlation values provide good encryption. The time taken for encrypting and decrypting the mosaic image with its key is 262.4035 seconds, and for the texture image it is 614.3073 seconds. Results for the correlation and the entropy values are shown in Tables I and II.

V. CONCLUSION
Digital image security has become highly important since communication by transmitting digital products over the network occurs very frequently. An image encryption algorithm based on pixel interleaving with image integration is proposed in this paper. First, the image pixels are interleaved; then, through the method of pixel integration, the difficulty of decoding is increased. At last, a camouflaged image is produced for all the input images, giving the final encrypted image. The experimental results show good performance with low correlation and high entropy, which shows that the pixel based algorithm is highly secure. With this approach it is also possible to encrypt large volumes of data more securely and simultaneously. Our new approach is expected to be useful for transmission applications and real time systems. Future work includes the incorporation of other encryption algorithms and extending the images to videos.

REFERENCES
[1] Christof Paar and Jan Pelzl, "Understanding Cryptography: A Textbook for Students and Practitioners", Springer, 2010, pp. 1-24.
[2] Darrel Hankerson, Alfred J. Menezes and Scott Vanstone, "Guide to Elliptic Curve Cryptography", Springer, 2004.
[3] Chris Solomon and Toby Breckon, "Fundamentals of Digital Image Processing", Wiley, 2010, pp. 1-18.
[4] I. Ozturk and I. Sogukpinar, "Analysis and comparison of image encryption algorithms", Journal of Transactions on Engineering, Computing and Technology, p. 38, Dec 2004.


[5] Rinki Pakshwar, Vijay Kumar Trivedi and Vineet Richhariya, "A Survey On Different Image Encryption and Decryption Techniques", International Journal of Computer Science and Information Technologies, pp. 113-116, April 2013.
[6] G. Zhi-Hong, H. Fangjun, and G. Wenjie, "Chaos-based Image Encryption Algorithm", Elsevier, pp. 153-157, Oct 2005.
[7] Norman D. Jorstad, "Cryptographic Algorithm Metrics", Institute for Defense Analyses, Science and Technology Division, Jan 1997.
[8] Li Shujun and X. Zheng, "Cryptanalysis of a chaotic image encryption method", IEEE International Symposium on Circuits and Systems, ISCAS, May 2002.
[9] Moncef Amara and Amar Siad, "Elliptic Curve Cryptography and its applications", Systems, Signal Processing and their Applications (WOSSPA), pp. 247-250, May 2011.
[10] Kamlesh Gupta and Sanjay Silakari, "Performance Analysis for Image Encryption Using ECC", Computational Intelligence and Communication Networks (CICN), pp. 79-82, Nov 2010.
[11] Guiliang Zhu, Weiping Wang, Xiaoqiang Zhang and Mengmeng Wang, "Digital Image Encryption Algorithm Based on Pixels", IEEE, 2010.
[12] Laiphrakpam Dolendro Singh and Khumanthem Manglem Singh, "Image Encryption using Elliptic Curve Cryptography", Eleventh International Multi-Conference on Information Processing, pp. 472-481, 2015.
[13] Ahmed Bashir Abugharsa, Abd Samad Bin Hasan Basari and Hamida Almangush, "A Novel Image Encryption Using an Integration Technique of Blocks Rotation Based on the Magic Cube and the AES Algorithm", International Journal of Computer Applications, pp. 38-45, March 2012.
[14] Rogelio Hasimoto-Beltran and Ashfaq Khichari, "Pixel Level Interleaving Scheme for Robust Image Communication", Scalable and Parallel Algorithm Labs, University of Delaware, Newark, Oct 1998.
[15] Frank Dellaert and Robert Collins, "Fast Image-Based Tracking by Selective Pixel Integration", Computer Science Department and Robotics Institute, Carnegie Mellon University, Pittsburgh, Sep 1999.
[16] Hanpinitsak and C. Charoenlarpnopparut, "2D Interleaver Design for Image Transmission over Severe Burst-Error Environment", International Journal of Future Computer and Communication, pp. 308-312, Aug 2013.
[17] Shengyong Guan, Fuqiang Yao and Chang Wen Chen, "A novel interleaver for image communications with theoretical analysis of characteristics", Communications, Circuits and Systems and West Sino Expositions, IEEE 2002 International Conference (Volume 1), July 2002.
[18] Satoru Yoneyama and Go Murasawa, "Digital Image Correlation", Encyclopedia of Life Support Systems, Sep 2008.
[19] Du-Yih Tsai, Yongbum Lee and Eri Matsuyama, "Information Entropy Measure for Evaluation of Image Quality", Journal of Digital Imaging, Sep 2008.

AUTHORS PROFILE

A.D. Senthilkumar was born in 1983 in Thanjavur. He obtained the B.E. [Electrical and Electronics] degree in 2005 from Kings College of Engg and the M.Tech [VLSI Design] degree in 2011 from Satyabama University. He is currently working in Vistronics Design Solutions as a Design Engineer involved in design and verification.

T.S. Anandhi was born in 1974 in Chidambaram. She obtained the B.E [Electronics and Instrumentation] and M.E [Process Control and Instrumentation] degrees in 1996 and 1998 respectively, and then a Ph.D in power electronics in 2008 from Annamalai University. She is currently Associate Professor in the Department of Instrumentation Engineering at Annamalai University, Chidambaram, India, and has put in 15 years of service. She has produced one Ph.D and is guiding 6 Ph.D scholars. Her research interests are in power converters, control techniques for multiple connected power converters, embedded controllers for power converters, and renewable energy based power converters. She is a life member of the Indian Society for Technical Education.


Recognition of Gait in Arbitrary Views using Model Free Methods
M. Shafiya Banu, M. Sivarathinabala, S. Abirami
Department of Information Science and Technology, Anna University, Chennai
svp.research14@gmail.com, sivarathinabala@gmail.com, abirami_mr@yahoo.com

Abstract— The gait feature is a relatively useful biometric for application in security surveillance, because it can be obtained by a camera kept at a long distance without disturbing the person. Gait is a robust feature even if people change their appearance; on the contrary, appearance features are helpless in those situations. The problem of multi-view human gait recognition along any straight walking path is investigated in this work. It is observed that the gait appearance changes as the view changes, while a certain amount of correlated information exists among different views. This work is treated as a classification problem, where the classification features are the elements of the transformation matrix that is estimated by the Transformation Invariant Low-Rank Texture (TILT) algorithm. Later on, the gallery gait appearances are converted to the view of the probe subject, where the spatially neighboring pixels of the gait feature are considered as the correlated information of the two views. Finally, a similarity measurement is applied to the converted gait appearance and the testing gait appearance. The proposed method is tested on the CASIA-B multi-view gait database to examine how the proposed gait recognition method performs under most views.

Keywords-component; formatting; style; styling; insert (key words)

I. INTRODUCTION
Gait is one of the well recognized biometric features used to re-identify a human at a distance from the camera. A large number of successful gait recognition techniques have been continuously contributed. However, appearance changes of individuals due to viewing angle changes cause many difficulties for 2D appearance-based gait recognition. This situation cannot be easily avoided in a practical surveillance application. There are three major approach categories to solve the problem, namely: 1) seeking a gait feature invariant to view change; 2) reconstructing gait under any viewing angle using 3D structure information, which is obtained by calibration; 3) projecting the gait feature from one viewing angle to another by using view transformation. Gait features of uncooperative subjects may contain covariates that influence the gait itself and/or the appearance of the walking person, so robustness to such covariates is quite important for accurate gait recognition. Among the covariates, a change in view occurs frequently in real situations and has a large impact on the appearance of the walking person. Matching gait across different views is therefore one of the most challenging and important tasks in gait recognition. Two different families of approaches for gait recognition have been proposed: appearance-based and model-based methods. Appearance-based approaches use captured image sequences directly to extract gait features, while model-based methods extract model parameters from the 2D images. Several methods have been proposed for gait recognition from various different perspectives. The state of the art methods mainly fall into three categories, namely view-invariant model estimation, viewing angle rectification and view transformation model methods. One example that estimates the view-invariant model is the Joint Subspace Learning (JSL) method, where the view-invariant model is represented by a weighted sum of a sufficiently small number of prototypes of the same view. Similarly, a distance metric is learned with good discrimination ability based on the clustered and averaged GEI. However, this kind of method can achieve good recognition results only for similar views. When the probe gait sequences are significantly different from the gallery sequences, they usually have poor performance.

The main disadvantages of appearance features can be categorized as follows: 1) Due to changes in view angle and illumination, the appearance of people changes largely. 2) Color can get distorted by several camera parameters, which may result in the same color appearing as a different color in different cameras. 3) People's clothing condition may also vary according to the season and occasion, which may cause variation in the way people appear. The main objective of this work is to design a system for gait recognition using model free methods. For recognition purposes, the effective gait feature GEI is used. The walking path image of the gait sequence is constructed. The low rank textures for the features are obtained using the TILT algorithm. The features obtained are classified to make the proposed method more discriminative and effective.

II. RELATED WORKS
Chen et al. [1] proposed a novel cross-view gait recognition method based on the projection of the Gravity Center Trajectory (GCT). The coefficients of the 3-D GCT are projected to different view planes for the complete view variation. The view of a silhouette sequence is estimated using this matrix to handle the view variance of gait features.


Calculation of the body part trajectory on the silhouette sequence is done to improve recognition accuracy, using correlation strength as the similarity measure. Finally, a nested match method is used to calculate the final matching score of the two kinds of features. Huang et al. [2] proposed the concept of extreme learning classification for regression and multi-class classification. Both LS-SVM and PSVM can be simplified further, and a unified learning framework of LS-SVM, PSVM and other regularization algorithms, referred to as the Extreme Learning Machine (ELM), can be built. Zhang et al. [3] proposed that low rank textures capture geometrically meaningful structures in an image, which encompass conventional local features such as edges and corners as well as all kinds of regular, symmetric patterns ubiquitous in urban environments and man-made objects. Worapan et al. [4] considered gait as one of the well recognized biometrics that has been widely used for human identification. Their paper proposes a new multi-view gait recognition approach which tackles the problems mentioned above. The method differs from others by creating a so-called View Transformation Model (VTM) based on the spatial-domain Gait Energy Image (GEI) by adopting the Singular Value Decomposition (SVD) technique. To further improve the performance of the proposed VTM, Linear Discriminant Analysis (LDA) is used to optimize the obtained GEI feature vectors.

Yibo Li et al. [5] proposed a gait recognition method based on the ankle joint motion trajectory and bending angle. First, it obtains the lower limb joint points according to each part of the body and the height proportion. It obtains the position coordinates of the toe by using a skeleton algorithm. The feature vector is made up of the relative velocity of the ankle joint motion trajectory and the bending angle. A Support Vector Machine (SVM) classifier and the Nearest Neighbor (NN) classifier are used for gait classification. Worapan et al. [6] proposed an approach using a regression-based View Transformation Model (VTM) to address this challenge. Gait features from across views can be normalized into a common view using learned VTM(s). In principle, a VTM is used to transform the gait feature from one viewing angle (source) into another viewing angle (target).

Sruti Das et al. [7] proposed the concept of a two-phase View-Invariant Multi-scale Gait Recognition method (VI-MGR) which is robust to variation in clothing and the presence of a carried item. Wei Zeng et al. [8] proposed a method that eliminates the effect of view angle for efficient gait recognition through deterministic learning theory. The width of the binarized silhouette models and the periodic deformation of the human gait shape are selected as the gait feature. Makoto et al. [9] proposed a novel gait recognition approach which differs a lot from existing approaches in that the subject's sequential 3D models and his/her motion are directly reconstructed from the captured images. Arbitrary viewpoint images are synthesized from the reconstructed 3D models for the purpose of robustness of gait recognition to changes in the walking direction. Moreover, a gait feature named the Frame Difference Frieze Pattern (FDFP), which is robust to high frequency noise, is proposed in that work. Zheng Liu et al. [10] proposed a framework consisting of hierarchical feature extraction and descriptor matching with learned metric matrices.

Burhan et al. [11] used the concept of multi-view gait recognition using enhanced gait energy image and Radon transform techniques, a gait representation model for multi-view gait recognition systems based on the Gait Energy Image (GEI) and the Radon Transform (RT) on human silhouettes. The recognition of gait is based on similarity in measurements using the Euclidean distance. Zeng et al. [13] used the concept of silhouette based gait recognition through deterministic learning. The new silhouette based gait recognition via deterministic learning is proposed in order to combine spatio-temporal motion characteristics and physical parameters of a human subject for recognition. Lu et al. [14] proposed a method based on the joint distribution of motion angles for gait recognition.

III. SYSTEM ARCHITECTURE
This section explains the system architecture for the recognition of gait in arbitrary views by extracting features from the surveillance videos.

A. Gait Recognition Systems for Arbitrary Views
The architecture of the system proposed in this work consists of major components like feature extraction, transformation matrix generation and classification using ELM, as shown in Figure 1.

Figure 1. Gait Recognition System for Arbitrary Views

The framework consists of feature extraction, transformation matrix generation using the TILT algorithm, and ELM classification. The feature extraction phase involves the extraction of the Gait Energy Image (GEI) feature. The Gait Energy Image [12] is generated by the summation of silhouette images to obtain the spatio-temporal information. This process is followed by convex hull creation and the construction of the walking path image (WPI). The TILT algorithm is used for transformation matrix generation, and then classification is done using ELM.


B. System Description
The description of each module of the system architecture for the recognition of gait in arbitrary views by the model free method is explained in this section. This includes the feature extraction process, the usage of the TILT algorithm for transformation, and finally classification with the help of the ELM classifier.

The videos of a large number of people are used as the input for the preprocessing phase. The obtained input video is converted into frames and then a background subtraction process is applied to the frames in order to obtain the foreground images. The foreground images are the silhouette images, which are used as the input for further processing.

Feature extraction involves reducing the amount of resources required to describe a large set of data. When performing analysis of complex data, one of the major problems stems from the number of variables involved in the extraction. Analysis with a large number of variables generally requires a large amount of memory and computation power, or a classification algorithm which overfits the training sample and generalizes poorly to new samples. Feature extraction is a general term for methods of constructing combinations of the variables to get around these problems while still describing the data with sufficient accuracy. The features extracted for the entire process include the extraction of GEI and the construction of the convex hull, followed by the creation of the Walking Path Image (WPI).

1) Gait Energy Image
The input silhouettes obtained after the preprocessing stage are used as the input for the construction of the Gait Energy Image (GEI). It is obtained as the summation of all the silhouettes of a person during the gait cycle divided by the total number of frames in that gait cycle. The resultant GEI is a 240*320 dimensional vector.

2) Convex Hull (CH) Creation
The GEI obtained is used as the input for this process. The convex hull of the Gait Energy Image (GEI) is obtained as the output.

Figure 2. Convex Hull
Figure 3. Sample Image after TILT

The formal definition of the convex hull is that it is the smallest convex polygon that contains all the points of S, as depicted in Figure 2.

3) Walking Path Image
The convex hull created for the Gait Energy Image is considered as the input for the construction of the Walking Path Image (WPI). The image representing the walking path of the subject can be constructed by averaging the gait sequence of the subject. The main aim is to construct a relatively low rank image from a sequence of gait images while preserving the walking path information of the subject. This is because the TILT algorithm is much more computationally efficient with a low rank image input.

C. Transformation Matrix Generation
The walking path image is taken as the input for transformation matrix generation using the TILT algorithm. The main aim of this algorithm is to efficiently and effectively extract a class of low rank textures in a 3D scene from 2D images. The low-rank textures capture geometrically meaningful structures in an image, which include local features such as edges and corners as well as all kinds of regular, symmetric patterns. The Transformation Invariant Low Rank Texture (TILT) algorithm [3] is used to get the low rank image of the walking path image. The coordinates for this process are given graphically to the image during execution. The output produced by the TILT algorithm is considered as features for the classification process. This method can accurately recover both the low-rank texture and the domain transformation.

D. Extreme Learning Machine (ELM)
The transformation matrix generated by the Transformation Invariant Low Rank Texture (TILT) algorithm is considered as the input for this module. The main feature of the Extreme Learning Machine (ELM) is that the weights of the hidden layer can be initialized randomly, which simplifies the optimization of the weights of the input layer and the biases of the output layer. The time and accuracy of both training and testing are obtained as output after the classification. The Extreme Learning Machine (ELM) was originally developed for single hidden layer feed forward neural networks and then extended to the generalized SLFN, as per equation 3.1, which may not be neuron-alike:

f(x) = h(x) β        (3.1)

where h(x) is the hidden-layer output corresponding to the input sample x and β is the output weight vector between the hidden layer and the output layer. One of the salient features of ELM is that the hidden layer need not be tuned. Essentially, ELM originally proposes to apply random computational nodes in the hidden layer, which are independent of the training data. ELM and its variants mainly focus on regression applications. In ELM the weights connecting inputs to hidden nodes are randomly assigned and never updated. The ELM achieves good generalization performance and solves the regression problem.

IV. IMPLEMENTATION
This section discusses the implementation of the gait recognition method in the arbitrary view. Here, the feature extraction process involves the extraction of GEI from the silhouettes, followed by convex hull creation and the construction of the walking path image. The transformation matrix is obtained for the WPI using the TILT algorithm, followed by the ELM classifier, which is then used for classification.


A. Feature Extraction
The videos of people's gait obtained from the surveillance networks are converted into frames, and background subtraction is done in order to extract the foreground images, that is, the silhouettes, which are used as the input for the feature extraction process.

The normalized silhouette is considered as the input for this module. The silhouette sample obtained from the dataset, as shown in figure 4.1, is taken as the input for this process. The Gait Energy Image (GEI) is the gait feature used in this system. An average of all the images over a gait cycle for a time period is called the gait energy image (GEI), as denoted in equation 4.1. It represents the silhouette shape and motion information in a single frame while maintaining its temporal information.

GEI_ij(x, y) = (1 / N_ij) * SUM(t=1..N_ij) S_ijt(x, y)        (4.1)

where N_ij is the number of frames and S_ijt are the silhouettes. Figure 5 depicts the gait energy image of a person for a number of frames, based on the person's gait cycle. The GEI reflects the major shapes of the silhouettes and their changes over the gait cycle. It is referred to as the gait energy image for the following reasons: each silhouette image is the space-normalized energy image of human walking at that moment; the GEI is the time-normalized accumulative energy image of human walking over the complete cycle(s); and a pixel with a higher intensity value in the GEI means that human walking occurs more frequently at that position.

B. Convex Hull Creation
The Gait Energy Image (GEI), which is extracted by adding all the frames during a gait cycle divided by the total number of frames from the input silhouettes, is given as the input for the creation of the convex hull. The GEI is then converted into a binary image by setting a threshold value, and the result is used for further processing. The convex hull is depicted in figure 4.

Figure. 4. Convex hull and walking path of the image

C. Walking Path Image
The image of the convex hull obtained from the GEI is given as the input for the construction of the Walking Path Image. The upper and lower boundaries of the WPI reveal the walking path of the subject, and the left and right boundaries reveal the body movement, which is not necessary for the estimation. Therefore, the border protruding parts are cropped in order to achieve a quadrangle-like shape. Figure 4 denotes the Walking Path Image (WPI) of the probe subject. The WPI provides a relatively low rank version of the image while preserving the walking path information of the probe subject.

D. Transformation Matrix Generation
The Transform Invariant Low-Rank Textures (TILT) technique is used to estimate the transformation matrix for transforming the WPI into a low rank image. The Walking Path Image (WPI), which reveals the walking path information of the subject, is given as the input for the generation of the low rank textures as well as the transformation matrix. The angle for the TILT of an image is specified graphically by the user by marking the coordinate points of the image during the process. The transformation matrices obtained are considered as the features of the different viewing angles and are used to train the ELM classification model. The transformation matrix of the WPI of a newly arriving subject is estimated and fed into the trained classifiers so that its viewing angle can be estimated. The transformation matrix describes the degree of difficulty of the transformation. Moreover, subjects under the same view have similar transformation matrices, and subjects of different views have distinct transformation matrices. According to the TILT algorithm, any image can be transformed to its low rank version and the low rank image can be recovered from its deformed image. Since the transformation matrix reveals the degree of transformation, it is able to be employed as the feature for the classification process.

E. ELM Classification
The transformation matrix generated after applying the TILT algorithm to the Walking Path Image (WPI) is stored into a file. The values in the file are read for the classification process, which is done using the Extreme Learning Machine (ELM) classifier. The ELM classifier is used for classification or regression with a single layer of hidden nodes, where the weights connecting inputs to hidden nodes are randomly assigned and never updated. During the classification process, the value of the image which is to be tested is fed into the classifier as test data. The number of training classes used for the classification is 60 and the number of testing classes is 6. The person is identified in 6 viewing angles, that is 18, 36, 54, 72, 90 and 108.

After classification using ELM, the time and accuracy of the training and testing data are obtained, which are further used for the recognition process. These models produce good generalization performance and learn faster than networks trained using back propagation. The simplest ELM training algorithm learns a model of the form

Yˆ = W2σ(W1x)

where W1 is the matrix of input-to-hidden-layer weights, σ is some activation function, and W2 is the matrix of hidden-to-output-layer weights. The summarization of the ELM algorithm is as follows. Given a training set N = {(xi, ti) | i = 1...N}, a kernel function f(x) and hidden neurons:
Step 1: Select a suitable activation function and number of hidden neurons for the given problem.


Step 2: Assign arbitrary input weights wi and biases bi, i = 1 ... H.
Step 3: Calculate the hidden-layer output matrix H = f(w·x + b).
Step 4: Calculate the output weight β = H†T, where H† is the Moore-Penrose generalized inverse of H and T is the target matrix.
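A compact sketch of the ELM training procedure in Steps 1-4 is shown below (random hidden weights, sigmoid activation, and output weights computed with numpy's Moore-Penrose pseudo-inverse); the layer sizes and sample counts in the example are illustrative only.

import numpy as np

class ELM:
    """Minimal single-hidden-layer Extreme Learning Machine (a sketch of Steps 1-4)."""

    def __init__(self, n_input, n_hidden, n_output, seed=0):
        rng = np.random.default_rng(seed)
        # Step 2: input weights and biases are assigned arbitrarily and never updated
        self.W = rng.standard_normal((n_input, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, n_output))

    def _hidden(self, X):
        # Step 3: hidden-layer output matrix H = f(w.x + b); sigmoid activation chosen here
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, T):
        # Step 4: output weights beta = H_dagger @ T (Moore-Penrose pseudo-inverse)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    # illustrative sizes: 60 training samples of flattened 3x3 transformation matrices,
    # one-hot targets for 6 viewing angles (18, 36, 54, 72, 90, 108 degrees)
    X = np.random.rand(60, 9)
    T = np.eye(6)[np.random.randint(0, 6, 60)]
    model = ELM(n_input=9, n_hidden=40, n_output=6).fit(X, T)
    predicted_view = model.predict(X).argmax(axis=1)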
V. RESULTS AND DISCUSSION
The input for this entire process is obtained from the CASIA-B gait database created by the Institute of Automation, Chinese Academy of Sciences (CASIA). The dataset contains eleven views of 124 individuals under three conditions: 1) the pedestrians appear with a bag, 2) the pedestrians appear with a coat, and 3) the pedestrians appear without any coat or bag. The view angles are 0, 18, 36, 54, 72, 90, 108, 126, 144, 162 and 180 degrees. The probe set and the gallery set can be generated by selecting samples from different views, just like from two different cameras. All the view angle pairs are used in the training to build a view-independent model.

A. Pre-processing Technique
In MATLAB, the input image is read and converted into a binary image, which is then used for further processing. The input silhouette from the dataset is converted into a binarized silhouette, and the gait energy image is constructed for that input.
The average of all the silhouettes over a gait cycle is called the Gait Energy Image (GEI); the sum of the silhouettes of a person during a gait cycle is divided by the total number of frames to obtain the GEI. The gait energy image obtained for the input silhouette is shown in figure 5, which is computed as the average value for a person over a gait cycle. The convex hull for the obtained Gait Energy Image (GEI) is also shown in figure 5. The convex hull is represented as the sequence of points on the convex hull polygon; formally, the convex hull of a point set S is the smallest convex polygon that contains all the points of S.

Figure 5. Input silhouette, Gait Energy Image, Convex hull

B. Walking Path Image
The image representing the walking path of the subject can be constructed by averaging the gait sequence of the subject. The main idea of the walking path image is to construct an image from the gait image sequence while preserving the walking path information of the subject. From the convex hull, the upper and lower boundaries reveal the walking path, and the left and right boundaries reveal the body movement, which is not necessary for the estimation. Therefore, the protruding border parts are cropped to get the walking path image. The walking path image for the convex hull created from the GEI is obtained as shown in figure 11.

Figure 6. Walking Path Image and TILT algorithm

The Transformation Invariant Low Rank Texture (TILT) algorithm is used to obtain the low rank image of the walking path image. According to this algorithm, any image can be considered as a domain transformation of its low-rank version, and the low rank image can be recovered from its deformed image. Figure 6 depicts the TILT algorithm applied to the walking path image, where the red boundary marks the tilt of the image.

Figure 7. Rotations of Input

Figure 8. Low Rank textures

The rotations of the WPI by several degrees, which are given as input for the transformation matrix generation, are shown in figure 7. From the WPI the focused image is obtained and set into the display range. Figure 8 depicts the low rank texture obtained from the tilted image of the walking path image.

C. Transformation matrix
The transformation matrix for the low rank textures, depicted in figure 9, is obtained using the TILT algorithm. The transformation matrix reveals the degree of transformation.
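As an illustration of the pre-processing described in Section A above (assuming the silhouettes of one gait cycle are available as equal-sized binary NumPy arrays), a GEI and its convex hull could be computed as follows; this is a sketch, not the MATLAB code used for the reported results.

```python
import numpy as np
from scipy.spatial import ConvexHull

def gait_energy_image(silhouettes):
    """GEI: average of the binary silhouettes over one gait cycle."""
    return np.mean([s.astype(float) for s in silhouettes], axis=0)

def gei_convex_hull(gei, threshold=0.1):
    """Convex hull of the thresholded (binarized) GEI, as a sequence of hull vertices."""
    ys, xs = np.nonzero(gei > threshold)          # foreground pixel coordinates
    pts = np.column_stack([xs, ys])
    hull = ConvexHull(pts)                        # smallest convex polygon containing the points
    return pts[hull.vertices]
```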

Figure 9. Transformation matrix

D. Classification
The transformation matrix obtained using the TILT algorithm is taken as the input for the classification. Extreme Learning Machine (ELM) classification achieves better generalization performance by obtaining the smallest training error and the smallest norm of the output weights. The classification result gives the training time, testing time, and training and testing accuracy obtained by using the transformation matrix in the classification process. The recognition result of the proposed system is shown in figure 10. The transformation matrices obtained for different views are classified using the ELM classifier, and finally the recognition result is obtained.

Figure 10. Person Recognition

The overall process of the developed system is as follows: the gait feature GEI obtained from the input silhouette is used for the creation of the convex hull. From the convex hull, the walking path image is constructed. The view of the gallery is changed in accordance with the view of the probe using the TILT algorithm. By applying the algorithm, a transformation matrix is generated. The generated matrix is then classified using the ELM classifier, yielding an enhanced multi-view gait recognition system.

VI. CONCLUSION
The system proposed in this work is a gait recognition method for subjects walking in arbitrary straight directions. More specifically, the gait view of the subject is estimated. Then, the gallery gait appearance is converted to the estimated view using the TILT algorithm, and finally classification is done using ELM. The proposed method has been tested on the CASIA-B multi-view gait database and achieved better recognition results under most views. This is because the proposed features are used consistently throughout the entire process: as input to the TILT algorithm as well as in the gallery training process, the testing process and the ELM classification, which is able to achieve good generalization performance. Using the TILT algorithm, good conversion results are achieved for similar views, because similar views share more correlated information than distant views. A future enhancement of this work is the construction of an extended appearance conversion method, which can achieve better recognition results for distant views by taking advantage of the fact that using two views can introduce more correlated information for appearance conversion.
Study on watermarking effect on different sub


bands in joint DWT-DCT based watermarking
scheme
Mohiul Islam
Department of Electronics & Communication Engineering
National Institute of Technology Silchar
Assam, India
email: mohiul292@gmail.com

Amarjit Roy
Department of Electronics & Communication Engineering
National Institute of Technology Silchar
Assam, India

Rabul Hussain Laskar
Department of Electronics & Communication Engineering
National Institute of Technology Silchar
Assam, India

Abstract— With the advent of technology, digital image watermarking has turned out to be an effective technique for the protection of digital images against illegal use and duplication. Watermarking in the wavelet domain has drawn significant attention due to its multiresolution attributes. In this paper, an analysis is presented of a digital image watermarking scheme that combines the features of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). A binary watermark image is embedded into certain sub-bands of the 3-level DWT transformed DCT coefficients of a host image. To attain maximum imperceptibility, all the 3-level DWT sub-bands covering the lower to higher frequency range are explored to find the optimal DWT sub-band suitable for embedding a binary watermark. Eventually, the same procedure as in the embedding process is applied to extract the DCT middle frequencies in each sub-band. The watermark bit is then determined based on the correlation between the DCT mid-band coefficients and the PN-sequences, and is again processed with the 2-D generated key to obtain the actual watermark.

Keywords- Discrete Cosine Transform (DCT); Discrete Wavelet Transform (DWT); Encryption; PN-Sequences; Watermarking.

I. INTRODUCTION
With the growing development of the internet and multimedia outfits over the past few decades, access to digital data and its unauthorized copying, modification and distribution have become easier and widespread. The easy access to multimedia data brings with it the challenge of content protection. The increasing demand for copyright protection has drawn attention to digital watermarking [1-2], in which information for ownership claim and authentication is embedded into digital data to avert illegal copying. Based on the extraction algorithm, watermarking schemes can be broadly classified into two distinct categories: non-blind or blind. In a blind algorithm, the original image is not necessary for watermark extraction, whereas in a non-blind algorithm, the original image is required during the extraction stage, which may not be suitable for many practical applications. In this work, a blind watermarking scheme is presented. Based on the robustness of the algorithm, watermarking can be classified into three categories: robust, fragile and semi-fragile. A robust watermark should be able to survive different kinds of attacks. Watermarking can be performed either in the spatial or the frequency domain. This study focuses on developing a digital image watermarking scheme in transform domains. Unlike spatial-domain watermarking, transform-domain methods are complex but give better robustness. The transform domain methods based on the discrete wavelet transform (DWT) [3-4], discrete cosine transform (DCT) [5-6], discrete Fourier transform (DFT) [7-8], and singular value decomposition (SVD) [9-10] utilize signal characteristics and human perception properties so as to attain better robustness and invisibility. Since the inception of the field, many watermarking algorithms have been proposed based on combinations of the aforesaid transforms [11-13].

The DWT is a renowned transform for image processing because of its multi-resolution properties in time and frequency. The DCT gives tremendous energy compaction for highly correlated image data. Using the combination of both DWT and DCT for watermarking generally exhibits good performance with regard to both invisibility and robustness. However, apart from these two transforms, SVD is another potential numerical tool that can be useful for applications like data hiding and image compression. But it is not always possible to use SVD since most of the algorithms based on it are non-blind or semi-blind in nature [14-15].

Literature studies reveal that several research works have been proposed to develop robust watermarking algorithms in a combined domain involving the DWT, DCT and SVD. The developed techniques differ not only in their embedding targets but also in their processing measures. DWT is used to decompose the host image into the LL, LH, HL, and HH sub-bands. Bhatnagar et al. [16] chose the LL sub-band for watermark embedding, whereas Ali et al. [17] selected the LL and HH sub-bands, while Laskar et al. [18] considered the mid frequency LH and HL sub-bands. So, in this paper, a thorough analysis and
extensive experiments have been performed to find the optimal set of bands or sub-bands in the multi-level wavelet decomposition that provides better imperceptibility and robustness in this context.

In this paper, a detailed study and a wide range of experimental analyses for digital image watermarking have been performed using different watermarks for different classes of host images. The contributions of this work are as follows. The watermark is embedded in different sets of sub-bands to find the most optimal sub-bands that provide better imperceptibility and robustness. The technique has been tested with a large database (300 gray images and 200 binary watermarks) to check the efficiency of the algorithm for different types of images.

The rest of this paper is organized as follows. The presented watermarking algorithm is described in Section 2. Section 3 describes the simulated results and the performance evaluation of the presented system. The conclusion is given in Section 4.

II. WATERMARKING ALGORITHM BASED ON JOINT DWT-DCT
In this section, the watermarking algorithm based on the joint discrete wavelet transform and discrete cosine transform is discussed. Watermark embedding and watermark extraction are the two parts of the algorithm. At first, using a pseudo random number generator, a random matrix having values 1's and 0's of size equal to that of the watermark is generated. Applying an exclusive-or (XOR) operation between the original watermark and the randomly generated matrix, a new encrypted version of the watermark is generated. The randomly generated matrix protects the algorithm as it performs the key role for the owner. For the sake of expediency, this encrypted watermark is referred to as the watermark for the rest of the paper. The watermark embedding is performed on the 3-level DWT transformed DCT coefficients of an image.

A. Watermark Embedding
For watermark embedding, first the DWT is performed on the host image and then the DCT is applied on the selected DWT sub-bands. In this analysis, 3-level DWT is mostly performed on the host image, except for the last experiment where watermarking is performed using different decomposition levels. The selected 3-level DWT sub-bands are divided into 4×4 blocks and the DCT is performed on every 4×4 block. The number of bits in the watermark is equal to the total number of 4×4 DCT blocks. To embed a binary bit of the watermark in a 4×4 DCT block, four diagonal middle frequency coefficients are used. These four diagonal middle frequency coefficients of the DCT block are modified either by using the PN0 or the PN1 sequence. For the two binary bits '0' and '1' of the watermark, the PN0 and PN1 sequences are used respectively. If P denotes the vector of mid-band coefficients of the DCT transformed block and P' denotes the vector of the modified DCT coefficients of the same block after embedding, then the embedded sequence P' is given by Eq. 1:

P' = P + α*PN0, if the watermark bit is '0'
P' = P + α*PN1, if the watermark bit is '1'          (1)

Here α is the gain factor, which is used to modify the middle frequency DCT coefficients (4×4 blocks) of the selected DWT coefficient sets of the host image and is chosen as one of the parameters for performance comparison. Various experiments are performed using different parameters like the DWT sub-bands and the gain factor α, which are discussed in detail in the later sub-sections. The steps for embedding a binary watermark of size 32×32 in a gray cover image of size 512×512 are performed similarly to those described by R. H. Laskar et al. [18], where embedding and extraction are performed on sub-band 5 as shown in figure 1.

B. Watermark Extraction
This joint DWT-DCT algorithm is a blind watermarking algorithm, so the original host image is not required to extract the watermark. During the extraction stage, the correlation between the mid-band coefficients and the PN sequences is calculated to generate the encrypted watermark. Then the 2-D generated random number, which acts as the key, is reused to decrypt the watermark as described by R. H. Laskar et al. [18].

C. Watermarking on different sets of DWT sub-bands
An experiment has been performed to find the optimum set of sub-bands suitable for embedding a (32×32) binary watermark in a gray image of size (512×512). Altogether, there are 64 3-level DWT sub-bands, out of which four sub-bands are required for embedding a 32×32 binary watermark with this method. All the combinations of 3-level DWT sub-bands ranging from low frequency sub-bands to high frequency sub-bands have been experimented with. In the result sections, the results are shown for the following nine combinations of sub-bands. These nine sets of sub-bands are chosen in such a way that they cover different frequency components ranging from lower frequency sub-bands to higher frequency sub-bands. These are as follows:
- Sub-band 1: LL2/LL1/LL, HH2/LL1/LL, LL2/HH1/LL, HH2/HH1/LL
- Sub-band 2: HL2/HL1/LL, LH2/HL1/LL, HL2/LH1/LL, LH2/LH1/LL
- Sub-band 3: HL2/LL1/HL, LH2/LL1/HL, HL2/LL1/LH, LH2/LL1/LH
- Sub-band 4: LL2/HL1/HL, LL2/LH1/HL, LL2/HL1/LH, LL2/LH1/LH
- Sub-band 5: HL2/HL1/HL, HL2/LH1/HL, HL2/HL1/LH, HL2/LH1/LH
- Sub-band 6: HH2/HL1/HL, HH2/LH1/HL, HH2/HL1/LH, HH2/LH1/LH
- Sub-band 7: HL2/HH1/HL, LH2/HH1/HL, HL2/HH1/LH, LH2/HH1/LH
- Sub-band 8: HL2/HL1/HH, LH2/HL1/HH, HL2/LH1/HH, LH2/LH1/HH
- Sub-band 9: LL2/LL1/HH, HH2/LL1/HH, LL2/HH1/HH, HH2/HH1/HH
Figure 1. The different set of 3-level DWT sub-bands used for embedding a 32×32 binary watermark

In this sub-band representation, LL, HL, LH and HH represent the approximation, horizontal, vertical and diagonal sub-bands of the first level DWT decomposition respectively. Similarly, LL1, HL1, LH1 and HH1 represent the approximation, horizontal, vertical and diagonal sub-bands of the 2-level DWT decomposition respectively. Likewise, LL2, HL2, LH2 and HH2 represent the approximation, horizontal, vertical and diagonal sub-bands of the 3-level DWT decomposition respectively. A sub-band HL2/HL1/HL implies the 3-level horizontal DWT sub-band present in the second level horizontal DWT sub-band, which itself is present in the first level horizontal DWT sub-band.

D. Performance against different attacks
The performance of the algorithm against different intentional and non-intentional attacks has been studied. The attacks include various image processing attacks, noising attacks, geometrical attacks, filtering, JPEG compression, etc. Image processing attacks comprise histogram equalization, contrast enhancement, image sharpening and gamma correction, while geometrical attacks include cropping, rotation, resizing, translation, etc. The experiment has been performed using different sizes of watermark over a variety of host images. The performance is measured in terms of the normalized cross correlation (NC) between the original watermark and the extracted watermark.

III. RESULTS AND DISCUSSION
The joint DWT-DCT based watermarking scheme is tested on 300 gray images of 512×512 size and 200 binary watermarks of variable size. The images are collected from http://www.imageprocessingplace.com, the USC-SIPI image database and the CVG-UGR image database. The binary watermarks are collected from the MPEG7_CE_shape descriptor database along with our own collected database. Samples of gray images of 512×512 size are shown in figure 2. Samples of binary watermarks are shown in figure 3. This image watermarking technique has been tested for different standard images like Lena, Baboon, Barbara, biomedical images, aerial images, texture images, astronomical galaxy images, etc. The detailed analysis and experimentation is performed in Windows 8.1 based MATLAB (R2013a). The hardware used is a Dell computer with 16 GB RAM and an Intel Core i5 processor.

In total, two sets of experiments are carried out for the joint DWT-DCT based image watermarking scheme:
1. Watermarking is performed on different sets of DWT sub-bands to find the sub-bands which give better imperceptibility and robustness (Experiment I).
2. The robustness of the algorithm is observed against various attacks (Experiment II).

The performance is measured in terms of visual observation and the peak signal to noise ratio (PSNR), mean square error (MSE) and structural similarity index measure (SSIM) for both the watermarked image and the extracted watermark. Normalized cross-correlation is used to check robustness against various attacks. The PSNR and MSE are calculated as mentioned in [18]. Instead of using traditional error summation methods, the SSIM is designed by modeling any image distortion as a combination of three factors, namely loss of correlation, luminance distortion and contrast distortion [19]. The SSIM index is calculated as discussed in [20].

The imperceptibility is calculated between the original host image and the watermarked image in terms of performance measures like PSNR, MSE and SSIM. Similarly, the performance measures are compared between the original watermark and the extracted watermark for different values of α. From the experimental results and visual observation, it has been observed that the performance of the algorithm for watermarking is satisfactory. Moreover, by choosing a suitable value of α, it has been observed that the algorithm works for different varieties of gray scale images. However, a trade-off has to be made between the quality of the watermarked image and that of the extracted watermark while choosing the value of α. A higher value of α degrades the quality of the watermarked image but gives a better extracted watermark image, and vice versa. In the result tables, HIGH indicates an infinite PSNR value.
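The nested sub-band naming above (e.g. HL2/HL1/HL) corresponds to a wavelet-packet style decomposition. The sketch below shows how such a node could be extracted with PyWavelets; the mapping of the LL/HL/LH/HH labels to the library's a/h/v/d node names and the use of the periodization mode are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import pywt

# Assumed label mapping (a convention, not from the paper): LL->'a', HL->'h', LH->'v', HH->'d'.
LABEL = {'LL': 'a', 'HL': 'h', 'LH': 'v', 'HH': 'd'}

def packet_subband(image, path=('HL', 'HL', 'HL'), wavelet='haar'):
    """Return the 3-level wavelet-packet sub-band reached by following `path`
    from the first decomposition level inwards (e.g. HL -> HL1 -> HL2)."""
    wp = pywt.WaveletPacket2D(data=image.astype(float), wavelet=wavelet,
                              mode='periodization', maxlevel=3)
    node = ''.join(LABEL[p] for p in path)       # e.g. 'hhh'
    return wp[node].data                         # 64x64 coefficients for a 512x512 image

band = packet_subband(np.zeros((512, 512)), ('HL', 'HL', 'HL'))
```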
Figure 2. Samples of gray test images collected from the Computer Vision Group at the University of Granada and USC-SIPI image database.

Figure 3. Samples of binary watermarks.

A. Experiment I: Watermarking on different sets of DWT sub-bands
The embedding is performed on different combinations of sub-bands, but the representative results are shown for 9 different sets of sub-bands. The results are shown for the algorithm using the Haar wavelet only. Table 1 and table 2 show the performance of the algorithm on the watermarked image and the extracted watermark for different sets of sub-bands respectively. In table 1, the performance parameters MSE, PSNR and SSIM are calculated between the watermarked image and the original image for different values of the gain factor α. Similarly, table 2 shows the performance parameters between the original watermark and the extracted watermark under the no-attack condition. From the results it can be said that imperceptibility increases when embedding is performed in higher frequency sub-bands. In this experiment, it is found that sub-band 8, which is the set of 3-level sub-bands present in the high frequency diagonal bands, is the most optimal sub-band in terms of imperceptibility and robustness. Sub-band 8 gives the highest average PSNR and SSIM values of 50.3090 dB and 0.9996 respectively over 300 gray images. This is because the high frequency diagonal bands carry edge information, and the effect of watermarking on the watermarked image is least there. The algorithm gives poor performance if we embed the watermark in low frequency approximation sub-bands. Sub-band 1 lies in the low frequency LL band. Since most images have higher low frequency content, the effect of embedding is more visible if we modify the coefficients of sub-band 1 to embed the binary bits of the watermark; so it produces a high MSE value for sub-band 1. Imperceptibility decreases with increasing gain factor α, whereas robustness increases with increasing gain factor α. The experiment is performed for α values ranging from 1 to 10 to find the most optimum value of α in terms of imperceptibility and robustness. The optimum value of the gain factor is found to be equal to 8 for this joint DWT-DCT based image watermarking scheme. The imperceptibility of different watermarked images in terms of PSNR and SSIM is shown in figure 4 and figure 5 for the different sub-bands for the optimum gain factor α = 8. Similarly, the SSIM between the original watermark and the extracted watermark for each sub-band is shown in figure 6. Here the graphs are shown for five different cover images: Lena (standard image), Brick wall (texture image), Chest X-ray (biomedical image), Galaxy image and Aerial image. From the PSNR and SSIM graphs for imperceptibility and robustness, it has been observed that sub-band 8, which is the high frequency diagonal sub-band, gives optimum values of both imperceptibility and robustness. Figure 7 shows the watermarked Lena image and the extracted watermark for the different sub-bands for α = 5.

B. Experiment II: Robustness against different attacks
In this experiment, the robustness of the scheme against different intentional and non-intentional attacks has been studied. Eight different image processing operations are chosen as attacks to evaluate the robustness of the proposed scheme. These are: 1) histogram equalization (HE), 2) gamma correction (GC) with 1.8, 3) JPEG compression with quality factor 90, 4) salt and pepper noise (SPN) (0.1%), 5) image sharpening (IS) with radius 0.5 and amount 2.0, 6) contrast enhancement (CE) (default), 7) cropping (CR) of one eighth from the left side, and 8) image resizing (RS) (512→256→512). This experiment is performed on 300 gray images. The robustness is tested by embedding different sizes of watermark. For the sake of briefness, the results are shown for only 10 gray images of size 512×512, where a binary watermark of size 32×32 is embedded in sub-band 8. Table 3 describes the performance of this algorithm in terms of normalized cross-correlation (NC). The normalized cross-correlation is calculated between the extracted watermark and the original watermark under the different attack conditions. The performance of this algorithm is satisfactory against image processing attacks like histogram equalization, gamma correction, image sharpening and contrast enhancement. But the algorithm fails to give adequate performance against JPEG compression, noising, cropping, resizing, etc.
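For reference, the NC used to score the extracted watermark can be computed as in the sketch below; this particular normalization is one common definition and is an assumption, since the exact formula follows the cited work rather than being given here.

```python
import numpy as np

def normalized_cross_correlation(w_orig, w_extracted):
    """NC between the original and extracted watermarks (common normalized form)."""
    a = w_orig.astype(float).ravel()
    b = w_extracted.astype(float).ravel()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```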

TABLE 1. PERFORMANCE COMPARISON OF DIFFERENT SETS OF BANDS FOR THE WATERMARKED IMAGE FOR DIFFERENT VALUES OF α, AVERAGED OVER 300 IMAGES (PSNR in dB)

Set of bands | MSE (α=5) | PSNR (α=5) | SSIM (α=5) | MSE (α=8) | PSNR (α=8) | SSIM (α=8) | MSE (α=10) | PSNR (α=10) | SSIM (α=10)
Sub-band 1 | 25.8516 | 35.8618 | 0.9225 | 25.9609 | 35.8161 | 0.9210 | 26.0533 | 35.7765 | 0.9196
Sub-band 2 | 6.5441 | 41.9086 | 0.9748 | 6.6462 | 41.9086 | 0.9689 | 6.7366 | 41.7627 | 0.9673
Sub-band 3 | 1.9439 | 47.0981 | 0.9993 | 2.0416 | 46.6856 | 0.9992 | 2.1367 | 46.3357 | 0.9992
Sub-band 4 | 1.8454 | 47.6016 | 0.9994 | 1.9411 | 47.1139 | 0.9992 | 2.0367 | 46.7162 | 0.9992
Sub-band 5 | 1.5548 | 48.2103 | 0.9995 | 2.1367 | 46.3357 | 0.9992 | 1.7498 | 47.2314 | 0.9993
Sub-band 6 | 1.4049 | 48.6122 | 0.9995 | 1.5068 | 48.0095 | 0.9994 | 1.5955 | 47.5640 | 0.9993
Sub-band 7 | 1.2277 | 49.3919 | 0.9996 | 1.3276 | 48.6476 | 0.9995 | 1.4202 | 48.1127 | 0.9994
Sub-band 8 | 0.7145 | 51.4232 | 0.9997 | 0.8174 | 50.3090 | 0.9996 | 0.9110 | 49.5921 | 0.9995
Sub-band 9 | 0.7514 | 51.1948 | 0.9997 | 0.8526 | 50.1388 | 0.9996 | 0.9447 | 49.4415 | 0.9995

TABLE 2. PERFORMANCE COMPARISON OF DIFFERENT SETS OF BANDS FOR THE EXTRACTED WATERMARK FOR DIFFERENT VALUES OF α, AVERAGED OVER 300 IMAGES (PSNR in dB)

Set of bands | MSE (α=5) | PSNR (α=5) | SSIM (α=5) | MSE (α=8) | PSNR (α=8) | SSIM (α=8) | MSE (α=10) | PSNR (α=10) | SSIM (α=10)
Sub-band 1 | 25.8516 | 35.8618 | 0.9225 | 25.9609 | 35.8161 | 0.9210 | 26.0533 | 35.7765 | 0.9196
Sub-band 2 | 6.5441 | 41.9086 | 0.9748 | 6.6462 | 41.9086 | 0.9689 | 6.7366 | 41.7627 | 0.9673
Sub-band 3 | 1.9439 | 47.0981 | 0.9993 | 2.0416 | 46.6856 | 0.9992 | 2.1367 | 46.3357 | 0.9992
Sub-band 4 | 1.8454 | 47.6016 | 0.9994 | 1.9411 | 47.1139 | 0.9992 | 2.0367 | 46.7162 | 0.9992
Sub-band 5 | 1.5548 | 48.2103 | 0.9995 | 2.1367 | 46.3357 | 0.9992 | 1.7498 | 47.2314 | 0.9993
Sub-band 6 | 1.4049 | 48.6122 | 0.9995 | 1.5068 | 48.0095 | 0.9994 | 1.5955 | 47.5640 | 0.9993
Sub-band 7 | 1.2277 | 49.3919 | 0.9996 | 1.3276 | 48.6476 | 0.9995 | 1.4202 | 48.1127 | 0.9994
Sub-band 8 | 0.7145 | 51.4232 | 0.9997 | 0.8174 | 50.3090 | 0.9996 | 0.9110 | 49.5921 | 0.9995
Sub-band 9 | 0.7514 | 51.1948 | 0.9997 | 0.8526 | 50.1388 | 0.9996 | 0.9447 | 49.4415 | 0.9995

TABLE 3. NC VALUES OF THE EXTRACTED WATERMARK UNDER DIFFERENT ATTACKS OVER 10 IMAGES

Image | HE | GC | JPEG | SPN | IS | CE | CR | RS
Lena | 0.8909 | 0.9348 | 0.7875 | 0.7596 | 0.9697 | 0.8648 | 0.8325 | 0.4630
Baboon | 0.8126 | 0.9314 | 0.7751 | 0.7608 | 0.9138 | 0.8448 | 0.8295 | 0.3959
Peppers | 0.9380 | 0.9165 | 0.6777 | 0.7376 | 0.9751 | 0.8928 | 0.8335 | 0.4779
Cameraman | 0.7870 | 0.7334 | 0.7663 | 0.7648 | 0.9503 | 0.8734 | 0.8372 | 0.4492
Airplane | 0.7478 | 0.9733 | 0.4593 | 0.7689 | 0.9505 | 0.8765 | 0.8315 | 0.4462
Galaxy | 0.6466 | 0.9914 | 0.7814 | 0.7596 | 0.9838 | 0.8751 | 0.8186 | 0.4563
Aerial | 0.5341 | 0.9160 | 0.7701 | 0.7608 | 0.8832 | 0.6257 | 0.8261 | 0.3732
Brick wall | 0.5960 | 0.9806 | 0.7829 | 0.7376 | 0.9689 | 0.6784 | 0.8258 | 0.4173
Chest X-ray | 0.8073 | 0.7604 | 0.6236 | 0.7648 | 0.9769 | 0.9092 | 0.8116 | 0.3999
Head CT-Vandy | 0.7566 | 0.3963 | 0.2293 | 0.7689 | 0.8642 | 0.8616 | 0.6943 | 0.2052
Figure 4. PSNR (dB) of the different watermarked images with sub-band for α = 8.

Figure 5. SSIM of the different watermarked images with sub-band for α = 8.

Figure 6. SSIM of the extracted watermark for five different cover images with sub-band for α = 8.

Figure 7. Lena watermarked image and extracted watermark for different sets of sub-bands for α = 5. (a) Watermarked image for sub-band 1, (b) Watermarked image for sub-band 3, (c) Extracted watermark for sub-band 1, (d) Extracted watermark for sub-band 3, (e) Watermarked image for sub-band 5, (f) Watermarked image for sub-band 8, (g) Extracted watermark for sub-band 5, (h) Extracted watermark for sub-band 8.

IV. CONCLUSION
A detailed analysis of a joint DWT-DCT based digital image watermarking scheme has been presented in this work. The 3-level DWT transformed DCT coefficients are modified to embed a watermark. This blind algorithm provides a maximum average (over 300 images) imperceptibility of 50.3090 dB when the watermark (32×32) is embedded in the most optimum sub-bands of the host image. Sub-band 8, taken from the diagonal sub-bands of the high frequency (HH) band, is found to be the optimal sub-band. Both the subjective and objective evaluation analyses show that the performance of the watermarking algorithm is satisfactory in achieving imperceptibility and robustness. With an increase of the gain factor, imperceptibility decreases but robustness increases, and vice versa. The gain factor α = 8 provides the optimum performance for both imperceptibility and robustness in terms of the quality measurements. It has been observed that satisfactory performance using this algorithm is achieved against various attacks such as histogram equalization, image sharpening, contrast enhancement and gamma correction, but the algorithm fails to achieve adequate performance against geometrical attacks, JPEG compression and de-noising.
As an extension of this work, we will try to implement a robust system that can resist geometrical attacks, JPEG
compression, filtering, etc., while keeping the imperceptibility at a satisfactory level. Also, this algorithm may be extended for the embedding and extraction of a gray scale watermark in a color image.

ACKNOWLEDGMENT
The authors would like to acknowledge all the members of the Speech and Image Processing Laboratory, Department of Electronics and Communication Engineering, National Institute of Technology Silchar, India for providing support and the necessary facilities for carrying out this work.

REFERENCES
[1] I. J. Cox, M. L. Miller, J. A. Bloom, and C. Honsinger, Digital Watermarking, vol. 53, 2002.
[2] M. U. Celik, G. Sharma, E. Saber, and A. M. Tekalp, "Hierarchical watermarking for secure image authentication with localization," IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 585-595, 2002.
[3] N. Yadav and K. Singh, "An efficient robust watermarking scheme for varying sized blocks," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 24, no. 4, 2016.
[4] N. Muhammad and N. Bibi, "Digital image watermarking using partial pivoting lower and upper triangular decomposition into the wavelet domain," IET Image Processing, vol. 9, no. 9, pp. 795-803, 2015.
[5] M. Barni, F. Bartolini, V. Cappellini, and A. Piva, "A DCT-domain system for robust image watermarking," Signal Processing, vol. 66, no. 3, pp. 357-372, 1998.
[6] S. D. Lin and C. F. Chen, "A robust DCT-based watermarking for copyright protection," IEEE Transactions on Consumer Electronics, vol. 46, no. 3, pp. 415-421, 2000.
[7] V. Solachidis and L. Pitas, "Circularly symmetric watermark embedding in 2-D DFT domain," IEEE Transactions on Image Processing, vol. 10, no. 11, pp. 1741-1753, 2001.
[8] A. Poljicak, L. Mandic, and D. Agic, "Discrete Fourier transform-based watermarking method with an optimal implementation radius," Journal of Electronic Imaging, vol. 20, no. 3, p. 033008, 2011.
[9] R. Liu and T. Tan, "An SVD-based watermarking scheme for protecting rightful ownership," IEEE Transactions on Multimedia, vol. 4, no. 1, pp. 121-128, 2002.
[10] C. C. Lai, "A digital watermarking scheme based on singular value decomposition and tiny genetic algorithm," Digital Signal Processing, vol. 21, no. 4, pp. 522-527, 2011.
[11] C. C. Lai and C. C. Tsai, "Digital image watermarking using discrete wavelet transform and singular value decomposition," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 11, pp. 3060-3063, 2010.
[12] J. Guo, P. Zheng, and J. Huang, "Secure watermarking scheme against watermark attacks in the encrypted domain," Journal of Visual Communication and Image Representation, vol. 30, pp. 125-135, 2015.
[13] J. Varghese, S. Subash, O. B. Hussain, K. Nallaperumal, M. R. Saady, and M. S. Khan, "An improved digital image watermarking scheme using the discrete Fourier transform and singular value decomposition," Turkish Journal of Electrical Engineering & Computer Sciences, 2015.
[14] O. Jane and E. Elbaşı, "A new approach of nonblind watermarking methods based on DWT and SVD via LU decomposition," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 22, no. 5, pp. 1354-1366, 2014.
[15] J. M. Shieh, D. C. Lou, and M. C. Chang, "A semi-blind digital watermarking scheme based on singular value decomposition," Computer Standards & Interfaces, vol. 28, no. 4, pp. 428-440, 2006.
[16] G. Bhatnagar and Q. J. Wu, "A new logo watermarking based on redundant fractional wavelet transform," Mathematical and Computer Modelling, vol. 58, no. 1, pp. 204-218, 2013.
[17] M. Ali, C. W. Ahn, and P. Siarry, "Differential evolution algorithm for the selection of optimal scaling factors in image watermarking," Engineering Applications of Artificial Intelligence, vol. 31, pp. 15-26, 2014.
[18] R. H. Laskar, M. Choudhury, K. Chakraborty, and S. Chakraborty, "A joint DWT-DCT based robust digital watermarking algorithm for ownership verification of digital images," in Computer Networks and Intelligent Computing, pp. 482-491, Springer Berlin Heidelberg, 2011.
[19] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in 20th International Conference on Pattern Recognition (ICPR), pp. 2366-2369, IEEE, 2010.
[20] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

AUTHORS PROFILE

Mohiul Islam received his M.Tech degree in 2014 from the National Institute of Technology Agartala and his B.E. degree from Assam Engineering College in 2011. He is currently pursuing a Ph.D. in the Department of Electronics and Communication Engineering at the National Institute of Technology, Silchar. His research interests include Image Processing, Machine Learning and Digital Signal Processing.
E-mail: mohiul292@gmail.com
Amarjit Roy has received his M.Tech


degree in 2014 from National Institute of
Technology Silchar and B.Tech degree
from West Bengal University of
Technology (WBUT) in 2012. He is
currently pursuing Ph.D. in the Department
of Electronics and Communication
Engineering at National Institute of
Technology, Silchar. His research interests
include Image processing, Machine
Learning, Digital Signal Processing.
E-mail: royamarjit90@gmail.com

Rabul Hussain Laskar has completed his


PhD from National Institute of Technology,
Silchar, India and his M.Tech from Indian
Institute of Technology, Guwahati. He is
currently working as Head and Assistant
Professor in the Department of Electronics
and Communication Engineering at NIT
Silchar. His major research interests are in
Speech processing, Image processing,
Digital signal Processing.
E-mail: rabul18@yahoo.com

Image Denoising using Hybrid of Bilateral Filter and


Histogram Based Multi-Thresholding With
Optimization Technique for WSN
H. Rekha
Research Scholar
Department of Electronics Engineering
Pondicherry University
Pondicherry, India
Saathvekha16@gmail.com

P. Samundiswary
Assistant Professor
Department of Electronics Engineering
Pondicherry University
Pondicherry, India
samundiswary_pdy@yahoo.com

Abstract—The growth of real-time image based applications in Wireless Sensor Networks (WSN) leads to the necessity of learning the behavior of noises. The quality of the image is mainly determined by the amount of noise density present in the image. Hence, in order to suppress the impact of the noise level on the image, image denoising is utilized. In this context, the implementation of the proposed denoising algorithm is done in two phases. The first phase performs image denoising by applying bilateral filtering, and the second phase utilizes Histogram based Multilevel Thresholding (HMT) with an optimization technique. The HMT is mainly used to find the optimal thresholds and enhance the image features and edges with fine tuning. Further, to reduce the computation time, the harmony search algorithm based optimization technique is appended to the proposed algorithm and tested for various noisy images with different entropies by using MATLAB simulation.

Keywords-Denoising; Entropy; Histogram; Multi-thresholding; Optimization

I. INTRODUCTION
Recently, many applications including object tracking, forest monitoring, tele-medicine, traffic control and so on choose WSN as a basic networking technology for communication. The main advantage of using WSN is that it can be deployed anywhere without infrastructure and with less computation cost. Despite these advantages, WSN has limited power, less storage space and a low communication range. When it comes to image based applications, the major factor affecting the image during sensing, transmission and reception is noise. From the case studies [1]-[4], it is understood that there are several types of noises present during the transmission and reception of images using WSN. The noises are categorized in terms of their density level and characteristics with respect to the atmospheric condition and the type of application. The practical noises that frequently damage images are Additive White Gaussian Noise (AWGN), salt and pepper noise, shot noise and so on. Out of these practical noises, the impact of Gaussian noise during image transmission produces more severe problems than the other practical noises. Until now, a lot of research has been carried out on image denoising techniques, trying to find the best solution for suppressing the impact of the noise density level of AWGN on the image. But the computational cost of implementing the image denoising algorithm remains a problem. In particular, the transform based techniques [5]-[8] have succeeded in reducing the noise level with better image quality, but at the cost of high energy consumption. In recent years, spatial based techniques [9]-[15] have also been introduced in the field of denoising to reduce the computation cost. However, the complexity has been reduced only to a certain extent.

In the meantime, researchers have focused on meta-heuristic optimization algorithms and incorporated them in many image processing applications in order to reduce the searching period as well as to find the optimal solution [16][17]. Hence, Jonatas Lopes De Paiva et al. [18] made an attempt to hybridize one of the optimization algorithms, the genetic algorithm, with three different denoising methods, namely Block Matching and 3D filtering (BM3D) [19], wiener-chop [20] and anisotropic diffusion [21], to improve the denoising quality with less complexity. Apart from that, Abhijit Chandra and Sudipta Chattopadhyay [22] designed a better low pass filter for denoising along with the incorporation of a Differential Evolution (DE) based optimization method to evaluate the fittest optimal filter coefficients. Later, A. K. Bhandari et al. [23] used a wavelet based adaptive thresholding technique with the differential evolution algorithm to eradicate the previous research problems like edge smoothing and poor image quality. Though the above methods show very good results in terms of image quality and performance metrics such as Peak Signal to Noise Ratio (PSNR), Image Quality Index (IQI), Mean Squared Error (MSE) and Normalized Absolute Error (NAE), the time required to compute the algorithms is large, which makes them not applicable to low power wireless networks like wireless sensor networks.

Hence, it is necessary to propose an efficient denoising algorithm with less computation time for efficient AWGN noise reduction. This paper concentrates on a bilateral filtering based denoising technique along with histogram based multi-thresholding instead of using the wavelet based method. In addition, a better optimization mechanism called the harmony search algorithm is appended to the proposed denoising
method to reduce the computational complexity without compromising the other performance metrics.

The rest of the paper is organized as follows: Section II gives a brief description of the existing techniques. Section III explains the concept of the proposed denoising method, which includes a detailed introduction to bilateral filtering and histogram based multi-thresholding with the optimization technique. Section IV deals with the comparison of the proposed algorithm using different optimization techniques against the existing ones in terms of their performance metrics. Finally, Section V concludes the proposed work based on the result analysis, and the future work is also given in Section V.

II. EXISTING TECHNIQUES USING OPTIMIZATION
Denoising is one of the efficient ways to suppress the noise in an image. The noise model taken into consideration for most of the denoising techniques is AWGN. Over the past three decades, different types of filters have been used in image denoising, such as bilateral filtering, FIR filtering, stochastic filtering and so on, to reduce the noise variance (σ). But Jing Tian [24] proposed that wavelet based algorithms are more effective in tackling the image denoising problem than other filters. The selection of the thresholds from the wavelet coefficients offers the noise free image. However, it leads to an intra-scale dependency for estimating the value of the signal variance using neighboring coefficients. So, to address the above mentioned problem, the homogeneous local neighboring coefficients are taken into account instead of using the whole set of wavelet coefficients. This selection process is achieved by the Ant Colony Optimization (ACO) method [25]. The ACO based denoising shows very good results in terms of PSNR, MSE and image quality. However, the running time of the ACO based denoising is longer compared to that of the existing wavelet based denoising techniques. Also, the use of the genetic algorithm in image denoising [26] suffers in terms of computation time and average fitness value when the population size is varied from minimum to maximum.

On the other hand, Jonatas et al. [18] examined the properties of denoising methods using different evolutionary algorithms and developed a hybrid of the genetic algorithm and differential evolution to improve the performance of existing denoising methods such as BM3D, the anisotropic diffusion method and the Wiener chop method. The computational cost of the above method mainly depends upon the number of iterations utilized to improve the performance of the denoising method. Hence, from the existing research, it is understood that the trade-off between the computation time and the performance metrics such as PSNR, NAE and IQI is complicated.

III. PROPOSED DENOISING TECHNIQUE
In this paper, instead of using wavelet thresholding, the combination of histogram based multi-thresholding and bilateral filtering is used to suppress the noise level in the input image. Generally, the effect of the bilateral filter on noisy images is not perfect at image edges and fine details. Hence, to improve the quality of the image by preserving the edges without compromising the other metrics, the HMT [27] with a Harmony Search Algorithm (HSA) based optimization algorithm is proposed in this paper. The working principle of the proposed model can be explained using Fig. 1.

Generally, the difference between the input image and its noise free output image shows the impact of the denoising method on noisy images. The bilateral filter is a nonlinear weighted averaging filter that utilizes a non-iterative technique to reduce the noise density of the image by considering both the spatial and the intensity distance of the neighboring pixels. Here each pixel value is replaced by its weighted average. Mathematically, the output of the bilateral filter [9] at a pixel location P is denoted as IF and is expressed by

IF(P) = (1/w) Σ_{q∈S} c(‖P − q‖) s(|I(P) − I(q)|) I(q)          (1)

where,
c - geometric closeness function
s - gray level similarity function
w - normalization constant
S - spatial neighborhood of P
‖P − q‖ - Euclidean distance between P and q

Then the difference between the input image and the bilateral filter output is fed into the histogram based multi-thresholding to find the optimal thresholds. This can be described using the following steps:

M = I − IF          (2)

where I is the original image and IF is the output of the denoising operator, the bilateral filter. The major effect of the bilateral filtering technique on the noisy image is to average out the noise along with the image fine details while preserving the edges and ignoring the outliers. If wavelet thresholding is applied after the bilateral filter, the computational complexity is higher and the texture is blurred, which is not applicable for WSN. Instead, the difference image M is partitioned using its histogram, and the selection of the thresholds for each histogram bin determines the quality of the output image. If a larger number of thresholds (classes G0-Gk) is chosen [28], then the sharpness of the image is better, i.e., the output of the reconstructed image is enhanced.

G0 = {(x, y) ∈ M | 0 ≤ f(x, y) ≤ t1 − 1}          (3)
G1 = {(x, y) ∈ M | t1 ≤ f(x, y) ≤ t2 − 1}          (4)
G2 = {(x, y) ∈ M | t2 ≤ f(x, y) ≤ t3 − 1}          (5)
...
Gk = {(x, y) ∈ M | tk ≤ f(x, y) ≤ L − 1}          (6)
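A direct (unoptimized) NumPy rendering of the bilateral filter of Eq. (1) and the difference image of Eq. (2) is sketched below; the Gaussian forms chosen for the closeness and similarity functions and the parameter values are common defaults, not values specified in the paper.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Eq. (1): weighted average with spatial (closeness) and intensity (similarity) kernels."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    closeness = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))      # c(||P - q||)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            similarity = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))   # s(|I(P)-I(q)|)
            weights = closeness * similarity
            out[y, x] = np.sum(weights * patch) / np.sum(weights)             # 1/w normalization
    return out

# difference image passed to the histogram-based multi-thresholding stage, Eq. (2):
# M = I - bilateral_filter(I)
```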

After separating the image into classes, the pixels of each class are averaged by using the entropy. The expected value of the information H(X) using the Shannon entropy [27] is expressed in Eq. (7):

H(X) = −Σ_i p(i) log2 p(i)          (7)

To improve the performance and to reduce the computational complexity of the entropy computation, the optimization algorithm is incorporated with the Shannon entropy. In this paper, the HSA is used as the optimizer to find the best threshold values for each bin. Unlike other optimization methods, the HSA [28][29] imposes fewer mathematical requirements and needs less memory space while giving better accuracy. However, its low convergence rate and weak local search performance can lead to a large processing time. This can be corrected by fixing the parameter values such as the pitch adjusting rate, the harmony memory consideration rate and the bandwidth. The design and implementation of the HSA is more efficient compared to that of existing optimization algorithms like the genetic algorithm, the firefly algorithm, particle swarm optimization and also the cuckoo search algorithm [30][31]. Finally, the output of the HMT (D) and the output of the bilateral filter (IF) are added to improve the image edge sharpness and to preserve more image details without noise.
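The class partition of Eqs. (3)-(6) together with the Shannon entropy of Eq. (7) can be turned into a fitness function for the threshold search. The sketch below sums the entropies of the histogram classes defined by a candidate threshold vector; the exact objective used by the authors is not spelled out here, so this is one plausible formulation, and it assumes the difference image M has been shifted and scaled into the 0-255 range.

```python
import numpy as np

def shannon_entropy_objective(image, thresholds, n_levels=256):
    """Sum of Shannon entropies (Eq. 7) of the classes G0..Gk defined by the thresholds."""
    hist, _ = np.histogram(image.ravel(), bins=n_levels, range=(0, n_levels))
    p = hist.astype(float) / max(hist.sum(), 1)
    edges = [0] + sorted(int(t) for t in thresholds) + [n_levels]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        cls = p[lo:hi]
        w = cls.sum()
        if w <= 0:
            continue
        q = cls[cls > 0] / w               # normalized class distribution
        total += -np.sum(q * np.log2(q))   # H = -sum p log2 p
    return total
```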

[Fig. 1 block diagram: Input image (I) → Bilateral filter → IF; M = I − IF → split M into bins using the histogram → find the thresholds of each bin using entropy and HSA → group the optimal thresholds to construct the denoised component D → Denoised image (O).]
Fig 1 Proposed image denoising method using HMT with optimization
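For completeness, a compact harmony-search loop that could drive the threshold selection is sketched below; the HMS, HMCR, PAR and bandwidth defaults mirror the values fixed later in Table 2, while the continuous encoding, the pitch-adjustment step and the worst-replacement rule are implementation choices rather than the authors' design.

```python
import numpy as np

def harmony_search(fitness, n_thresholds=4, hms=100, hmcr=0.95, par=0.9, bw=1.0,
                   n_iter=50, lower=0, upper=255, seed=0):
    """Generic HSA maximizer over threshold vectors in [lower, upper]."""
    rng = np.random.default_rng(seed)
    hm = np.sort(rng.uniform(lower, upper, size=(hms, n_thresholds)), axis=1)  # harmony memory
    fit = np.array([fitness(h) for h in hm])
    for _ in range(n_iter):
        new = np.empty(n_thresholds)
        for j in range(n_thresholds):
            if rng.random() < hmcr:                    # harmony memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjustment within bandwidth bw
                    new[j] = np.clip(new[j] + rng.uniform(-bw, bw), lower, upper)
            else:                                      # random improvisation
                new[j] = rng.uniform(lower, upper)
        new.sort()
        f_new = fitness(new)
        worst = int(np.argmin(fit))
        if f_new > fit[worst]:                         # keep the new harmony if it beats the worst
            hm[worst], fit[worst] = new, f_new
    best = int(np.argmax(fit))
    return hm[best], float(fit[best])

# usage with the entropy objective sketched above (M shifted to the 0-255 range):
# thresholds, score = harmony_search(lambda t: shannon_entropy_objective(M_shifted, t))
```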

IV. RESULT ANALYSIS
Denoising techniques for wireless sensor networks are seriously affected by the large amount of energy consumed due to their computational cost. Therefore, this paper mainly concentrates on the analysis of the computation time of the proposed algorithm, along with the other performance metrics like PSNR, NAE and IQI, in order to reduce the energy required to process the image. Before incorporating the optimization algorithm into the proposed model, it is necessary to understand the basic characteristics and parameters of each optimization algorithm. The parameter selection for each optimization algorithm is given below.

DE: Differential evolution is a global optimization technique and an iterative population-based optimizer like other evolutionary algorithms. It takes roughly the same searching time for different image sizes. The DE parameters listed in Table 1, taken from [27], are used for the simulation.

TABLE-1 Parameters of DE
Parameter | Value
Differential weight (F) | 0.5
Crossover probability (CR) | 0.95
Population size (NP) | 50
Lower and upper bound | [0, 255]

HSA: As inferred from the literature survey [28][29], the parameters of the HSA used in the simulation are fixed as mentioned in Table 2.

TABLE-2 Parameters of HSA
Parameter | Value
Harmony Memory Size | 100
Pitch Adjusting Rate | 0.9
Harmony Memory Consideration Rate | 0.95
Bandwidth | [0, 1]

Particle Swarm Optimization (PSO): The PSO algorithm [30][31] is one of the heuristic global optimization technologies, created from research on the group movement behavior of birds and fish. The collaboration between the individual particles helps to search for the optimal solution. It sets a simple behavior rule for each individual particle, so that the entire particle swarm shows complex features. From the literature review [30][31], the parameters of PSO considered for the simulation are mentioned in Table 3.

TABLE-3 Parameters of PSO
Parameter | Value
Inertia weight (W) | Wmax = 0.9, Wmin = 0.4
Acceleration factor (C) | C1 = C2 = 2
Population size (NP) | 100
Lower and upper bound | [0, 255]

ACO: Based on recent research analysis [24]-[27], the

parameter selection of the ACO is done and mentioned in Table 4. ACO is a nature-inspired optimization technique. The working principle of ACO based optimization is obtained from the natural behavior of real-world ant colonies. The important collective behavior is the foraging behavior, which guides ants along short paths to their food sources; ants deposit pheromone on the ground to mark favorable paths that should be followed by other members of the colony.

TABLE-4 Parameters of ACO
Parameter | Value
Step size (L) | 15
Pheromone information (α) | 1
Heuristic information (β) | 2
Pheromone decay coefficient (ψ) | 0.3

After the parameter selection of the optimization algorithms, the proposed algorithm and the existing techniques are simulated using MATLAB R2014. The number of iterations for all algorithms is fixed to 50. For the performance analysis of the proposed algorithm, two standard test images with different sizes are considered. The LENA grayscale image is considered as the first image, with a size of 256×256, and the other is a Boat image of size 512×512. AWGN is added to the input images by perturbing the image pixels with a Gaussian random variable of zero mean and a given variance [2]. The noisy input images and the denoised output images for different Gaussian noise density levels (σ) are illustrated in Fig 2. The performance metrics [15-20] used to judge the efficiency of the proposed method are PSNR, IQI, MSE, NAE, computation time and so on. Before the detailed analysis, a brief introduction to each metric is given below.

a) PSNR: PSNR is one of the best metrics for measuring the quality of the reconstructed signal; it is calculated as the ratio between the maximum signal power and the noise power. The formula for calculating the PSNR is shown in the equation below:

PSNR = 10 log10(MAX^2 / MSE)          (8)

where MAX is the maximum possible pixel value (255 for 8-bit images) and MSE is the Mean Squared Error between the original and the reconstructed image, which can be formulated as

MSE = (1/(M×N)) Σ_{x=1..M} Σ_{y=1..N} [I(x, y) − O(x, y)]^2          (9)

b) NAE: The NAE is a criterion to evaluate the ability to preserve the information of the original image. The quality of the image can be determined from the NAE value: if it is low, the image quality is good; otherwise the image is not suited for further processing. It is defined as follows:

NAE = Σ_x Σ_y |I(x, y) − O(x, y)| / Σ_x Σ_y |I(x, y)|          (10)

c) IQI: The IQI can be calculated by using three factors, namely loss of correlation, luminance distortion and contrast distortion. For a high quality reconstructed image, the value of IQI is close to one.

IQI = Corr(I, O) * Lum(I, O) * Cont(I, O)          (11)
HSA with kapur 11.12
a) PSNR: PSNR is one of the best metrics mainly used to
HSA with Shannon 10.33
measure the quality of the reconstructed signal and it can be
HSA with fuzzy 11.21
calculated by finding the ratio between the maximum signal
power to noise power. The formula for calculating the PSNR
To improve the performance of the proposed algorithm
is shown in the below equation,
further, three different entropies such as Shannon, fuzzy and
kapur entropy are taken into account for analysis. Out of these,
PSNR=10log (8) the performance of the harmony search algorithm with
Shannon entropy provides less computation time than that of
where MSE- Mean Squared Error between the original and the other methods. From Table-5, it is clearly visible that the
reconstructed image, this can be formulated as, running time of the proposed denoising algorithm using HSA
with Shannon entropy is better than that of the other entropy
combinations. Hence for further analysis, this paper
1
= , − . concentrates only on the combination of bilateral filtering and
× the HMT using HSA with Shannon entropy.
(9)
To understand the overall performance of the proposed
b) NAE: It is a criterion to evaluate the ability of preserving denoising algorithm with optimization technique, each
the information of the original image. The quality of the image algorithm has taken 100 numbers of trials. Also the proposed
can be determined from the NAE value. If it is low, the image optimization technique is tested with various entropies to
quality is good otherwise the image is not suited for further improve the sharpness of the image edges. It is inferred
processing. It is defined as follows: through the Table 6 and Table 7 that the proposed denoising


method with HSA has low NAE and performs well by eliminating the noise, and also restores the image quality comparatively better than the other approaches. Moreover, the comparison in Table-7 shows that the proposed algorithm achieves approximately equal values of PSNR and IQI to those of the existing techniques, with less average computation time. Finally, the overall simulation results prove that the proposed algorithm is efficient and more appropriate for wireless sensor networks in terms of less running time and better image quality, as shown in Fig. 2.
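As a concrete illustration of equations (8)-(11), the following is a minimal MATLAB sketch that computes PSNR, MSE, NAE and a three-factor quality index for a pair of 8-bit grayscale images. The IQI here uses the standard luminance/contrast/correlation decomposition of the universal image quality index, assumed to match the factors named above; the file and variable names are illustrative assumptions.

% I: original image, O: denoised image (grayscale assumed, same size)
I = double(imread('lena_noisy.png'));     % assumed file names, for illustration
O = double(imread('lena_denoised.png'));
[M, N] = size(I);
MSE  = sum(sum((I - O).^2)) / (M * N);                % eq. (9)
PSNR = 10 * log10(255^2 / MSE);                       % eq. (8)
NAE  = sum(sum(abs(I - O))) / sum(sum(abs(I)));       % eq. (10)
% Three-factor index: loss of correlation, luminance and contrast distortion
mi = mean(I(:));  mo = mean(O(:));
si = std(I(:));   so = std(O(:));
sio  = mean((I(:) - mi) .* (O(:) - mo));              % covariance of I and O
Corr = sio / (si * so);
Lum  = 2 * mi * mo / (mi^2 + mo^2);
Cont = 2 * si * so / (si^2 + so^2);
IQI  = Corr * Lum * Cont;                             % eq. (11)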
TABLE 6 Performance comparison of the proposed denoising method with optimizing technique for different entropies (Boat image)

Noise level              σ = 10                                σ = 20                                σ = 30
Metric / Optimization    HSA+fuzzy  HSA+Shannon  HSA+Kapur    HSA+fuzzy  HSA+Shannon  HSA+Kapur    HSA+fuzzy  HSA+Shannon  HSA+Kapur
PSNR                     30.69      30.758       30.625       28.399     28.263       28.694       26.63      26.184       26.189
NAE                      0.0197     0.0203       0.0192       0.0260     0.0251       0.0278       0.0355     0.0314       0.0327
IQI                      0.9896     0.9896       0.9895       0.9804     0.9806       0.9818       0.9663     0.9661       0.9645

TABLE 7 Performance comparison of the different optimization based denoising techniques with the proposed denoising algorithm
Image   Noise level             σ = 10                  σ = 20                  σ = 30                  σ = 40
        Metric / Optimization   ACO    DE     HSA      ACO    DE     HSA      ACO    DE     HSA      ACO    DE     HSA
Boat    PSNR                    31.47  30.70  30.76    28.49  28.78  28.26    26.39  26.21  26.19    24.11  24.36  24.38
        NAE                     0.036  0.018  0.020    0.054  0.028  0.025    0.068  0.032  0.0327   0.082  0.038  0.039
        IQI                     0.992  0.989  0.989    0.980  0.982  0.981    0.966  0.966  0.965    0.943  0.947  0.947
Lena    PSNR                    32.79  32.49  33.48    29.87  28.27  30.15    27.6   25.62  27.39    25.98  24.75  26.12
        NAE                     0.04   0.017  0.017    0.062  0.026  0.025    0.080  0.032  0.033    0.097  0.046  0.045
        IQI                     0.995  0.992  0.996    0.988  0.984  0.990    0.98   0.970  0.975    0.970  0.962  0.977

Fig. 2 Input Boat image with different noise density levels (σ = 10, 20, 30, 40)


V. CONCLUSION

Wireless sensor networks are one of the promising platforms for a variety of applications, ranging from military to health-oriented applications. However, transmission of image content through a WSN has several bottlenecks, including the domination of noise, limited bandwidth and low power. During transmission and reception, the concentration of noise in the image causes severe problems. In general, the existing denoising techniques utilize the bilateral filter alone; however, the denoising output is not perfect in terms of edge smoothing and fine-detail processing in the image. In this paper, a denoising technique using the combination of bilateral filtering and histogram based multi-thresholding with the aid of an optimization algorithm has been proposed to reduce the impact of noise over the image. Further, the HMT, with excellent features such as image enhancement and simple construction, is combined with the bilateral filter to make this algorithm suitable for image denoising. The main objective of the proposed method is to reduce the effect of Gaussian noise without affecting the image features. The optimization algorithm, namely the harmony search algorithm, is added to the proposed denoising algorithm to further reduce its running time. The performance metrics such as PSNR, IQI and NAE of the proposed method have been tested with different images. In general, the NAE should be low and the IQI should be nearly equal to one for a good quality reconstructed image. From the simulation results, it is evident that the proposed denoising algorithm using HSA with Shannon entropy satisfied both the NAE and IQI metrics with a better PSNR value. Future work may be carried out by properly selecting the optimum values of the HSA based parameters. The work can be further extended by incorporating dynamically varying parameters instead of the fixed parameters in the optimization algorithm with the proposed denoising method.

REFERENCES
[1] E.R. McVeigh, R.M. Henkelman, and M.J. Bronskill, "Noise and filtration in magnetic resonance imaging," Medical Physics, vol. 12, no. 5, pp. 586-591, 1985.
[2] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[3] Marpe, D., Cycon, H.L., Zander, G., Barthel, K.-U., "Context-based denoising of images using iterative wavelet thresholding," Proceedings of SPIE on Visual Communications and Image Processing, vol. 4671, pp. 907-914, 2002.
[4] Buades, A., Coll, B., Morel, J.M., "A review of image denoising methods, with a new one," Multiscale Modelling and Simulation, vol. 4, no. 2, pp. 490-530, 2005.
[5] Wenxuan, S., Jie, L., Minyuan, W., "An image denoising method based on multiscale wavelet thresholding and bilateral filtering," Wuhan University Journal of Natural Sciences, vol. 15, no. 2, pp. 148-152, 2010.
[6] Ashish Khare and Uma Shanker Tiwary, "Daubechies complex wavelet transform based technique for denoising of medical images," International Journal of Image and Graphics, vol. 07, no. 04, pp. 663-687, Oct. 2007.
[7] Ghazel, M., "Adaptive Fractal and Wavelet Image Denoising," PhD thesis, Department of Electrical & Computer Engineering, University of Waterloo, Ontario, Canada, 2004.
[8] Lahcene Mitiche, Amel Baha Houde, Adamon-Mitiche and Hilal Naimi, "Medical image denoising using dual tree complex thresholding wavelet transform," Proceedings of IEEE Conference on Applied Electrical and Computing Technologies, Jordan, pp. 1-5, 3-5 Dec. 2013.
[9] Ming Zhang and Bahadir K. Gunturk, "Multiresolution Bilateral Filtering for Image Denoising," IEEE Transactions on Image Processing, vol. 17, no. 12, pp. 2324-2333, December 2008.
[10] Sudipta Roy, Nidul Sinha and Asoke K. Sen, "A New Hybrid Image Denoising Method," International Journal of Information Technology and Knowledge Management, vol. 2, no. 2, pp. 491-497, July-December 2010.
[11] R.D. da Silva, R. Minetto, W.R. Schwartz, H. Pedrini, "Adaptive edge-preserving image denoising using wavelet transforms," Pattern Analysis and Applications, vol. 16, pp. 567-580, 2013.
[12] S. G. Chang, B. Yu, and M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression," IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1532-1546, Sep. 2000.
[13] Elad, M., Aharon, M., "Image denoising via learned dictionaries and sparse representation," Proceedings of IEEE Computer Vision and Pattern Recognition, June 2006.
[14] Tomasi, C., Manduchi, R., "Bilateral filtering for gray and color images," Proceedings of the International Conference on Computer Vision, pp. 839-846, 1998.
[15] Srinivasan, K.S., Ebenezer, D., "A new fast and efficient decision-based algorithm for removal of high-density impulse noises," IEEE Signal Processing Letters, vol. 14, no. 4, pp. 189-192, 2007.
[16] C. Toledo, L. de Oliveira, R. Dutra da Silva, H. Pedrini, "Image denoising based on genetic algorithm," IEEE Congress on Evolutionary Computation, pp. 1294-1301, 2013.
[17] S. Kockanat, N. Karaboga, T. Koza, "Image denoising with 2-D FIR filter by using artificial bee colony algorithm," International Symposium on Innovations in Intelligent Systems and Applications, pp. 1-4, 2012.
[18] Jonatas Lopes de Paiva, Claudio F.M. Toledo and Helio Pedrini, "An approach based on hybrid genetic algorithm applied to image denoising problem," Applied Soft Computing, vol. 46, pp. 778-791, October 2015.
[19] K. Dabov, A. Foi, V. Katkovnik and K. Egiazarian, "Image denoising with block-matching and 3D filtering," SPIE Electronic Imaging: Algorithms and Systems, vol. 6064, 2006.
[20] S. Ghael, E.P. Ghael, A.M. Sayeed, R.G. Baraniuk, "Improved wavelet denoising via empirical Wiener filtering," Proceedings of SPIE, San Diego, CA, USA, vol. 3169, pp. 389-399, 1997.
[21] M.J. Black, G. Sapiro, D.H. Marimont, D. Heeger, "Robust anisotropic diffusion," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 421-432, 1998.
[22] Abhijit Chandra and Sudipta Chattopadhyay, "A new strategy of image denoising using multiplier-less FIR filter designed with the aid of differential evolution algorithm," Multimedia Tools and Applications, vol. 75, pp. 1079-1098, November 2014.
[23] A.K. Bhandari, D. Kumar, A. Kumar and G.K. Singh, "Optimal sub-band adaptive thresholding based edge preserved satellite image denoising using adaptive differential evolution algorithm," Neurocomputing, vol. 174, pp. 698-721, October 2015.
[24] Jing Tian, Weiyu Yu and Lihong Ma, "Ant shrink: Ant colony optimization for image shrinkage," Pattern Recognition Letters, vol. 31, pp. 1751-1758, 2010.
[25] Parthasarathy Subashini, Marimuthu Krishnaveni, Bernadetta Kwintiana Ane and Dieter Roller, "Wavelet Based Image Denoising Using Ant Colony Optimization Technique for Identifying Ice Classes in SAR Imagery," Proceedings of the International Conference on Soft Computing Models in Industrial and Environmental Applications, Advances in Intelligent Systems and Computing series, Ostrava, Czech Republic, vol. 188, pp. 399-407, 5-7 Sept. 2012.
[26] Boudjelaba, K., Ros, F., Chikouche, D., "An advanced genetic algorithm for designing 2-D FIR filters," Proceedings of IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Canada, pp. 60-65, 23-26 Aug. 2011.
[27] Sujoy Paul and Bitan Bandyopadhyay, "A Novel Approach for Image Compression Based on Multi-Level Image Thresholding Using Shannon


Entropy and Differential Evolution," Proceedings of the IEEE Students Technology Symposium, IIT Kharagpur, West Bengal, India, pp. 56-61, Feb. 2014.
[28] Diego Oliva, Erick Cuevas, Gonzalo Pajares, Daniel Zaldivar and Marco Perez-Cisneros, "Multilevel Thresholding Segmentation Based on Harmony Search Optimization," Journal of Applied Mathematics, pp. 1-24, 2013.
[29] Ryan Rey M. Daga and John Paul T. Yusiong, "Image Compression Using Harmony Search Algorithm," International Journal of Computer Science Issues, vol. 9, no. 3, pp. 1-5, September 2012.
[30] Das, S., Abraham, A., Konar, A., "Particle swarm optimization and differential evolution algorithms: technical analysis, applications and hybridization perspectives," Studies in Computational Intelligence (SCI), vol. 116, pp. 1-38, 2008.
[31] Hua, J., Kuang, W., Gao, Z., Meng, L., Xu, Z., "Image denoising using 2-D FIR filters designed with DEPSO," Multimedia Tools and Applications, vol. 69, pp. 157-169, March 2014.

AUTHORS PROFILE

H. Rekha received her B.E degree in Electronics and Communication Engineering from Bharathidasan University, Tamilnadu, India in 2004 and M.Tech degree in Electronics and Communication from Pondicherry Engineering College affiliated to Pondicherry University, Pondicherry, India in 2010. She is a student member of IEEE. She worked as Assistant Professor in a private engineering college, Pondicherry, India. At present, she is pursuing her Ph.D. degree in the Department of Electronics Engineering, Pondicherry University, Pondicherry, India. Her current research includes Image Processing and Wireless Multimedia Sensor Networks. She will be available at saathvekha16@gmail.com

P. Samundiswary received her B.Tech degree and M.Tech degree in Electronics and Communication Engineering from Pondicherry Engineering College affiliated to Pondicherry University, Pondicherry, India in 1997 and 2003 respectively. She received her Ph.D degree from Pondicherry Engineering College affiliated to Pondicherry University, Pondicherry, India in 2011. She has been working in the teaching profession since 1998. Presently, she is working as Assistant Professor in the Department of Electronics Engineering, School of Engineering and Technology, Pondicherry Central University, India. She has nearly 18 years of teaching experience. She has published more than 70 papers in national and international conference proceedings and journals. She has co-authored a chapter of a book published by INTECH Publishers and has been one of the authors of a book published by LAMBERT Academic Publishing. Her areas of interest include Wireless Communication and Networks, Wireless Security and Computer Networks. She will be available at samundiswary_pdy@yahoo.com


Chaos Based Study on Association of Color with Music in


the Perspective of Cross-Modal Bias of the Brain

Chandrima Roy
Department of Electronics & Communication Engineering
Heritage Institute of Technology
Kolkata, India
chandrimaa.roy@gmail.com

Souparno Roy1, Dipak Ghosh2
1Researcher, 2Professor Emeritus
Sir C.V. Raman Centre for Physics & Music
Kolkata, India
deegee111@gmail.com

Abstract— The relationship between color and music, as part of the complex system consisting of the visual and auditory domains, has not yet been systematically investigated. As both color and music have derived and evolved their forms from the nature that we, humans, perceive through our senses, and as both forms are processed in the same part of the human body, i.e., the brain, it is not an unreasonable assumption that they share a similarity in perception. Needless to say, color and music both have a strong impact on emotion and feelings, and a few studies have been reported in the literature exploring the causal relationship between color and emotion. This work reports a neuro-cognitive study on the response of the brain to different color stimuli (Red, Green and Blue, the three primary colors), utilizing electro-encephalogram signals and multifractal methodology to assess the degree of complexity with the help of a quantitative parameter. In this study the correlation between emotional arousal and the effect of audio and visual stimuli has been studied. This investigation explores the problem from a new perspective. 15 participants were asked to hear 6 different music pieces (each of 30 second duration). The type of emotion elicited by each music piece was identified by the participants from a given collection of possible emotional responses. Then they were asked to assign a color associated with that emotion from a given color wheel (structured according to the Munsell color system). Each color, associated with a particular music piece, is a mixture of specific Red, Green and Blue values (RGB triplet) and has a specific HEX number (hexadecimal representation), which is recorded for each response. Then, the musical pieces were further analyzed with the help of fractal techniques to identify the different emotions related to the music in a quantitative measure. Here, to analyze the complexity of the sound signal (which is non-stationary and scale varying in nature), we have used Multifractal Detrended Fluctuation Analysis (MFDFA), which is capable of determining the multifractal scaling behavior of non-stationary time series. Hence, with the data collected, we can correlate color, emotion and music quantitatively.

Keywords- music, color, MFDFA, RGB triplet, Hexadecimal representation

I. INTRODUCTION

The correlation between color and music, with its effect of causing emotional arousal, has always been an integral part of interest for researchers. It was this thought that made none other than Sir Isaac Newton curious enough to propose such a correspondence in his book 'Opticks' back in the eighteenth century. Since then, researchers have attempted to identify systematic links between music and color. Perhaps the most direct connection comes from the fascinating phenomenon of music–color synesthesia. Studies also show that non-synesthetic people have music-to-color associations. So why and how does this music-color association work in humans? It is strongly suspected that emotion plays a key role in mediating these two stimuli in the brain. That is to say, both color and music have similar emotional qualities that inspire arousal in a similar manner. Why so? The reason is that music and color have been found to instigate emotional arousal time and again. The strong relation between music and emotion has been repeatedly reported in various studies. Music has been shown to affect the emotional state across age, culture and language boundaries. The mood a song induces is so reliable that music is often used as a mood-inducer in psychological studies. Similarly, the association of color with emotions is also reported in previous literature. Not only this, the complexity in bio-signals using color as a visual stimulus has been found to be higher than that of music. In light of these, this study is conducted to find whether a quantitative correlation can be found between music, color and emotion, since both stimuli are connected to emotion in a consistent manner.

This study provides new data in regard to the association of color to emotion. The main outcomes may be summarized as follows: the weighted average of the emotion 'Joy' is higher in clip 1, whereas the emotion 'anxiety' is prevalent in clips 3 and 4. Also, participants reported a 'romantic' emotion in clip 6. The respective color choices made by the participants show a particular trend in these music pieces. The average values of Green and Red are found to be higher in the case of the emotions Joy and Anxiety respectively. Again, clip 6 (romantic) has a higher average Red value than Green or Blue. But, comparing it with the clips corresponding to anxiety, we found that even though the Red value is high in both, anxiety is far more pronounced and easily associated with high Red values. The same


result was found in both clips 1 and 2 ('joy' and 'devotion'), both of which are associated with high Green values, but the emotion joy has a higher association with Green.

During the next part of the experiment, i.e., analyzing the sound signals with MFDFA, the above result showed consistency. Comparing the multifractal width of the signals having high Red values (clips 3, 4 and clip 6) it has been found that the clips corresponding to anxiety have relatively higher complexity than the one corresponding to the romantic emotion (as was seen in the case of average Red values for these clips). Similarly, the higher Green value (between joy and devotion) gave a higher multifractal width (indicating greater complexity).

This research, although in a very nascent form, attempts to identify the correlation between color and music with emotion as the mediator. The psychological connection between color and emotion has been identified using the basic Munsell color scheme (with RGB triplets and hexadecimal representation). Previous works used a limited number of color variations, thus restricting the emotional component considerably. Using a color wheel and an array of color hues and saturations, this work, on the other hand, gives freedom to associate color with emotion. Also, unlike previous works, this study includes the latest chaos based tools in non-linear methodology to verify the psychological data with the analysis of the non-stationary music stimuli, hence making the results more reliable and rigorous in form. This study hopes to stir the prevailing ideas and initiate further research on cross-modal participation in the brain.

II. EXPERIMENTAL DETAILS

This investigation explores the problem from a new perspective. 15 participants were asked to hear 6 different music pieces (each of 30 second duration). The type of emotion elicited by each music piece was identified by the participants from a given collection of possible emotional responses. Then they were asked to assign a color associated with that emotion from a given color wheel (structured according to the Munsell color system). Each color, associated with a particular music piece, is a mixture of specific Red, Green and Blue values (RGB triplet) and has a specific HEX number (hexadecimal representation), which is recorded for each response. Then, the musical pieces were further analyzed with the help of fractal techniques to identify the different emotions related to the music in a quantitative measure. Here, to analyze the complexity of the sound signal (which is non-stationary and scale varying in nature), we have used Multifractal Detrended Fluctuation Analysis (MFDFA), which is capable of determining the multifractal scaling behavior of non-stationary time series. Hence, with the data collected, we can correlate color, emotion and music quantitatively.

A. Self Responses

The specimen six pieces of music were standardized with a set of emotions varying in the range of joy, sorrow, serenity, anger, heroic, romantic, devotion, anxiety and freedom. 15 different people, comprising both males and females, were selected for the test. They belong to varied educational as well as socio-cultural backgrounds. The choice included both musicians and non-musicians. The experimental set up was established in a laboratory of Jadavpur University, in the C.V. Raman Centre for Physics and Music. The audio clips were played for 20 seconds via a standard music player in a room of ambient temperature. Each clip was played with a gap of around 2 minutes, and the Munsell color scheme was preceded by a grey landscape before starting each experiment. The time of the experiments was a normal working hour of a weekday. The participants were expected to be mentally poised and stable while labeling the emotions and choosing the colors, so that the emotions could be identified neutrally.

B. Nonlinear chaos based assessment of music samples

The collected music clips were analyzed using a robust analysis method called Multifractal Detrended Fluctuation Analysis (MFDFA), based on chaos and fractals. Most complex natural phenomena are essentially nonlinear in character, and recent research has advocated the nonlinearity of man-made complex systems like music. The data series has been exhaustively studied, confirming the nonlinearity of brain functions. In view of the above, we have opted for the most rigorous approach proposed so far. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. Multifractals are fundamentally more complex and describe time series featured by very irregular dynamics, with sudden and intense bursts of high-frequency fluctuations [6]. The MFDFA technique has been widely applied in various fields ranging from the stock market to biomedical fields for the prognosis of diseases [7] [8].

In [4], another extensive study shows the application of different variants of the MFDFA technique to investigate various time series. The analysis shows that the calculated singularity spectra are very sensitive to the order of the detrending polynomial used within the MFDFA method. The relation between the width of the multifractal spectrum and the order of the polynomial used in the calculation is evident. Furthermore, the type of this relation itself depends on the kind of analyzed signal. Such an analysis can give us some extra information about the correlative structure of the time series being studied.

In reference [5], electroencephalography (EEG) was performed on 10 participants using a simple acoustical stimulus, i.e. a tanpura drone. Non-linear analysis in the form of Multifractal Detrended Fluctuation Analysis (MFDFA) was carried out on the extracted alpha and theta time series data from the EEG time series to study the variation of their complexity. It was found that in all the frontal electrodes the alpha as well as theta complexity increases, as is evident from the increase of multifractal spectral width. This study is entirely new and gives interesting data regarding neural activation of the alpha and theta brain rhythms while listening to simple acoustical stimuli. The importance of this study lies in the context of emotion quantification using multifractal spectral width as a parameter, as well as in the field of cognitive music therapy.
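For readers who wish to reproduce the spectral-width computation described above, the following is a compact MATLAB sketch of the standard MFDFA steps (profile, local polynomial detrending, q-order fluctuation functions, generalized Hurst exponents and Legendre transform). The function name, scale range, q range and detrending order are illustrative assumptions, not the authors' exact settings.

function width = mfdfa_width(x, scales, q, m)
% x: 1-D signal (e.g. an audio clip), scales: window sizes,
% q: vector of moment orders, m: order of the detrending polynomial
x = x(:) - mean(x);
Y = cumsum(x);                                    % profile of the series
logF = zeros(numel(q), numel(scales));
for si = 1:numel(scales)
    s  = scales(si);
    Ns = floor(numel(Y) / s);
    rmsSeg = zeros(Ns, 1);
    t = (1:s)';
    for v = 1:Ns
        seg = Y((v-1)*s + 1 : v*s);  seg = seg(:);
        p = polyfit(t, seg, m);                   % local trend
        rmsSeg(v) = sqrt(mean((seg - polyval(p, t)).^2));
    end
    for qi = 1:numel(q)                           % q-order fluctuation functions
        if q(qi) == 0
            logF(qi, si) = 0.5 * mean(log(rmsSeg.^2));
        else
            logF(qi, si) = (1/q(qi)) * log(mean(rmsSeg.^q(qi)));
        end
    end
end
h = zeros(size(q));
for qi = 1:numel(q)                               % generalized Hurst exponents h(q)
    c = polyfit(log(scales(:)), logF(qi, :)', 1);
    h(qi) = c(1);
end
tau   = q(:)'.*h(:)' - 1;                         % mass exponent tau(q)
alpha = diff(tau) ./ diff(q(:)');                 % singularity strengths
width = max(alpha) - min(alpha);                  % multifractal spectrum width
end

A call such as mfdfa_width(audio, 16:16:1024, -5:5, 1) would return a spectrum width of the kind compared across clips in Table 6, although the exact values depend on the chosen parameters.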

This paper reports a neuro-cognitive study on the response of the brain to different color stimuli (Red, Green and Blue, the three primary colors), utilizing electro-encephalogram signals and multifractal methodology to assess the degree of complexity with the help of a quantitative parameter. The main findings are summarized in the results and discussion.

III. RESULTS AND DISCUSSIONS

As explained in the experimental setup, each subject reacted to the different music stimuli by choosing the proper emotion(s) in the given table, putting 1 as a mark of reaction. The choices of their colors, indexing the appropriate RGB values, are associated with every subject and every clip. The following tables show one such specimen data set of one subject out of the 15 data sets. The choice of subject is random, unspecific and regardless of reaction. The percentages of the RGB constituents have also been calculated and given below.

Table 1: choice of emotions in different audio clips (for the chosen subject)
EMOTION     CLIP 1 … CLIP 6
JOY         1  1
SORROW
ANGER
HEROIC      1
ROMANTIC    1
DEVOTION
SERENITY    1  1
ANXIETY     1  1
OTHER

Table 2: values of RGB corresponding to the color chosen for different audio clips (for the chosen subject)
COLOR   CLIP 1  CLIP 2  CLIP 3  CLIP 4  CLIP 5  CLIP 6
R       52      87      220     240     63      185
G       219     205     139     36      171     246
B       234     244     132     53      238     253

Table 3: proportion of the RGB coefficients of the colors for each audio clip (for the chosen subject)
        CLIP 1  CLIP 2  CLIP 3  CLIP 4  CLIP 5  CLIP 6
R       0.103   0.16    0.448   0.73    0.13    0.27
G       0.434   0.38    0.283   0.11    0.36    0.36
B       0.463   0.46    0.269   0.16    0.5     0.37

Figure 1: plot of RGB values in the six different audio clips (for the chosen subject); panels: Color RED, Color GREEN, Color BLUE
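The HEX codes and the proportions in Table 3 follow directly from the recorded RGB triplets. A short MATLAB illustration for the clip-1 response of this subject (the variable names are ours):

rgb  = [52 219 234];                       % RGB triplet chosen for clip 1
hex  = sprintf('#%02X%02X%02X', rgb);      % hexadecimal representation: '#34DBEA'
prop = rgb / sum(rgb);                     % proportions: 0.103  0.434  0.463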


The HEX code associated with each chosen color results in the RGB values tabulated in Table 2, and the proportion of their contribution is identified in Table 3. It is to be mentioned that the data set reported here is a specimen data set, which is accompanied by 15 such other data sets whose contributions in totality lead to the tables and figures that follow. We know that Red, Green and Blue are the three primary colors which can produce any other color by a proportionate ratio of mixing. In the further execution, each clip has been marked by the level of its emotion and the associated values of RGB for the different subjects. The weighted average of the different emotions has been found from these and is listed below in Table 4. The series of figures below indicates the clip-wise variation of the weighted average as per the labeled emotions. This study indicates an approach to identify quantified parameters from a qualitative impression of different emotions (such as Joy, Anxiety or Serenity).

Table 4: weighted average of different emotions
clip no  joy    sorrow  anger  heroic  romantic  devotion  serenity  anxiety  others
1        9.33   0       0      2.33    2.33      0         0.5       0.5      1
2        0      3.5     1      0       0.5       6.5       2.5       0.5      0
3        0      0       4      2       0         0         0         8.5      1
4        0      0       5      2       0         0         0         8        2
5        0.83   1.83    0      1       5.5       0         6.83      0        0
6        1.5    0       0.5    3.5     6         1         3.5       0        0

Figure 2: plot of the weighted averages for different emotions in the respective clips (one panel per clip)


From the tables and data recorded for the six different music clips for the sixteen subjects, the average values of RGB can be further investigated clip-wise. These data are tabulated in Table 5 and their variations are shown in Figure 3. In the analysis, we see that the dominance of any color (say Red) indicates a specific emotion, or a group of associated/adjoint emotions, to be pronounced in certain specific audio clips. The complexity, or spectral dimension (multifractal width), of those audio clips has been obtained by the nonlinear technique MFDFA, as mentioned in the literature. The values of the complexities associated with each clip have been tabulated with the associated emotion and dominant color, as per their weighted average and average values, in the table below.

Table 5: Clip wise average values of RGB
clip no   R         G         B
1         25.9      42.5      31.5
2         31        40.5625   28.1875
3         50.8125   23.375    19.375
4         56.25     17.4375   19.8125
5         29.0625   35.3125   34.1875
6         34.625    34        56.875

Figure 3: clip wise variation of the average values of R, G, B

Table 6: Clip wise identified complexity in relation with emotion and color
clip no   SPECTRUM WIDTH   max appeared emotion   color
clip 1    0.571            JOY                    GREEN
clip 2    0.428            DEVOTION               GREEN
clip 3    1.031            ANXIETY                RED
clip 4    0.458            ANXIETY                RED
clip 5    0.471            SERENITY               GREEN/BLUE
clip 6    0.383            ROMANTIC               RED


IV. CONCLUSION

This study provides new data in regard to the association of color to emotion. The main outcomes may be summarized as follows: the weighted average of the emotion 'Joy' is higher in clip 1, whereas the emotion 'anxiety' is prevalent in clips 3 and 4. Also, participants reported a 'romantic' emotion in clip 6. The respective color choices made by participants show a particular trend in these music pieces. The average values of Green and Red are found to be higher in the case of the emotions Joy and Anxiety respectively. Again, clip 6 (romantic) has a higher average Red value than Green or Blue. But, comparing it with the clips corresponding to anxiety, we found that even though the Red value is high in both, anxiety is far more pronounced and easily associated with high Red values. The same result was found in clips 1 and 2 ('joy' and 'devotion'), both of which are associated with high Green values, but the emotion joy has a higher association with Green.

During the next part of the experiment, i.e., analyzing the sound signals with MFDFA, the above result showed consistency. Comparing the multifractal width of the signals having high Red values (clips 3, 4 and clip 6) it has been found that the clips corresponding to anxiety have relatively higher complexity than the one corresponding to the romantic emotion (as was seen in the case of average Red values for these clips). Similarly, the higher Green value (between joy and devotion) gave a higher multifractal width (indicating greater complexity).

This research attempts to identify quantitatively the correlation between color and music with emotion as the mediator. Apart from choosing a format to associate any color with emotion, this work has, for the first time, applied the latest chaos based tools in non-linear methodology to verify the psychological data with the analysis of the non-stationary music stimuli, hence making the results more reliable and rigorous in form. However, unless a similar analysis with a large number of samples of both music and colors is carried out, it is extremely difficult to arrive at a confident conclusion, since this type of investigation involves cross-modal participation in the brain, involving very complex neurological processes, the research on which is at present in a very nascent form.

ACKNOWLEDGMENT

We are sincerely thankful to Ranjan Sengupta, Archi Banerjee and Sankha Sanyal for their sincere support and cooperation in the process of the execution of this work.

REFERENCES
[1] Moreno, S., & Bidelman, G. M. (2014). Examining neural plasticity and cognitive benefit through the unique lens of musical training. Hearing Research, 308, 84-97.
[2] Müller, Matthias M., et al. "Processing of affective pictures modulates right-hemispheric gamma band EEG activity." Clinical Neurophysiology 110.11 (1999): 1913-1920.
[3] Koelstra, Sander, et al. "DEAP: A database for emotion analysis using physiological signals." IEEE Transactions on Affective Computing 3.1 (2012): 18-31.
[4] Oświęcimka, Paweł, et al. "Effect of detrending on multifractal characteristics." arXiv preprint arXiv:1212.0354 (2012).
[5] Maity, Akash Kumar, et al. "Multifractal Detrended Fluctuation Analysis of alpha and theta EEG rhythms with musical stimuli." Chaos, Solitons & Fractals 81 (2015): 52-67.
[6] Ihlen, E.A.F. "Introduction to Multifractal Detrended Fluctuation Analysis in Matlab." Frontiers in Physiology 3 (2012): 141. doi:10.3389/fphys.2012.00141.
[7] Shang, P., Lu, Y., & Kamae, S. (2008). Detecting long-range correlations of traffic time series with multifractal detrended fluctuation analysis. Chaos, Solitons & Fractals, 36(1), 82-90.
[8] Lan, Tong-Han, et al. "Detrended fluctuation analysis as a statistical method to study ion single channel signal." Cell Biology International 32.2 (2008): 247-252.
[9] Balkwill, L-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception, 17, 43-64.
[10] Barber, S. (1996). Adagio for strings [Recorded by New Zealand Symphony Orchestra/James Sedares]. On Capricorn: The Samuel Barber Collection [CD]. Port Washington, NY: Koch International L. P.
[11] Baron-Cohen, S., Burt, L., Smith-Laittan, F., Harrison, J., & Bolton, P. (1996). Synaesthesia: Prevalence and familiality. Perception, 25, 1073-1079.
[12] Boyatzis, C. J., & Varghese, R. (1994). Children's emotional associations with colors. The Journal of Genetic Psychology, 155, 77-85.
[13] Calkins, M. W. (1893). A statistical study of pseudo-chromesthesia and of mental-forms. The American Journal of Psychology, 5, 439-464.
[14] Cimbalo, R. S., Beck, K. L., & Sendziak, D. S. (1978). Emotionally toned pictures and color selection for children and college students. The Journal of Genetic Psychology, 133, 303-304.
[15] Clark, D. M., & Teasdale, J. D. (1985). Constraints on the effects of mood on memory. Journal of Personality and Social Psychology, 48, 1595-1608.
[16] Collins, M. (1929). A case of synaesthesia. Journal of General Psychology, 2, 12-27. Collier, G. L. (1996). Affective synesthesia: Extracting emotion space from simple perceptual stimuli. Motivation and Emotion, 20, 1-32.
[17] Cutsforth, T. D. (1925). The role of emotion in a synaesthetic subject. The American Journal of Psychology, 36, 527-543.


[18] Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition, 80, B1-B10.

AUTHORS PROFILE

Chandrima Roy is working as an Assistant Professor in the Department of Electronics and Communication Engineering at the Heritage Institute of Technology. She has worked in the research areas of Quantitative Feedback Theory and optimization. Her current research interests include the analysis of brain functioning and nonlinearity using chaos based techniques. She has published many of her research papers in different peer reviewed journals and international conferences.

Souparno Roy is a research fellow at Jadavpur University and has published papers in many conferences and journals.

Dipak Ghosh is a Professor Emeritus at Jadavpur University. He worked as a professor in the Department of Physics and served as the Head of the Department and Dean of Science. He has supervised more than thirty PhDs and has published countless research papers of international level of contribution in multiple areas.


Estimation of Visual Focus of Attention from


Head Orientations in a Single Top-View Image
Viswanath K. Reddy
Assistant Professor, Department of Electronic and Communication Engineering,
M.S. Ramaiah University of Applied Sciences, Bangalore, India

Abstract — The visual focus of attention is normally a person or an object with whom the user is interacting. This is exploited to get some useful insights on the impact or effectiveness of the speaker. Currently, visual focus of attention is mostly computed using images captured by a camera looking at the faces of the users. Such cameras are placed at a distance to cover more users, but at a lower resolution, leading to less accurate results. Since head orientation is also close to the focus of attention, in this paper the visual focus of attention of a group is computed using the image captured by a single top-view camera. The concept is investigated using basic image segmentation techniques to extract the heads. The visual focus of attention of the group (the speaker) is then estimated based on the orientations of all the heads. The algorithm is implemented using MATLAB and tested on a dataset developed in-house. Preliminary results show an accuracy of 70% in detecting the visual focus of attention on a small dataset. The results are encouraging enough to extend the work to deal with videos in real-time. The algorithm needs to be tested with a more robust database.

Index Terms—Image Segmentation, Non-obtrusive, Speaker Detection, Single Top-view Camera, Visual Focus of Attention.

This work was submitted on July 30, 2016. Viswanath K. Reddy is with M. S. Ramaiah University of Applied Sciences, Bangalore, INDIA (e-mail: viswanath.ec.et@msruas.ac.in).

I. INTRODUCTION TO VFOA AND RELATED WORK

Recently, emphasis has been given to identifying and developing several smart cities in India. A city can be called "smart" only if its constituent environments like houses, classrooms and offices become intelligent. A few examples of the intelligence shown by these can be automatically taking notes in an intelligent classroom, documenting a meeting and analyzing the activities in a conference room, remotely switching on/off the appliances in a home, and so on. Smart/intelligent environments should be able to perceive the user's intentions and respond based on the user's needs [1], [2].

A person expresses most of his intentions through various verbal or non-verbal cues like facial expressions, vocal characteristics, gaze, body posture and gesture, among many others. The attention of a person will normally be on what he does, whom and how he interacts with, and what he wants [3]. It is suggested that humans are generally interested in what they look at [4], [5] and they normally look at what they interact with [6]. Also, there is a close relation between an individual's attention and eye gaze [7]. In a meeting or a classroom scenario, the focus of attention will be a person or an object of interest. For example, students' focus will be on the person delivering a lecture or the slide being projected on the screen, and participants' focus in a meeting would be on the person who is speaking. This is called the Visual Focus of Attention (VFOA). The impact of the speaker in a meeting, or the effectiveness of the presentation slide used in a class, could be judged based on analyzing the VFOA over the entire duration.

To estimate VFOA, it is important to identify the direction in which the user looks. The orientations of the head and eyes both contribute to VFOA estimation. It has been shown that using information from other sensors like a microphone, in addition to the camera, and fusing the results yields better accuracy in estimating VFOA [1], [8]. As eye gaze is more synonymous with the actual VFOA compared to head orientation and other cues, many people have proposed estimation of VFOA based on it [9]-[11]. Experiments have even been conducted in classrooms [12]-[14] to track the visual attention of students on different objects in a PowerPoint presentation. But to detect gaze, the eye movements should be captured from a near-frontal angle with high resolution. The camera has to be fixed close to the eyes, as shown in Fig. 1 [1], or on hats, which would be very uncomfortable in classroom situations. Hence the gaze capturing hardware increases with the number of users, as highlighted in [12], where the teacher had to repeat the same lecture 21 times for a class of 21 students to record their eye gaze.

Fig. 1. Example of Gaze Capturing System

It is observed from experiments that head orientation is enough to predict the VFOA 89% of the time [1]. Head orientation can be determined using magnetic sensors, cameras, Kinect 3D cameras and gyros. But camera based methods are the most


commonly used ones. A detailed survey on head pose estimation in computer vision is given in [15]. People have worked on extracting the head pose from close-up images capturing the frontal face [1] or from distant images which are near frontal [16]-[18]. The number of people covered in an image is inversely proportional to the distance of the camera: we get a few high resolution faces in close-up images, or many low resolution heads from a distant camera, making it difficult to extract the head orientation. A single top-view camera can capture more participants in a given environment at its highest resolution. In this paper, it is proposed to estimate the VFOA from an image captured by a single top-view camera using a basic segmentation algorithm.

II. ASSUMPTIONS AND DATABASE DEVELOPMENT

The speaker is detected in the images captured from a single top-view camera. A dataset was not readily available for testing the algorithm, so a set of images was captured for testing it. To prove the concept, the algorithm is developed so that it works under certain assumptions. The assumptions and the dataset development are discussed in the following sections.

A. Assumptions
• The image is captured with the camera axis perpendicular to the ground, in normal indoor daylight conditions
• More than two participants in each image
• Both male and female participants, with black hair and without any bags
• Heads of any two people are clearly separated
• The floor is not dark
• All participants are looking at one participant without tilting their heads

B. Database Generation
A number of images were captured as per the assumptions. The images were taken in medium lighting conditions. The scene here is that of a college balcony. The images were clicked from the top, maintaining the camera axis almost perpendicular to the ground. Most of the assumptions are satisfied in the images captured. A total of ten images were captured with six people in each image, as shown in Fig 2. Five of them are wearing light colored dress and one is wearing a blue dress.

III. ALGORITHM FOR VISUAL FOCUS OF ATTENTION

In a group conversation, it is most likely that the participants will look at the speaker. Hence the speaker is the Visual Focus of Attention of all the participants, and estimating the VFOA is the same as estimating the speaker in this work. The steps involved in VFOA estimation are:
• Segmentation of the heads in the image
• Finding the orientation of each head
• Finding the intersections of each of the orientations
• Selecting the intersection corresponding to the VFOA
A flow chart for estimating the VFOA is shown in Fig 3. Each of the steps involved in estimating the VFOA is explained in the following sections. The participants in Image 1 are numbered as highlighted in Fig 4. The person annotated as 3 is the person who is speaking; hence he is the VFOA in this case.

Fig. 2. Test Images (Image 1 – Image 10)

A. Head Segmentation
A simple image segmentation algorithm based on color thresholds is used to verify the VFOA concept. MATLAB is used for algorithm development and visualization.


Many pixels of the head region in different images are observed to arrive at the intensity ranges (thresholds) for each of the color components. The RGB values of one pixel corresponding to each participant are shown in Fig 5. Based on the observations, the thresholds are chosen as red (8-44), green (17-50) and blue (29-60) for the color components.

Fig. 3. Flowchart for VFOA Estimation (Start → Image input → Head Segmentation → Compute the Orientation of Head Segments → Find the intersections of all the Orientations → Identify the Intersection Corresponding to the Speaker → Stop)

Fig. 4. Test Image 1 – Speaker is 3

Using these thresholds for the RGB components, the color image is converted into a binary image. After a few morphological operations like filling, erosion and dilation, the segmented image is obtained. The segmented image obtained after applying color thresholding and the morphological operations is shown in Fig 6(a), with a number of objects of different areas. Based on observation, the objects with area < 1600 are selected. This extracts the six heads correctly, as expected. The six head objects are shown in Fig 6(b), highlighting the centroid of one of the participants.

Fig. 5. RGB Intensities in Image 1

Fig. 6. Segmented Image (a) and Head Detection (b) in Image 1

B. Computation of Head Orientation
The image with only the head objects is considered for extracting the orientation of the heads. Many properties of an object in a binary image can be obtained using the built-in MATLAB function regionprops. Some of the properties that can be obtained with this command are Area, EulerNumber, Orientation, BoundingBox, Perimeter, Centroid, Solidity, MajorAxisLength and MinorAxisLength. The following MATLAB command can be used to extract and store all the parameters of each object in the binary image in a variable named props:

props = regionprops(BW,'all')

The centroid and orientation parameters are extracted in this manner for each head object. A line is drawn passing through the centroid at the orientation angle of the corresponding head object to visualize the direction of the head, as shown in Fig 7. The centroids of two heads are highlighted in Fig 7.

TABLE 1
CENTROID COORDINATES OF DETECTED HEADS
x:   65    151   255   309   423   510
y:   307   122   418   68    348   173
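A minimal MATLAB sketch of the head segmentation and orientation extraction described above follows. The thresholds and the area limit are those stated in the text; the variable names, file name and the morphological structuring element are illustrative assumptions.

RGBimg = imread('image1.jpg');                 % assumed file name of a test image
R = RGBimg(:,:,1); G = RGBimg(:,:,2); B = RGBimg(:,:,3);
BW = (R >= 8  & R <= 44) & ...                 % color thresholds for the dark head regions
     (G >= 17 & G <= 50) & ...
     (B >= 29 & B <= 60);
BW = imfill(BW, 'holes');                      % morphological clean-up
BW = imerode(BW, strel('disk', 2));
BW = imdilate(BW, strel('disk', 2));
props = regionprops(BW, 'Area', 'Centroid', 'Orientation');
props = props([props.Area] < 1600);            % keep only head-sized objects
centroids    = reshape([props.Centroid], 2, []).';   % one row per head: [x y]
orientations = [props.Orientation];            % head direction in degrees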


Fig. 7. Head Orientation Through the Corresponding Centroids in Image 1

C. Intersection of Head Orientations
A line equation in slope-intercept form is derived for each combination of centroid and orientation corresponding to each head. The coordinates of each line within the image are calculated. Considering two lines at a time, the point of intersection is computed using MATLAB. It can be observed that, for Image 1, there are 11 intersections. The zoomed version of the intersection region is shown in Fig 8. The intersections are highlighted by black and white circles; the white circle has three intersections in it. The coordinates of all the intersections are shown in Table 2. The intersections are considered in an order starting from the one with the least column coordinate to the highest column coordinate in the image.

Fig. 8. Intersection Points for Image 1

D. Estimating the Visual Focus of Attention
The centroids and intersections are points in a 2D plane. Based on the intersections, the speaker can be estimated as follows. Let the centroids and intersections be represented as C_i and P_j, where 1 ≤ i ≤ 6 and 1 ≤ j ≤ 11 for Image 1. The distance from centroid C_i to intersection P_j is computed using the distance formula. The distance between any two points (x1, y1) and (x2, y2) is given by

d = sqrt((x2 − x1)^2 + (y2 − y1)^2)

TABLE 2
COORDINATES OF THE INTERSECTIONS
187 301
195 337
198 352
223 338
251 340
255 319
260 298
285 341
296 296
302 296
304 291

The average distance from each centroid to all the intersections is computed using the distance formula, i.e.

d_avg(i) = (1/11) Σ_j d(C_i, P_j)

Table 3 shows the average distances from each centroid to all the intersections. The minimum distance is 108, from the centroid corresponding to the speaker numbered 3 in Fig 4. From this it can be inferred that the Visual Focus of Attention of all the participants is on Speaker 3. The result is subjectively verified and found to be correct.

TABLE 3
AVERAGE DISTANCE OF INTERSECTIONS TO CENTROIDS – IMAGE 1
187   225   108   261   177   298

In this way, the developed algorithm is tested using all the images in the database. Table 4 shows the VFOA estimated by the algorithm for each image in the database. The estimated VFOA values are compared with the manually annotated VFOA for each image in the database. The VFOA is not detected for Image 9, as the threshold for the area of the head object was not appropriate. The detection accuracy is 70%.

IV. CONCLUSIONS
An algorithm is developed to detect the visual focus of attention of the participants in a group. The person who is speaking is normally assumed to be the VFOA in this work. The head segmentation algorithm used is based on the color and area of the head objects. The VFOA detection accuracy achieved is 70%. This is encouraging enough to try more robust segmentation algorithms and extend the work to videos in real-time. The orientation varies depending on the shape of the head, the hair style and the angle with which the image is captured. Thus, more robust algorithms to segment the head are to be developed and tested with a larger dataset under different conditions.


TABLE 4
VFOA (SPEAKER) DETECTED VS ACTUAL VFOA

Image No.   Actual VFOA   Estimated VFOA
    1            3              3
    2            2              2
    3            3              3
    4            2              1
    5            5              5
    6            3              3
    7            1              2
    8            2              2
    9            5              ND(a)
   10            2              2
Accuracy: 70%
(a) Not Detected
REFERENCES
[1] R. Stiefelhagen, "Tracking focus of attention in meetings," in Proc. 4th IEEE International Conference on Multimodal Interfaces, IEEE Computer Society, 2002, pp. 273.
[2] L. Dong, H. Di, L. Tao, G. Xu, and P. Oliver, "Visual focus of attention
recognition in the ambient kitchen,” in Proceedings of the Asian
Conference on Computer Vision, 2009, pp. 548–559.
[3] RACA Mirko, "Camera-based estimation of student's attention in class,"
Ph.D. dissertation, École Polytechnique Fédérale De Lausanne,
Switzerland, 2015.
[4] P. Barber and D. Legge, "Information Acquisition," in Perception and
Information Methuen, London, 1976.
[5] A. J. Glenstrup and T. Engell-Nielsen. (1995). Eye controlled
media:Present and future state. [Online] Available:
http://www.diku.dk/users/panic/eyegaze/
[6] P. P. Maglio, T. Matlock, C. S. Campbell, S. Zhai, and B. A. Smith,
"Gaze and speech in attentive user interfaces," in Proceedings of the
International Conference on Multimodal Interfaces, vol. 1948, LNCS-
Springer, 2000.
[7] M. Argyle and M. Cook, “Gaze and Mutual Gaze,” Cambridge
University Press, 1976.
[8] G. Garau, S. Ba, H. Bourlard, J. Odobez, "Investigating the use of visual
focus of attention for audio-visual speaker diarisation," in the
Proceedings of the 17th ACM international conference on Multimedia,
ACM, 2009, pp. 681-684
[9] R. Stiefelhagen, J. Yang and A. Waibel, “Estimating focus of attention
based on gaze and sound,” In Proc. 2001 Workshop on Perceptive User
Interfaces, ACM, 2001, pp. 1-9.
[10] J.-G. Wang and E. Sung, “Study on eye gaze estimation,” IEEE
Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics,
vol. 32, pp. 332–350, 2002.
[11] C. Morimoto and M. Mimica, “Eye gaze tracking techniques for
interactive applications,” Computer Vision and Image Understanding,
vol. 98, pp. 4–24, 2005.
[12] F.Y. Yang, C.Y. Chang, W.R. Chien, Y.T. Chien and Y.H. Tseng,
"Tracking learners' visual attention during a multimedia presentation in a
real classroom," Computers & Education, vol. 62, pp. 208-220, 2013.
[13] A.V. Maltese, J.A. Danish, R.M. Bouldin, J.A. Harsh and B. Bryan,
"What are students doing during lecture? Evidence from new
technologies to capture student activity," International Journal Of
Research & Method in Education, vol. 39 , no. 2, pp. 208-226, 2016
[14] H. Ghafarpour, "Classroom conversation analysis and critical reflective
practice: Self-evaluation of teacher talk framework in focus," RELC
Journal, 2016
[15] E. Murphy-Chutorian and M.M. Trivedi, “Head pose estimation in
computer vision: A survey,” IEEE Trans. Pattern Analysis and Machine
Intelligence, vol. 31, no. 4, pp. 607-626, 2009.
[16] G. Garau, S. Ba, H. Bourlard and J.M. Odobez, "Investigating the use of visual focus of attention for audio-visual speaker diarisation," in Proc. 17th ACM international conference on Multimedia, ACM, 2009, pp. 681-684.
[17] S.O. Ba and J.M. Odobez, "Recognizing visual focus of attention from head pose in natural meetings," IEEE Trans. Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 1, pp. 16-33, 2009.
[18] M. Voit and R. Stiefelhagen, "Tracking head pose and focus of attention with multiple far-field cameras," in Proc. IEEE Conference on Multimodal Interfaces (ICMI), 2006, pp. 281-286.

Viswanath K. Reddy was born in 1976 in India. He received his Engineering degree in Telecommunication Engineering from Bangalore University in 1998 and his M. Tech. in Industrial Electronics from Mangalore University in 2002. He joined M. S. Ramaiah School of Advanced Studies, Bangalore, India in 2004. He is currently working as Assistant Professor in the Department of Electronic and Communication Engineering, M. S. Ramaiah University of Applied Sciences, Bangalore, India.
Mr. Reddy is a Life Member of ISTE and has co-authored a book on Digital Signal Processing. His research interests include signal processing, computer vision and machine learning.


Face Recognition Under Varying Blur, Illumination and Expression in an Unconstrained Environment

Anubha Pearline. S                                  Hemalatha. M
M.Tech, Information Technology                      Assistant Professor, Information Technology
Madras Institute of Technology                      Madras Institute of Technology
Chennai, India                                      Chennai, India
anubhapearl@gmail.com

Abstract— Face recognition is one of the esteemed research areas in pattern recognition and computer vision, along with its major challenges. A few challenges in recognizing faces are blur, illumination, and varied expressions. Blur is natural while taking photographs using cameras, mobile phones, etc. Blur can be uniform or non-uniform. Usually non-uniform blur happens in images taken using handheld imaging devices. Distinguishing or handling a blurred image in a face recognition system is generally tough. Under varying lighting conditions, it is challenging to identify the person correctly. Diversified facial expressions such as happiness, sadness, surprise, fear and anger change or deform the faces from normal images. Identifying faces with facial expressions is also a challenging task, due to the deformation caused by the facial expressions. To solve these issues, a pre-processing step was carried out, after which the Blur and Illumination-Robust Face Recognition (BIRFR) algorithm was performed. The test image and training images with facial expression are transformed to neutral faces using the Facial Expression Removal (FER) operation. Every training image is transformed based on the optimal Transformation Spread Function (TSF) and illumination coefficients. Local Binary Pattern (LBP) features extracted from the test image and the transformed training image are used for classification.

Keywords- Blur; Blur and Illumination-Robust Face Recognition (BIRFR); Facial Expression Removal (FER); Transformation Spread Function (TSF); Local Binary Pattern (LBP).

I. INTRODUCTION
Face recognition is one of the renowned research areas in pattern recognition and computer vision, considering its numerous practical uses in the areas of biometrics, information security, access control and surveillance systems. Several other applications of face recognition are found in areas such as content-based image retrieval, video coding, video conferencing, crowd surveillance, and intelligent human-computer interfaces.
Recognizing a person or friend is an easy task for human beings. One can easily recognize a person from his biometric characteristics, but in computer vision recognizing a person is one of the most challenging tasks. Human faces are complex and have no rigid structure. A person's face changes with the passage of time. Automatic face recognition remains a demanding task in pattern recognition (PR) and artificial intelligence (AI) [19].
The following Section II is about literature surveys on illumination and expression in face recognition. In Section III, the system architecture and the functions of each module are discussed. Section IV is about experiments on different databases and results analysis. In Section V, the conclusion and future work are summarized.

II. ILLUMINATION AND EXPRESSION IN FACE RECOGNITION
Vageeswaran, Mitra, and Chellappa (2013), motivated by the problem of remote face recognition, addressed the issue of identifying blurred and poorly illuminated faces. The set of all images obtained by blurring a given image is a convex set given by the convex hull of shifted versions of the image. Based on this set-theoretic characterization, a blur-robust face recognition algorithm, DRBF, is suggested. In this algorithm, existing knowledge on the type of blur can easily be incorporated as constraints. Taking the low-dimensional linear subspace model for illumination, the set of all images obtained from a given image by blurring and changing its illumination conditions is then shown to be a bi-convex set. Again, based on this set-theoretic characterization, a blur and illumination robust algorithm, IRBF, is suggested. Combining a discriminative learning based approach like SVM would be a very promising direction for future work.
Patel, Wu, Biswas, Phillips, and Chellappa (2012) proposed a face recognition algorithm based on dictionary learning methods that is robust to changes in lighting and pose. This entails using a relighting approach based on a robust albedo estimation. Different experiments on popular face recognition datasets have shown that the method is efficient and can perform significantly better than many competitive face recognition algorithms. A drawback of learning discriminative dictionaries is that it can tremendously increase the overall computational complexity, which can make real-time processing very difficult. Discriminative methods are sensitive to noise. It is an interesting topic for future work to


develop and investigate the correctness of the discriminative dictionary learning algorithm. This algorithm is robust to pose, expression and illumination variations.
Tai, Tan, and Brown (2011) introduced two contributions to motion deblurring. The first is a formulation of the motion blur as an amalgamation of the scene that has undergone a projective motion path. While a straight-forward representation, this formulation has not been used in image deblurring. The advantages of this motion blur model are that it is kernel-free and offers a compact representation for spatially varying blur due to projective motion. In addition, this kernel-free formulation is intuitive with regard to the physical phenomena causing the blur. The second contribution is an extension to the Richardson-Lucy (RL) deblurring algorithm to incorporate the motion blur model in a correction algorithm. The basic algorithm, as well as details on incorporating state-of-the-art regularization, has been outlined. A fundamental limitation is that the high-frequency details that have been lost during the motion blur process cannot be retrieved. The algorithm can only recover the "hidden" particulars that remain inside the motion-blurred images. Another limitation is that this approach does not deal with moving or deformable objects or scenes with significant depth variation. A pre-processing step to isolate moving objects or depth layers from the background is necessary to deal with this limitation. Other limitations include the problem of pixel colour saturation and severe image noise.
Ronen Basri and David W. Jacobs (2003) have proposed that the set of all Lambertian reflectance functions obtained with arbitrary distant light sources lies close to a 9D linear subspace. The 9D space can be directly computed from a model, as low-degree polynomial functions of its scaled surface normals. It gives a new and effective way of understanding the effects of Lambertian reflectance as that of a low-pass filter on lighting.
Chao-Kuei Hsieh, Shang-Hong Lai, and Yung-Chang Chen (2009) described a new algorithm for expression-invariant face recognition with one neutral face image per class in the training dataset. The basic idea is to combine the advantages of the feature point labeling in model-based algorithms and the flexibility of optical flow computation to estimate the geometric deformation for expressive face images. The computation time is greatly reduced in their system by using a standard neutral face image in the optical flow computation and warping process. The constrained optical flow warping algorithm significantly improves the recognition rate for face recognition from a single expressive face image when only one training neutral image for each subject is available.
Ali Moeini and Hossein Moeini (2015) proposed a novel approach for real-world face recognition under pose and expression variations from only a single frontal image in the gallery, which is very rapid and real-time. To handle the pose in face recognition, the Feature Library Matrix (FLM) + Probabilistic Facial Expression Recognition Generic Elastic Model (PFER-GEM) method is proposed, which is efficient for this purpose. Then, the FLMs were generated based on the proposed method, and finally face recognition is performed by iterative scoring classification. Promising results were observed in handling the face pose in controlled and non-controlled (real-world) situations for face recognition. It was demonstrated that the efficiency of the proposed method for pose-invariant face recognition was improved in comparison to the state-of-the-art approaches.

III SYSTEM DESIGN
A. Problem Definition
The proposed work systematically addresses face recognition under non-uniform motion blur and the merged effects of blur, illumination, and expression. For this, a preprocessing step for expression removal is carried out, after which the Blur and Illumination-Robust Face Recognition algorithm is applied. In the Blur and Illumination-Robust Face Recognition algorithm, features are extracted from the face image using LBP [2].
B. Proposed Solution
For a set of train images gc and a test image p, the identity of the test image is to be found. The test image may be blurred and illuminated, along with varied expressions. In the test image p, expressions are neutralized using Facial Expression Removal (FER). The matrix Ac for each training (gallery) face is generated. The test image p can be stated as the convex combination of the columns of one of these matrices. For the recognition task, the optimal TSF and illumination coefficients [Tc, αc,i] for each training image are computed [2]. Using facial expression removal, expressions are removed for the transformed image (blurred and illuminated).
C. System Architecture
The diagram (Figure 1) shows the architecture for blur, illumination and expression invariant face recognition. A simple pre-processing technique for removing expressions from the test image and the transformed image is applied to form a reconstructed face image using the wavelet transform. Wavelet transforms are now used to handle such variations.
A. Blur Invariant Face Recognition
If there are 1, 2, …, C face classes, then each class c contains gc, which denotes the train images of that class. Also, the blurred test image p belongs to one of the C classes. For each training face, transformed images have been formed and these images form the columns of the matrix Ac. The test image's identity is obtained using the reconstruction error rc in (1). The identity of p is found as the one having minimum rc [2].

rc = min_T || p - Ac T ||2^2 + β || T ||1, subject to T ≥ 0   (1)

Even though all the pixels carry equal weights, every region in the face does not convey an equal amount of information. For this, a weighting matrix W has been introduced. It was introduced to make the algorithm robust to variations like misalignment. For training the weights, the training and test image folders of the dataset have been used. The test image folder was blurred with a Gaussian


kernel, σ = 4. The images taken were partitioned as block patches using DRBF (Direct Recognition), through which recognition rates were obtained independently.

Figure 1. Blur, Illumination and Expression invariant face recognition

The LBP features were used in DRBF. LBP features of both the blurred test images and the train images have been considered for the recognition rate. The regions around the eyes are given the highest weights [21]. In Figure 2, the white region has the highest weights, the other colours have low weights and the darker regions have very low weights [6]. The reconstruction error rc becomes as follows when the weighting matrix is used:

rc = min_T || W (g - Ac T) ||2^2 + β || T ||1, subject to T ≥ 0   (2)

The reconstruction error rc is sensitive to small pixel misalignments and hence, without weighting, it is not preferable. The test image p represents the input from which LBP features are extracted.

Figure 2. Weight matrix

For every train image gc, blur has been created through the optimal Transformation Spread Function (TSF) Tc. This optimal TSF was found by

Tc = arg min_T || W (p - Ac T) ||2^2 + β || T ||1, for T ≥ 0   (3)

Here,
Tc is the optimal TSF
W is the weighting matrix
p represents the blurred test image
Ac is the matrix for each training face image
T is the vector of weights

W, the weighting matrix, uses weights similar to LBP. The difference between g and AcT is calculated. The vector of weights T, with T ≥ 0, is always rounded off to the nearest integer (||T||1). 'A' represents the matrix A ∈ R^(N x NT), which is formed from the total number of transformations NT and the overall number of pixels in the image N. For each training face, a matrix Ac is generated. Finally, the optimal TSF Tc is identified using equation (3).
Training images and blurred test images were partitioned into non-overlapping rectangular regions, for which LBP histograms were extracted, and the histograms were concatenated to build a global descriptor. The LBP features of the train image and the blurred test image were compared to find whether they almost match each other. The output is the identity of the test image. The algorithm for BRFR is presented below; this algorithm was implemented and its results were taken [6].

B. Blur and Illumination invariant face recognition
The test image p and the set of train images gc are given as the input. Nine basis images are obtained for each training image. Using equation (4), by finding the optimal TSF Tc and illumination coefficients αc,i, every image is transformed. LBP features are extracted for the transformed image and the test image. Both sets of LBP features are compared to find whether they almost match each other.

[Tc, αc,i] = argmin_{T, αi} || W (p - Σi αi Ac,i T) ||2^2 + β || T ||1, subject to T ≥ 0   (4)

Where,
αc,i are the illumination coefficients
αi, for i = 1, 2, …, 9, are the linear coefficients
Tc is the optimal TSF (Transformation Spread Function)
W is the weighting matrix
p is the test image
Ac is the matrix for each gallery face
T is the vector of weights

Equation (4) is solved in two steps. The first step considers that there is no blur, keeping hTm fixed, and finds the illumination coefficients αc,i to form 9 basis images. These 9 basis images are relit using the calculated illumination coefficients. From these images, the matrix Ac is formed and Tc is solved. Using Tc, blur is created for all the basis images; this is the second step. This is done for nine iterations.
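The paper does not spell out a solver for the weighted, non-negative, l1-regularised reconstruction errors in (2)-(4). The sketch below illustrates one possible way to evaluate the classification rule with a simple projected-gradient solver; the function names, step size and iteration count are illustrative assumptions, and the TSF/illumination alternation of equation (4) is not reproduced here.

```python
import numpy as np

def weighted_sparse_coeffs(p, A, W, beta=0.1, iters=500):
    """Minimize ||W (p - A T)||_2^2 + beta * ||T||_1 subject to T >= 0
    with projected gradient descent (illustrative sketch only)."""
    WA = W[:, None] * A                     # per-pixel weights applied to the dictionary
    Wp = W * p                              # and to the vectorized test image
    lr = 1.0 / (2.0 * np.linalg.norm(WA, 2) ** 2 + 1e-12)   # step from a Lipschitz bound
    T = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * WA.T @ (WA @ T - Wp) + beta   # smooth gradient + l1 term (T >= 0)
        T = np.maximum(T - lr * grad, 0.0)         # project onto the nonnegative orthant
    return T

def classify(p, A_list, W, beta=0.1):
    """Return the class whose transformed-image matrix Ac reconstructs p
    with the smallest weighted residual, as in eq. (2)."""
    errors = []
    for Ac in A_list:                              # one matrix Ac per training face
        T = weighted_sparse_coeffs(p, Ac, W, beta)
        r = np.linalg.norm(W * (p - Ac @ T)) ** 2 + beta * T.sum()
        errors.append(r)
    return int(np.argmin(errors)), errors
```

A full implementation would rebuild Ac from the relit basis images at every one of the nine alternating iterations described above; the sketch only shows the inner reconstruction-error step.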
Algorithm 1: Blur, Illumination-Robust Face Recognition (BIRFR)
Input: Blurred, illuminated test image p, and a set of training images gc, c = 1, 2, …, C.
Output: Identity of the test image.
1. For each gc,
2. Obtain nine basis images gc,i, i = 1, 2, …, 9.
3. Find the optimal TSF Tc and illumination coefficients αc,i in gc by solving equation (4).
4. Transform (blur and re-illuminate) gc using Tc and αc,i.
5. Extract features of the transformed gc.


6. Compare the features of p with those of the transformed gc.
7. Find the closest match of p.

C. Blur, Illumination, and Expression-Invariant Face Recognition
Every test image p is reconstructed using the facial expression removal (FER) step. In FER, the wavelet transform is applied. The wavelet transform is a mathematical representation of signals that decomposes them over a set of basis functions. These basis functions are known as wavelets. Test images are decomposed using the 2D wavelet transform, forming four sub-band images (LL, LH, HL, HH). Image smoothing reconstructs the face image, and this smoothing reduces the expression changes to a large extent [1].
The set of training images is blurred and illuminated using the optimal TSF and illumination coefficient values. LBP features are extracted and compared for the neutralized face image and the transformed train image. The output is the identity of the test image.
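As a rough illustration of the FER smoothing step, the sketch below decomposes a face into the four sub-bands with PyWavelets and attenuates the detail bands before reconstruction. The wavelet family and the attenuation factor are assumptions for illustration, not values taken from the paper.

```python
import numpy as np
import pywt  # PyWavelets

def expression_smooth(face, wavelet="haar", keep=0.25):
    """FER-style smoothing sketch: decompose into LL, LH, HL, HH sub-bands,
    attenuate the detail bands (where expression wrinkles mostly live)
    and reconstruct the face."""
    LL, (LH, HL, HH) = pywt.dwt2(face.astype(float), wavelet)
    smoothed = pywt.idwt2((LL, (keep * LH, keep * HL, keep * HH)), wavelet)
    # idwt2 may return an array one pixel larger for odd-sized inputs; crop back.
    return smoothed[:face.shape[0], :face.shape[1]]
```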
Algorithm 2: Blur, Illumination and Expression-Robust Face Recognition (BIEFR)
Input: Blurred, illuminated and expression-variated test image p, and a set of training images gc, c = 1, 2, …, C.
Output: Identity of the test image.
1. Obtain the neutral face from p and gc for the C face classes using FER.
2. For each gc,
3. Obtain nine basis images gc,i, i = 1, 2, …, 9.
4. For each gc,
5. Find the optimal TSF Tc and illumination coefficients αc,i by solving equation (4).
6. Transform (blur and re-illuminate) gc using the computed Tc and αc,i.
7. Extract LBP features of the transformed gc.
8. Compare features of the neutralized p and the transformed gc.
9. Find the closest match of p.

IV EXPERIMENTAL RESULTS AND ANALYSIS
A. Datasets
For experiments, three datasets have been used. One is the Yale face database [7], the other is the JAFFE database [14], and the third is the Extended YaleB database. The datasets have been separated into a training image set and a testing image set. The gallery images form the training image set. The probe images form the testing image set. The images were organized as subjects' folders in the gallery and probe image sets.

TABLE I  DATASETS

Database Name   No. of Images   Parameters Included        Availability of Database
Yale Face       105             Expression, Illumination   Open Source
Jaffe           29              Expression                 Open Source
Cropped Yale    10128           Illumination and Pose      Open Source

B. Experiment
For experimental purposes, the Viola-Jones algorithm was used for cropping the images of the face databases mentioned above. The Viola-Jones algorithm is used for face detection and eases the development of human-machine communication. Image segmentation integrates the detected faces in composite backgrounds and locates face features such as the eyes, nose, mouth and lips. Viola-Jones (VJ) works as follows.
The Viola-Jones face detector uses features as an alternative to pixels, because features encode ad-hoc domain knowledge and are quicker than a pixel-based system. Haar functions are uncomplicated, simple features; these are also called Haar features. Three different types of features are used: the two-rectangle feature, the three-rectangle feature and the four-rectangle feature. Figure 3 shows the rectangular features [19] [17] [13] [20] [22].

Figure 3. Two-rectangle features are shown in A and B. C depicts a three-rectangle feature. D depicts a four-rectangle feature.

C. BIRFR with Other Algorithms
Comparisons were made for BIRFR with other algorithms such as DFR, IRBF, SRC and CLDA for the Cropped Yale dataset, whereas the Yale face dataset was compared with CLDA, Fisherface and PCA against the BIRFR algorithm. A major point to be noted is that for both the BIRFR and BIEFR algorithms the images are blurred, whereas for the other algorithms the images are not blurred.
In [16], they used Dictionary-based Face Recognition (DFR) and observed a rate of 42.14%; DFR is less than BIRFR by 39.822%. In [21] they used Illumination-Robust Recognition of Blurred Faces (IRBF) and obtained 48.57%, which was found to be lesser than BIRFR by 33.410%. In [5], they used the sparse representation-based classification (SRC) algorithm and obtained 19.29%; this SRC result was less than BIRFR by 23.696%. In [9], they used the Classical Fisher linear discriminant analysis (CLDA) [12] algorithm and obtained a 21% recognition rate. The CLDA


algorithm was lesser than BIRFR by 36.986%. In BIRFR, the recognition rate obtained was 81.986%.

TABLE II  BIRFR AND OTHER ALGORITHMS FOR CROPPED YALE DATASET

ALGORITHM   RECOGNITION RATE (%)
BIRFR       81.986
DFR         42.14
IRBF        48.57
SRC         19.29
CLDA        21

1.) Complete linear discrimination analysis (CLDA)
Linear discrimination analysis (LDA) is focused on two different kinds of classification problems, with the purpose of selecting the appropriate projection lines to find the projection direction that maximally differentiates the two types of data points. Although the LDA algorithm attains a good classification effect in image recognition, two difficulties arise: one is the high-dimension vector operations that grow the computational complexity, and the other is that the within-class scatter matrix is always singular. In order to address these problems, the Fisherface [7] algorithm was proposed, which uses Principal Component Analysis (PCA) for dimension reduction before LDA. Nevertheless, the downside of PCA is the accompanying loss of some important identification information. Lastly, they applied the CLDA algorithm for a second-stage feature extraction. By doing so, the proposed procedure can therefore extract more features with identification ability (CLDA, Fisherface [7] [12]).

2.) Yale face dataset

TABLE III  BIRFR AND OTHER ALGORITHMS FOR YALE DATASET

ALGORITHM         RECOGNITION RATE (%)
BIRFR             87.88
CLDA [12]         85.19
Fisherface [12]   84.21
PCA [23]          82

In CLDA, the recognition rate obtained was 85.19%, which was less than BIRFR by 1.23%. In Fisherface, the recognition rate obtained was 84.21% and was lesser than BIRFR by 2.21%. In PCA [23], the recognition rate was found to be 82%, which is less than BIRFR by 4.66%. Compared to the other algorithms, BIRFR's recognition rate was higher and was found to be 86.67%.

D. BIEFR and Other Algorithms
Comparisons between BIEFR and other algorithms like 2DGFD, 2D-DWT (2-Dimensional Discrete Wavelet Transform) and CT-WLD (Contourlet Transform-Weber Local Descriptor) were made for the Yale face dataset, whereas for the JAFFE dataset the CS, SRC, FLLEPCA, HO-SVD and Eigenfaces algorithms were compared with BIEFR.

1.) JAFFE Database
Pradeep Nagesh et al. [CS - Compressive Sensing] viewed the different images of the same subject as an ensemble of inter-correlated signals and assumed that changes due to variation in expressions are sparse with respect to the whole image. They exploited this sparsity using distributed compressive sensing theory, which enabled them to grossly represent the training images of a given subject by only two feature images: one that captures the integrated (common) features of the face, and the other that captures the different expressions in all training samples.

TABLE IV  BIEFR AND OTHER ALGORITHMS FOR JAFFE DATASET

ALGORITHM          RECOGNITION RATE (%)
BIEFR              82
CS [18]            89.94
SRC [5] [18]       90.1
FLLEPCA [2] [10]   93.93
HO-SVD [10] [11]   92.96
Eigenfaces         86

In [5], they used the CS algorithm (Pradeep Nagesh and Baoxin Li (2009)) and obtained 89.94%. They also used the SRC algorithm for expression-invariant face recognition and obtained 90.1% in [5]. The FLLEPCA (Fusion of Locally Linear Embedding and Principal Component Analysis) algorithm showed a result of 93.93% in [3] [10]. In [10], HO-SVD (Higher Order Singular Value Decomposition) showed a recognition rate of 92.96%. In [11], with Eigenfaces, a recognition rate of 86% was obtained by Hua-Chun et al.

2.) Yale Face Dataset
In [15], they used the two-dimensional Gabor Fisher discriminant (2DGFD) and obtained a 70.8% recognition rate. With the two-dimensional Discrete Wavelet Transform (2D-DWT), the recognition obtained was 82.5%. In [10], the Contourlet Transform-Weber Local Descriptor (CT-WLD) was used and the recognition rate obtained was 95.23%. The Eigenfaces algorithm was used in [10] and the accuracy obtained was 86%.


TABLE V  BIEFR AND OTHER ALGORITHMS FOR YALE FACE DATASET

ALGORITHM         RECOGNITION RATE (%)
BIEFR             67.996
2DGFD [15] [10]   70.8
2D-DWT [19] [10]  82.5
CT-WLD [10]       95.23
Eigenfaces [10]   86

V CONCLUSION AND FUTURE WORK
A face recognition system for an unconstrained environment was developed using the BIEFR algorithm. In this algorithm, LBP features were extracted for the blurred, illuminated, expression-variated probe image. Every image in the gallery set was transformed using the optimal TSF and their LBP features were extracted. A simple pre-processing step, FER, was carried out and the reconstructed face images were used for further processing. LBP features of the transformed image and the blurred probe image were compared to find the best match. The BIRFR and BIEFR algorithms were implemented and their results have been discussed in Section IV for the three datasets CroppedYale, Yale face database and JAFFE. BIEFR gives good results even when the probe and gallery sets have images of various illuminations. It was observed that for the BIRFR algorithm, when CroppedYale was used, the recognition rate obtained was 81.986%, and for the Yale face dataset, 87.88%. It was observed that for the BIEFR algorithm, when JAFFE was used, the recognition rate noticed was 82%, and for the Yale face dataset, 67.996%. The system works effortlessly and is robust to conditions like blur, illumination and expressions. The results were improved when expression was removed.
The issues of face recognition were not completely solved using BRFR; a few problems such as illumination and expression variations were combined to form a better face recognition system using the BIRFR and BIEFR algorithms. Yet, certain problems like occlusion, pose variations and images with make-up were not handled, and they still stand as a stumbling block to an efficient FR system. As future work, this FR system can be extended to other challenges like pose, occlusion, and make-up.

REFERENCES
1. A. Abbas, M. I. Khalil, S. Abdel-Hay, H. M. Fahmy, "Expression and Illumination Invariant Preprocessing Technique for Face Recognition", Computer and System Engineering Department, Faculty of Engineering, Ain Shams University, Cairo, Egypt, IEEE, pp. 59-64, 2008.
2. Abhijith Punnappurath and Ambasamudram Narayanan Rajagopalan, "Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose", IEEE Transactions on Image Processing, vol. 13, no. 7, pp. 2067-2082, Jul. 2015.
3. Abusham E., Ngo D., and Teoh A., "Fusion of Locally Linear Embedding and Principal Component Analysis for Face Recognition (FLLEPCA)", in Third International Conference on Advances in Pattern Recognition, ICAPR 2005, Bath, UK, August 2005.
4. Ali Moeini and Hossein Moeini, "Real-World and Rapid Face Recognition Toward Pose and Expression Variations via Feature Library Matrix", IEEE Transactions on Information Forensics and Security, vol. 10, no. 5, May 2015.
5. Andrew Wagner, John Wright, Arvind Ganesh, Zihan Zhou, Hossein Mobahi, and Yi Ma, "Towards a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation".
6. Anubha Pearline S., Hemalatha M., "Face Recognition Under Varying Blur in an Unconstrained Environment", IJRET: International Journal of Research in Engineering and Technology, eISSN: 2179-1103, pISSN: 2181-7168, vol. 05, issue 04, Apr. 2016.
7. Bellhumer P. N., Hespanha J., and Kriegman D., "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Special Issue on Face Recognition, vol. 19, no. 19, pp. 711-712, Jul. 1997.
8. Chao-Kuei Hsieh, Shang-Hong Lai, and Yung-Chang Chen, "Expression-Invariant Face Recognition with Constrained Optical Flow Warping", IEEE Transactions on Multimedia, vol. 11, no. 4, June 2009.
9. A. W. Galli, G. T. Heydt, P. F. Ribeiro, "Exploring the Power of Wavelet Analysis", IEEE, Oct. 1996.
10. Hemprasad Y. Patil, Ashwin G. Kothari and Kishor M. Bhurchandi, "Expression Invariant Face Recognition using Contourlet Transform", Image Processing Theory, Tools and Applications, IEEE, 2014.
11. Hua-Chun, and Z. Yu-Jin, "Expression-independent face recognition based on higher-order singular value decomposition", International Conference on Machine Learning and Cybernetics, proceedings, Kunming, China, July 2008.
12. Kong Rui, Zhang Bing, "An Effective New Algorithm for Face Recognition", International Conference on Computer Science and Intelligent Communication (CSIC), pp. 390-393, 2015.
13. Michael J. Jones and Paul Viola, "Fast Multi-view Face Detection", Mitsubishi Electric Research Laboratories, 2003.
14. Michael J. Lyons, Julien Budynek, and Shigeru Akamatsu, "Coding Facial Expressions with Gabor Wavelets", 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 120-114, Apr. 1998.
15. Mutelo R. M., Woo W. L., and S. S. Dlay, "Discriminant analysis of the two-dimensional Gabor features for face recognition", Computer Vision, IET, 2(2): 19-49, 2008.
16. Patel V. M., Wu T., Biswas S., Phillips P. J., and Chellappa R., "Dictionary-based face recognition under variable lighting and pose", IEEE Trans. Inf. Forensics Security, vol. 7, no. 3, pp. 954-965, June 2012.
17. Paul Viola and Michael J. Jones, "Robust Real-Time Face Detection", International Journal of Computer Vision, vol. 57, no. 2, pp. 97-154, 2004.
18. Pradeep Nagesh and Baoxin Li, Dept. of Computer Science & Engineering, Arizona State University, Tempe, AZ 85287, USA, "A Compressive Sensing Approach for Expression-Invariant Face Recognition", pp. 1518-1514, 2009.
19. Ronen Basri and David W. Jacobs, "Lambertian Reflectance and Linear Subspaces", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, February 2003.
20. Tai Y. W., Tan P., and Brown M. S., "Richardson-Lucy deblurring for scenes under a projective motion path", IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 8, pp. 1003-1018, Aug. 2011.
21. Vageeswaran P., Mitra K., and Chellappa R., "Blur and illumination robust face recognition via set-theoretic characterization", IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 962-972, Apr. 2013.
22. Viola P. and Jones M., "Rapid Object Detection using a Boosted Cascade of Simple Features", in Proceedings of Computer Vision and Pattern Recognition, pp. 511-518, 2001.
23. Yan H., Wang P., Chen W. D., Liu J., "Face Recognition Based on Gabor Wavelet Transform and Modular 2DPCA", College of Computer Science, Chongqing University of Technology, Chongqing, China.


AUTHORS PROFILE

Ms. S. Anubha Pearline received her B.E. in Computer Science and Engineering in 2014. She pursued her M.Tech at Anna University, Madras Institute of Technology Campus. Her area of interest is Image Processing.

Ms. M. Hemalatha, M.E. [Communication & Networking], is an Assistant Professor at Anna University, Madras Institute of Technology Campus. She is pursuing a PhD at IIT Madras; her area of research is Pattern Recognition. She did her M.E. at Madras Institute of Technology and her B.Tech [Information Technology] at Panimalar Engineering College. Her areas of interest are Image Processing and Networking.


Segmentation based Security Enhancement for Medical Images
G. Vallathan K. Balachandran
Department of Electronics and Communication Engineering Department of Electronics and Communication Engineering
Pondicherry Engineering College, Pondicherry, India. Pondicherry Engineering College, Pondicherry, India.
gvallathan@gmail.com seeatuk@gmail.com
K. Jayanthi
Department of Electronics and Communication Engineering
Pondicherry Engineering College, Pondicherry, India.
jayanthi@pec.edu

Abstract— Medical image transmission is substantial in information technologies in order to offer healthcare services at distant places. The features of the medical images may be changed deliberately, as the communication might go through the internet. Before considering the patient diagnostic assessments, the doctor has to validate the reliability of the region of interest in the received image in order to avoid misdiagnosis. A novel framework has been proposed to offer a secured healthcare solution which comprises segmentation of the region of interest (ROI), steganography and an encryption technique. This framework also ensures robustness for the embedded information in the non-region of interest and recovers the ROI perfectly for diagnosis. In the proposed method, the medical image is segmented into ROI and NROI regions using the Bhattacharya coefficient segmentation technique. Patient medical information like blood pressure level, sugar level, etc., is embedded into the non-region of interest. Finally, the entire image is encrypted using the Logistic map encryption technique in order to ensure security. Experimental outcomes prove that the proposed framework affords robustness in terms of image quality, security and reliability, which helps to alleviate misdiagnosis at the Physician end in a telemedicine application scenario.

Keywords- Bhattacharya coefficient segmentation, EMD Data hiding technique, Logistic Map Encryption

I. INTRODUCTION
Telehealth care services allow the transmission of medical information like images and data between the patient and the doctors. For treating the patients, medical images help doctors to determine the appropriate diagnostic measures [1, 2]. Medical information plays a vital role in providing critical health care services [3]. It is also important to check the validity of medical information, as it could have been altered in the communication medium. When this scenario comes into the picture, data integrity of the region of interest (ROI) is the prime requirement in medical applications for proper diagnosis.
Segmentation of MRI images plays a substantial role in medical practice. Automatic brain tumor segmentation remains a challenging task, in addition to its computational complexity. MRI images are generally interpreted by the doctor in terms of visual perception [4]. Statistical pattern recognition based techniques [5] fail, partly because huge distortions happen in the intracranial nerves due to the growth of the brain tumor. Generally, fuzzy models typically use thresholding techniques to improve the non-enhanced tumor regions.
In [6], G. Ulutas et al. have proposed secret image sharing with reversible capabilities. In that work, LSB is used for embedding the information in the original image, in which the secret information is entrenched in the LSB plane of the original image. Conversely, it is simple to sense the presence of the hidden information and the method also has little embedding capacity. In [7], the author has discussed pixel value differencing steganography, in which the information bits are entrenched into pixel pairs; it correspondingly has inferior embedding capacity with higher security, along with additional computational time. Linjie Guo et al. [8] have proposed a uniform embedding approach for efficient JPEG steganography, in which the DCT coefficients are almost Laplacian distributed with small magnitude. As a result, most of the coefficients are altered due to the (no shrinkage) nsf5 embedding strategy, which happens around bin zero. In [9], Ki-Hyun Jung et al. have proposed an enhanced embedding technique for exploiting modification direction, which produces a high capacity and good PSNR compared with other methods relating to the EMD. The idea behind the proposed data hiding method is that each secret digit in a (2n+1)-ary notational system can be carried by one cover pixel. By using one pixel for cover data, the method achieves a capacity double that of the EMD method.
In [10], the author has proposed a 3D baker map encryption technique. It requires a huge space for the key and the sensitivity of the key is high. In [11], the authors have proposed an advanced encryption scheme that yields improved security because of the use of multiple chaotic signals. An additional common feature in chaos-based communication algorithms is the combination of encryption and synchronization. Due to external errors, the sensitivity of the synchronization is maximized. In [12], the author has proposed a novel two-channel communication scheme using chaotic systems. It involves all chaotic states and can be preferred to yield robust sensitivity to the encryption error and consequently guarantee a high level of security.
The designed framework is organized as follows: Section I surveys the introduction and existing techniques in segmentation, data embedding and encryption. Section II offers the proposed framework. Next, in Section III,


experimental outcomes and discussion are presented. Finally, Section IV provides the conclusion drawn from the outcomes of the work.

II. PROPOSED FRAMEWORK
The proposed framework comprises modules to perform segmentation, data embedding and confidentiality, as shown in the form of a flowchart in Fig. 1. Initially, the framework uses an efficient segmentation algorithm to fragment the image into ROI and NROI. The critical patient data is embedded in the image using the EMD data embedding algorithm [13]. The tumor detection is done using the Bhattacharya coefficient segmentation algorithm. Subsequently, the image is encrypted using the logistic map encryption technique and then transmitted. Similarly, at the receiver, the image is decrypted and segmented into ROI and NROI. Then, the concealed data is extracted from the embedded image, using which the ROI is checked at the receiver side to avoid data loss in the image. From the obtained results, it is observed that the framework affords a better image quality in terms of PSNR, MSE and SSIM, which helps to alleviate erroneous diagnosis at the Physician end in the telemedicine scenario. The entire process is executed in terms of three phases, which are discussed in detail subsequently.

A. Segmentation Phase
In [14], Jianping Fan et al. have proposed an automatic seeded region growing algorithm, along with a boundary-oriented parallel pixel labeling technique and an automatic seed selection and seed tracking method. The seeds in the region, which are positioned inside the temporal alteration mask, are designated for making the regions of moving objects. Experimental evaluation shows that the performance of the work is better for a large variety of images without the necessity of adjusting any parameters. In [15], the author has proposed a watershed-based segmentation technique in which the least extensive alterations possible to an original image were obtained. The algorithm has proven to be less disruptive, more flexible and also less sensitive to noise than conventional methods.
In [16], the author has made an analysis of different segmentation algorithms, out of which the region growing algorithm has more de-noising capability with the highest PSNR value. In the proposed work, a Bhattacharya coefficient segmentation technique is involved. This is because it is a fast, automated and accurate segmentation algorithm that evades the above-said limitations by localizing a rectangular box around the brain tumor in an MRI image [17]. The Bhattacharya coefficient algorithm is applied over MRI brain tumor images to extract the details pertaining to the tumor part of the brain. A bounding box algorithm is employed in the segmentation, in which the query based on the size and the location of the tumor is almost quantified, and it operates in two consecutive stages. Initially, the MRI images are employed to determine the bounding boxes on every image. Then, to identify the tumor, the bounding boxes are clustered. These stages are designated in the subsequent sections. For finding bounding boxes on MRI images, the change detection principle has been used, where a region of change (D) in the image is automatically distinguished on an input image (I) when compared with a benchmark image (R), as in Fig. 2. Subsequently, an axis is determined on the image: the leftward half works as the input image I, and the rightward half serves as the benchmark image.

Fig.1. Framework of proposed model which comprises segmentation, Modified EMD data embedding and Logistic map Encryption

The change in region D is limited to remain an axis-parallel rectangle, which significantly targets the abnormality in the image. In Fig. 2, a parallel line is drawn at a distance S from the uppermost part of the images. Now consider the regions A(s) = [0, W] x [0, S] and B(s) = [0, W] x [S, H], where W and H are the width and the height of the trial image I and the benchmark image R. Therefore A(s) and B(s) are the portions of the image domain respectively above and below the aforesaid parallel line.

Fig.2: Determining D from image I, using a benchmark image

Let E(s) signify the resulting score function, which is given by:

E(s) = BC(P_I^A(s), P_R^A(s)) - BC(P_I^B(s), P_R^B(s))   (1)

where the P's denote normalized intensity histograms, the subscripts designate whether the histogram belongs to the trial


image (I) or benchmark image (R), and the superscripts symbolize whether the histogram is calculated inside the region A(s) or inside the region B(s). For instance, P_I^A(s) signifies the normalized intensity histogram of the trial image I in A(s). ⟨X, Y⟩ denotes the inner product between two vectors X and Y. The inner product between the square roots of two normalized histograms is known as the Bhattacharya coefficient (BC) (shown in Eqn. 2), which is a real number between 0 and 1 that measures the relationship between the two histograms. The Bhattacharya coefficient value is 1 when the two normalized histograms are identical; e.g. E(S) = 1 - 0.8 = 0.2, and since the value 0.2 is positive, the tumor is in the right lobe. When the Bhattacharya coefficient value is 0 the histograms are completely different, and if the score value is negative, it denotes that the tumor is in the left lobe.

BC(a, b) = Σ_s sqrt(a(s) · b(s))   (2)

Once the upper histograms match very well there is the expectation of a high score, whereas the lower histograms have a huge mismatch. On the other hand, a low value of E(s) denotes a low correlation between the upper histograms and a high correlation between the lower histograms. Fig. 3 depicts the detection of the brain tumor for six different MRI images using Bhattacharya coefficient based segmentation, and Fig. 4 shows the result of the segmented image and the extraction of the ROI in MRI images. For analysis, the data set has been taken from oasis-brains.org.

Fig.3: (a),(b),(c),(d),(e) Bhattacharya coefficient based segmentation output for six different MRI brain images

Fig 4: (a) Input Image (b) Segmented Image (c) ROI
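A compact sketch of the score computation in equations (1) and (2) is given below. The histogram bin count and the sign convention (upper-region BC minus lower-region BC) are inferred from the worked example in the text rather than stated explicitly by the authors, so both are assumptions.

```python
import numpy as np

def bhattacharyya(a, b):
    """Bhattacharya coefficient of two normalized intensity histograms, eq. (2)."""
    return np.sum(np.sqrt(a * b))

def score(I, R, s, bins=64):
    """Score E(s) for a horizontal cut at row s, following eq. (1):
    BC of the regions above the cut minus BC of the regions below it."""
    def hist(img):
        h, _ = np.histogram(img, bins=bins, range=(0, 255))
        return h / max(h.sum(), 1)
    upper = bhattacharyya(hist(I[:s]), hist(R[:s]))
    lower = bhattacharyya(hist(I[s:]), hist(R[s:]))
    return upper - lower
```

Sweeping s over the image rows (and the analogous cut over columns) and looking for the extrema of this score is what localizes the bounding box around the abnormal region.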

B. Modified EMD Embedding Process
The steganography technique entrenches the patient medical information into a cover image. Here, the embedding process is achieved using the Exploiting Modification Direction (EMD) algorithm. It involves each secret digit in a (2z+1)-ary notational system being carried by one cover pixel. By using one pixel for cover data, the method attains a capacity twice that of the EMD method. For a cover pixel value bi (e.g. 153, as shown in Fig. 5), the extraction function value is defined as

a = (bi + u) mod (2z + 1)   (3)

As an example, the new pixel value bi' = 152 shown in Fig. 6 is obtained by using eqn (4):

bi' = bi + u   (4)

In the extraction method, a secret digit c is calculated using eqn (5):

c = bi' mod (2z + 1)   (5)

Fig.5: An example of embedding method

Fig.6. An example of extracting method

For example, let z = 2, bi = 153, and c = 2. In the initial circumstance, a is calculated by using equation (3): a = (bi + u) mod (2z + 1) = (153 + 2) mod 5 = 0, (153 + 1) mod 5 = 4, (153 + 0) mod 5 = 3, (153 + (-1)) mod 5 = 2, (153 + (-2)) mod 5 = 1 for each u value. Since a is identical to c when u = -1, the new pixel value bi' = bi + u = 153 + (-1) = 152 is attained, as shown in Fig. 5, and the extraction function at the receiver is also illustrated in Fig. 6.
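The embedding and extraction rules (3)-(5) can be checked against this worked example directly. A minimal Python sketch, assuming z = 2 and ignoring the boundary handling needed for pixels near 0 and 255:

```python
def embed(bi, c, z=2):
    """Embed one (2z+1)-ary secret digit c into cover pixel bi (eqs (3)-(4)):
    choose the shift u in [-z, z] whose extraction value matches c."""
    m = 2 * z + 1
    for u in range(-z, z + 1):
        if (bi + u) % m == c:        # eq. (3)
            return bi + u            # eq. (4): stego pixel value
    raise ValueError("unreachable: some u always matches")

def extract(bi_stego, z=2):
    """Recover the secret digit from the stego pixel (eq. (5))."""
    return bi_stego % (2 * z + 1)

# Worked example from the text: z = 2, cover pixel 153, secret digit 2.
stego = embed(153, 2)        # -> 152 (u = -1)
assert extract(stego) == 2
```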


C. Logistic Map Encryption
Finally, a distinct security enhancement has been made to ensure a high level of security for the image and data. Encryption is performed using the logistic map to ensure security during transmission. It is a one-dimensional chaotic system with V as the output and input variable and two initial conditions, V0 and λ, which are related using

Vn+1 = λ Vn (1 - Vn)   (6)

with V ∈ (0, 1) and λ ∈ (0, 4), in which the chaotic behavior is achieved when λ ∈ (3, 4). In this encryption algorithm, a logistic map has been used to shift and shuffle the pixel values by using Pixel Mapping Tables (PMT). Two approaches are involved in this algorithm, namely the pixel replacement and pixel scrambling approaches. In the pixel replacement approach the values of the pixels are modified, whereas in pixel scrambling the pixel positions are modified. In this work, the pixel replacement approach has been used, i.e. pixel(i, j) = PMT1(index), where index = mod(pixel(i, j) + shift, pixel value) and shift = C * random(logistic), with two Pixel Mapping Tables (PMT) created by using the logistic map (Key1). The pixel mapping table contains the pixel values ranging from 0 to 255 in a shuffled order, with size 256 x 1.
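One possible reading of this pixel-replacement scheme is sketched below. The way the logistic sequence is turned into a 256-entry permutation and into the shift value is an assumption (the text only states that the PMT holds the values 0-255 in shuffled order and that the shift is derived from the chaotic sequence), and the key values are placeholders.

```python
import numpy as np

def logistic_sequence(n, v0=0.3567, lam=3.99):
    """Generate n values of the logistic map V(n+1) = lam*V(n)*(1-V(n)), eq. (6).
    v0 and lam act as the secret key; the values here are placeholders."""
    v, seq = v0, []
    for _ in range(n):
        v = lam * v * (1.0 - v)
        seq.append(v)
    return np.array(seq)

def make_pmt(key_v0, key_lam):
    """Build a 256x1 pixel mapping table: the values 0..255 shuffled by
    ranking a logistic-map sequence (one possible construction)."""
    order = np.argsort(logistic_sequence(256, key_v0, key_lam))
    return order.astype(np.uint8)

def encrypt(img, key_v0=0.3567, key_lam=3.99, C=97):
    """Pixel-replacement sketch: index = (pixel + shift) mod 256,
    cipher pixel = PMT[index].  'C * random(logistic)' is interpreted
    here as scaling one chaotic value; this is an assumption."""
    pmt = make_pmt(key_v0, key_lam)
    shift = int(C * logistic_sequence(1, key_v0, key_lam)[0]) % 256
    idx = (img.astype(np.uint16) + shift) % 256
    return pmt[idx]
```

Decryption would invert the table (mapping each PMT value back to its index) and subtract the same key-derived shift.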
III. EXPERIMENTAL OUTCOMES AND DISCUSSION
The proposed outcomes are accomplished with two sorts of imaging modalities, CT and MRI images. Two distinct segmentation algorithms, namely region growing and watershed, were compared with the proposed approach for five different brain tumor images. For all segmentation approaches, performance metrics such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) were computed for comparison. The accomplished outcomes indicate that the proposed work attains a substantial enhancement in all the parameters consistently at the receiver side. This is owing to the fact that it is a fast, automated and accurate segmentation algorithm that evades these sorts of shortcomings by localizing a rectangular box around the brain tumor in an MRI image. From Table I it is concluded that there is a dramatic rise in PSNR value from 57.794 dB to 63.107 dB, the MSE value reduces from 0.0348 to 0.0160, and the SSIM value also increases from 0.9856 to 0.99 for MRI images when compared with the conventional segmentation algorithms. Finally, the SSIM value for the region growing algorithm attains the value of 0.9824, whereas the proposed technique offers a high value of SSIM (an average of 0.99), which is clearly indicated in Table I.
Table II illustrates that the performance of the proposed work is better than the conventional embedding algorithms in terms of payload. Two distinct embedding algorithms, namely LSB and nsf5, were compared with the proposed work for different payloads, to analyze the performance in terms of the significant metrics employed in Table I. The accomplished results indicate that the proposed work achieves a substantial enhancement in all the parameters consistently. This is owing to the fact that the enhanced exploiting modification direction technique associates different pixel clusters of the cover image to symbolize more embedding directions with fewer pixel changes than the EMD method. By selecting the appropriate combination of pixel groups, the embedding efficiency and the perceptual quality of the stego image are enriched. From the obtained results, it is inferred that when the payload is 0.01 bpac there is a dramatic increase in PSNR value from 62.9 dB to 66.07 dB, and when the payload is 0.05 bpac there is an improvement in PSNR value from 55.81 dB to 62.12 dB. Secondly, the MSE value reduces from 0.0548 to 0.0160 and from 0.402 to 0.0179 when the payload is 0.01 and 0.05 respectively. Finally, the SSIM value for nsf5 attains the value of 0.9856, whereas the proposed framework offers a high value of SSIM (an average of 0.99), which is emphasized in Table II. From the facts stated, the investigational results prove that the embedding efficiency and the perceptual quality of the EMD algorithm are enriched.

TABLE I. PERFORMANCE ANALYSIS OF DIFFERENT SEGMENTATION ALGORITHMS

                               Watershed                     Region growing                Proposed
S.no  Image type         PSNR     SSIM     MSE        PSNR     SSIM    MSE        PSNR     SSIM    MSE
1     MRI Brain image    54.170   0.81692  0.0891     57.794   0.9856  0.0348     63.107   0.9965  0.0160
2                        56.228   0.81682  0.0926     56.095   0.9776  0.0431     61.591   0.9937  0.0166
3                        47.24    0.81671  0.1016     58.23    0.9721  0.0422     59.089   0.9921  0.0167
4                        49.917   0.81664  0.1295     60.004   0.9692  0.0479     62.389   0.9801  0.0198
5                        59.365   0.81653  0.2946     57.2841  0.9683  0.0402     61.012   0.9872  0.0279

TABLE II. PERFORMANCE ANALYSIS OF PAYLOAD IN DIFFERENT STEGANOGRAPHY ALGORITHMS

                         LSB                           nsf5                          PROPOSED
S.No  Payload (bpac)     PSNR   SSIM     MSE           PSNR    SSIM    MSE           PSNR    SSIM    MSE
1     0.01               51.2   0.81692  0.0891        62.9    0.9856  0.0548        66.07   0.9965  0.0160
2     0.02               48.2   0.81682  0.0926        59.81   0.9776  0.0431        65.91   0.9937  0.0166
3     0.03               46.4   0.81671  0.0996        58.23   0.9721  0.0322        64.89   0.9921  0.0167
4     0.04               45.2   0.81664  0.1295        56.87   0.9692  0.479         63.89   0.9906  0.0173
5     0.05               44.2   0.81653  0.1958        55.81   0.9683  0.402         62.12   0.9892  0.0179


IV. CONCLUSION
This proposal aims to put forth a secured telemedicine framework suitable for a health care scenario. In this proposed work, segmentation, steganography and encryption algorithms are collectively used to conceal patient diagnostic information in an image. The framework comprises a segmentation algorithm to detect the tumor pixels in an image. In telemedicine applications, data integrity of the region of interest (ROI) is the prime requirement for proper diagnosis; an unsupervised automatic method for brain tumor segmentation of MRI images is therefore employed to obtain the ROI. To detect tampering within the ROI or alteration of its semantics, the use of multimedia hashes turns out to be an effective solution to offer proof of legitimacy of the ROI. Next, the patient's critical information is embedded into the image using the EMD algorithm to uphold the quality of the image. During the transmission of data through an unsecured channel, the privacy of patient data should be defended as per the Health Insurance Portability and Accountability Act (HIPAA), under Telemedicine Legal Policy. Finally, an encryption technique using a logistic map is employed before the actual transmission. In the last stage, the encrypted stego image is transmitted over the channel to analyze the robustness of the image. Similarly, at the receiver the image is decrypted and segmented into ROI and NROI; the concealed data is then extracted from the embedded image and verified for data integrity. The sequence of modules in the proposed framework affords a better image quality in terms of PSNR, MSE and SSIM, which helps to alleviate misdiagnosis at the physician end in telemedicine applications.

REFERENCES

[1] Das, S., Kundu, M.K.: "Effective management of medical information through a novel blind watermarking technique", J. Med. Syst., 2012, 36, (5), pp. 3339–3351.
[2] Li, X.W., Kim, S.T.: "Optical 3D watermark based digital image watermarking for telemedicine", Opt. Lasers Eng., 2013, 51, (12), pp. 1310–1320.
[3] Deng, X., Chen, Z., Zeng, F., Zhang, Y., Mao, Y.: "Authentication and recovery of medical diagnostic image using dual reversible digital watermarking", J. Nanosci. Nanotechnol., 2013, 13, (3), pp. 2099–2107.
[4] N. Ray, Russell Greiner, Albert Murtha, "Using Symmetry to Detect Abnormalities in Brain MRI," Computer Society of India Communications, vol. 31, no. 19, pp. 7–10, January 2008.
[5] N. Moon, E. Bulitt, K. Leemput and G. Gerig, "Model-based brain and tumor segmentation", Proceedings of ICPR, vol. 1, pp. 528–531, Quebec, 2002.
[6] G. Ulutas, M. Ulutas and V. V. Nabiyev, "Secret image sharing with reversible capabilities," Int. J. Internet Technology and Secured Transactions, vol. 4, no. 1, pp. 1–11, 2012.
[7] K. Suresh Babu, K. B. Raja, Kiran Kumar K., Manjula Devi T. H., Venugopal K. R. and L. M. Patnaik, "Authentication of Secret Information in Image Steganography," IEEE TENCON, pp. 1–6, 2008.
[8] Linjie Guo, Jiangqun Ni and Yun Qing Shi, "Uniform Embedding for Efficient JPEG Steganography", IEEE Transactions on Information Forensics and Security, vol. 9, no. 5, May 2014.
[9] Ki-Hyun Jung, Kee-Young Yoo, "Improved Exploiting Modification Direction Method by Modulus Operation", International Journal of Signal Processing, Image Processing and Pattern, vol. 2, no. 1, March 2009.
[10] Yaobin Mao, Guanrong Chen, Shiguo Lian, "A novel fast image encryption scheme based on 3-D chaotic baker maps", International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 14, issue 10, October 2002.
[11] T. Yang, C. W. Wu and L. O. Chua, "Cryptography based on chaotic systems," IEEE Trans. Circuits Syst. I, vol. 44, pp. 469–472, May 1997.
[12] Zhong-Ping Jiang, "A note on chaotic secure communication systems", IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 49, no. 1, January 2002.
[13] Vallathan, Balachandran, Jayanthi: "Provisioning of enhanced medical data security and quality for Telemedicine applications", International Conference on Recent Trends in Engineering and Material Sciences, 17–19 March 2016.
[14] Jianping Fan, Guihua Zeng, Mathurin Body, Mohand-Said Hacid, "Seeded region growing: an extensive and comparative study", Pattern Recognition Letters, 26 (2005), pp. 1139–1156.
[15] André Bleau and L. Joshua Leon, "Watershed-Based Segmentation and Region Merging", Computer Vision and Image Understanding, pp. 317–370, 2000.
[16] Monika Xess, S. Akila Agnes, "Analysis of Image Segmentation Methods Based on Performance Evaluation Parameters", International Journal of Computational Engineering Research, vol. 04, issue 3.
[17] Jayalaxmi S. Gonal, Vinayadatt V. Kohir, "Automatic Detection and Segmentation of Brain Tumors using Binary Morphological Level Sets with Bounding Box," Proceedings of the 3rd International Conference on Computer Engineering and Bioinformatics (ICCEB 2013), November 23–24, 2013, pp. 37–43.


Efficient Stereoscopic 3D Video Transmission over


Multiple Network Paths
Vishwa Kiran S, Thriveni J, Venugopal K R
Dept. of Computer Science and Engineering
University Visvesvaraya College of Engineering, Bangalore, India
nimmakiran@yahoo.com

Raghuram S
Pushkala Technologies Pvt. Ltd., Bangalore, India

Abstract—Sizable adoption of ICT-enabled diagnosis techniques by doctors and last-mile Internet connectivity are largely enabling the possibility of remotely viewing the ailment conditions of persons in need of medical assistance. Live 3D video streaming happens to be the most sought-after technological option for such a need. Prevailing network conditions and video encoding techniques experience limitations in efficiently streaming 3D video within available network bandwidth constraints. This work proposes a unique technique of streaming live stereoscopic 3D video content encoded using the H.264/MVC algorithm by distributing its network stream into multiple RTP sessions over multiple network paths to a single cloud aggregation server.

Keywords-3D Video; H.264/MVC; 3G/4G; WiFi; ADSL; Cloud aggregation server

I. INTRODUCTION

There is a growing trend of telemedicine/online doctors driving remote diagnosis and treatment facilities. Online doctors can examine patients remotely through video conference. Currently there is a disparity between the upload and download speeds of Internet connectivity due to ADSL and similar technology implementations. This puts a limitation on home-based patients being diagnosed remotely using 3D video technology [1]. The proposed novel technique intends to utilize more than one available Internet connection source, such as 3G/4G, ADSL and VDSL. WiFi+ADSL and 4G LTE based Internet links are ubiquitous now; hence, usability of at least two network links is feasible. Figure 1 gives a pictorial representation of the proposed network scenario.

Figure 1. Multiple network path upload scenario.

II. PROBLEM STATEMENT

In general, telemedicine applications intend to use the highest possible video quality for live transmission and storage. Possible screen resolutions and the required stereoscopic video bit rates are listed in Table I; audio bit rates are not considered. The most basic high-quality video resolution is 480p HQ, producing stereoscopic video at a 3.072 Mbps bit rate. Considering the available upload speeds of various Internet service providers as listed in Table II, no single network's upload speed meets the minimum requirement of 3.072 Mbps for live streaming. In this scenario, considering a video resolution higher than 480p HQ would not be feasible. To overcome this limitation, this work proposes a unique network architecture utilizing multiple Internet connections to stream stereoscopic video to a cloud aggregation server. In specific, this work analyzes, through simulation, the number of individual Internet connections required to achieve low-latency live streaming paths.

Further sections describe the proposed architecture in detail and briefly bring out the literature survey conducted. The paper concludes with simulation results and the inferences derived.

TABLE I. VIDEO BIT RATES, DERIVED FROM HTTP://WWW.LIGHTERRA.COM/PAPERS/VIDEOENCODINGH264/

Name             Resolution   Mono Video (kbps)   Stereo Video (kbps)
240p             424x240      576                 1152
360p             640x360      896                 1792
432p             768x432      1088                2176
480p             848x480      1216                2432
480p HQ          848x480      1536                3072
576p             1024x576     1856                3712
576p HQ          1024x576     2176                4352
720p             1280x720     2496                4992
720p HQ          1280x720     3072                6144
1080p            1920x1080    4992                9984
1080p HQ         1920x1080    7552                15104
1080p Superbit   1920x1080    20000               40000
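Table I and Table II together drive the feasibility argument: the consolidated upload capacity must cover the stereo bit rate of the chosen resolution. The short sketch below is an illustration only (not the authors' code); it picks the highest Table I resolution whose stereo bit rate fits within a given consolidated upload budget.

```python
# Stereo bit rates (kbps) from Table I.
STEREO_KBPS = {
    "240p": 1152, "360p": 1792, "432p": 2176, "480p": 2432, "480p HQ": 3072,
    "576p": 3712, "576p HQ": 4352, "720p": 4992, "720p HQ": 6144,
    "1080p": 9984, "1080p HQ": 15104, "1080p Superbit": 40000,
}

def best_resolution(upload_kbps_per_link):
    """Return the highest Table I resolution whose stereo bit rate fits
    within the consolidated upload capacity of all links combined."""
    total = sum(upload_kbps_per_link)
    feasible = [(rate, name) for name, rate in STEREO_KBPS.items() if rate <= total]
    return max(feasible)[1] if feasible else None

# Example: ADSL (~420 kbps up) alone vs. ADSL + 3G/4G (~1360 kbps up) combined.
print(best_resolution([420]))         # no stereoscopic profile fits
print(best_resolution([420, 1360]))   # a low profile such as 240p fits
print(best_resolution([950, 2132]))   # roughly the Test 1 mean rates -> 480p HQ
```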

III. LITERATURE SURVEY

The limitations of 4G [2] are very much evident in terms of spectrum allocation and depend on the country concerned. More often, due to regulatory reasons, the available spectrum is underutilized. Considering all odds, 4G networks still happen to be the more promising and efficient wireless Internet connectivity option for medical applications [3].


Figure 2. Upload speeds of the ISPs under experimentation.

TABLE II. MEASURED SPEED OF VARIOUS ISPS AT RANDOM DAYS AND TIME INTERVALS

Service Provider   Sample Set 1 (Mbps)      Sample Set 2 (Mbps)      Sample Set 3 (Mbps)
                   Downlink   Uplink        Downlink   Uplink        Downlink   Uplink
ISP-1 4G           18.16      0.60          16.16      0.54          19.34      0.68
ISP-1 3G           4.38       0.08          3.10       0.29          4.43       1.36
ISP-1 ADSL         1.88       0.25          2.20       0.42          2.15       0.39
ISP-2 Fiber Net    0.47       0.53          0.89       0.79          0.76       0.62
ISP-3 VDSL         12.28      0.84          14.46      0.95          11.34      0.79

Stereoscopic video encoding [4-16] is one of the fundamental techniques in 3D video encoding schemes, and there have been numerous research efforts in efficiently encoding stereoscopic 3D content and transmitting it over Internet and mobile networks. H.264/MVC is one of the widely accepted encoding standards for 3D video streaming applications and is very suitable for low-latency, high-speed networks [17-25].

There have been attempts to successfully transmit/stream 3D video for medical applications over 4G networks [3]. Tele-operation of medical assistance by expert doctors is a reality now, and the advent of 3D transmission and viewing of the patient's condition has given a tremendous boost to this segment.

Chaminda et al., in their work [3], have listed certain challenges involved in taking 3D video from capture to viewing on remote devices, especially in the medical domain. One of the challenges is the network bandwidth capacity for transmitting/streaming good-quality or high-definition 3D video. The same work claims that current 4G bandwidth availability can cater only partially to 3D video streaming requirements. To overcome this limitation, the authors of [3] suggest utilizing asymmetric 3D stereoscopic encoding and other related techniques.

These proposed techniques are good, but they are still limited by additional video processing algorithms and again by 4G capacity and the ubiquity of 4G network deployments.

To overcome these limitations and to reduce the computational load on battery-operated mobile handheld devices, we propose a novel idea of splitting the streaming process of the right and left channels onto at least two available Internet connections, say one 4G and the other a WiFi network. A WiFi network that reaches the Internet through an ADSL link does not have high upload bandwidth capacity by itself; hence the combination of both WiFi/ADSL and 4G is deemed to give high bandwidth capacity. 3D video processing is a challenging task [7]; there are variations in video processing and encoding techniques, and each algorithm or standard has its own advantages and limitations. In general, H.264 is one of the major industry standards for video encoding and decoding.

It is evident that a stereoscopic 3D video consumes twice the bandwidth compared to a standard video transmission. H.264/MVC [23] happens to be the most preferred multi-view coding technique for commercial 3D applications and devices.


Figure 3. Proposed Architecture of Multiple Network Transmission Technique.

An alternate approach to capturing, streaming and viewing stereoscopic video incorporates the older H.264/AVC, where the right and left channels are encoded independently; this is referred to as Simulcast [7]. Futuristic 3D video HEVC techniques will be based on H.264/MVC. H.264/MVC encodes both the right and left views concurrently, resulting in two interdependent bit streams (BS), followed by a multiplexer which interleaves the frames of each channel, culminating in a single transport stream (TS) [7]. The TS is further packetized by the Network Abstraction Layer (NAL) into various formats to suit the network requirement [3, 21, 26]. Real-time Transport Protocol (RTP) [27-29] running over User Datagram Protocol (UDP) is the most widely used approach for streaming audio and video data.

IV. ARCHITECTURE

The proposed architecture exploits the possibility of Multi-Session Transmission (MST) [2, 26] defined for H.264/MVC RTP sessions [28]. Transmit (Tx) and Receive (Rx) Session Managers interface with the NAL of the H.264/MVC encoder and decoder respectively to distribute and consolidate RTP sessions onto multiple networks. Figure 3 depicts the proposed approach pictorially. The Tx Session Manager (TSM) at the encoder is the key entity, which calculates the number of RTP sessions required based on the number of networks available and the upload capacity of each network. The TSM communicates with the Rx Session Manager (RSM) to open an equal number of RTP receiving sockets. The Interleaved CS-DON (I-C) packet-based mode [26] of MST in the NAL is configured in the proposed architecture. This mode enables interleaving and thereby overcomes the limitation of low-latency networks. Another advantage of using this mode is the Cross-Session Decoding Order Number (CS-DON) [26], which facilitates easy and efficient recovery of the decoding order of packets across all RTP sessions.

In the proposed architecture, the captured stereoscopic 3D video is transmitted to a Cloud Aggregation Server (CAS) rather than directly to single or multiple viewers (using a multicast technique). This approach gives the flexibility of storing the captured video for further analysis and, more importantly, one cloud-based viewing endpoint.

Round Trip Time (RTT) is used to estimate the upload speed capacity of each available network connection. Download speed is not a matter of concern, because almost all network types provide download speeds of 2 Mbps or greater; hence this work does not focus on that aspect.
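The TSM/RSM behaviour described above can be pictured with a toy example. The sketch below is only a conceptual illustration of splitting an encoded packet stream across several UDP sockets and restoring decoding order at the receiver with a cross-session sequence number; it does not implement the actual H.264/MVC NAL unit or RTP payload formats, and all names and addresses are hypothetical.

```python
import socket
from itertools import cycle

def send_over_multiple_paths(nal_units, destinations):
    """Round-robin NAL units over one UDP socket per network path.
    Each datagram carries a 4-byte cross-session decoding order number
    so the receiver can restore decoding order across sessions."""
    socks = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in destinations]
    paths = cycle(range(len(destinations)))
    for don, unit in enumerate(nal_units):
        i = next(paths)
        datagram = don.to_bytes(4, "big") + unit
        socks[i].sendto(datagram, destinations[i])
    for s in socks:
        s.close()

def restore_order(datagrams):
    """Receiver side: sort datagrams from all sessions by their DON prefix."""
    decoded = [(int.from_bytes(d[:4], "big"), d[4:]) for d in datagrams]
    return [payload for _, payload in sorted(decoded)]

# Hypothetical usage: two upload paths (e.g. ADSL and 4G) towards the CAS;
# loopback addresses are used here only so the example runs anywhere.
units = [b"unit-%d" % i for i in range(6)]
send_over_multiple_paths(units, [("127.0.0.1", 5004), ("127.0.0.1", 5006)])
```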


V. SIMULATION AND RESULTS

A comparative analysis of the effect of network speed and video streaming bit rate is carried out in this simulation. Packet buffering trends and the requirement for multiple networks are analyzed, along with the maximum and minimum latency encountered. GNU/Octave is used for the simulation. Twelve minutes of video transmission are simulated and measurement samples are taken every second, making the sample set equal to 720 measurements. 1000 iterations of the simulation were run before concluding the results.

It is observed from our measurements (listed in Table II) that no single network's upload speed is viable for uploading 480p HQ 3D video. In this regard we have considered two networks, one ADSL and the other a 3G or 4G network. Two networks is a reasonable set because the majority of mobile devices can simultaneously connect to WiFi and 3G/4G networks. Since WiFi connectivity is symmetric and its speed is relatively higher than any of the Internet connectivity speeds, its effect on the simulation is of little consequence and is hence ignored.

The bit rate of 3072 kbps for 480p HQ stereoscopic video has been derived from Table I. It is implied that only if the consolidated upload speed is equal to or greater than 3072 kbps throughout the streaming duration is the video transmitted without any delay or buffering. In a practical network scenario this is seldom true, and network speed is always a dependent variable of various factors including network load, distance between transmitter and receiver, environmental conditions and power optimization settings.

Figure 4. Speed Probability Random Weight function used for the 3G/4G network.

To simulate the effect of upload speed variance, the weighted random window functions shown in Figure 4 and Figure 5 are used. During speed measurements it was observed that ADSL speeds peaked during 60% of the time and reduced to a minimum of 40% of the peak value for 20% or less of the overall duration. Similarly, 3G/4G networks maintained peak speeds for 85% or more of the duration and a minimum speed of almost 90% of the peak for 15% or less of the duration. Based on this study we have empirically developed these weighted window functions. There are 100 samples, and one sample is picked at random for each second of simulation; a uniform random distribution function is used to generate the random values.

Figure 5. Speed Probability Random Weight function used for the ADSL network.

Figure 6. Test 1 result.

Simulations at different peak and average upload rates were conducted, and the corresponding latency values observed are tabulated in Table III. The results corresponding to Test 1 to Test 5 are depicted in Figure 6 to Figure 10. Time to Empty is the number of extra seconds required to transmit the buffered 3D video RTP packets. The Time to Empty parameter is considered as latency, since the arrival of packets at the CAS is delayed due to insufficient upload speed.
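The authors state that the simulation was implemented in GNU/Octave; the fragment below is an illustrative Python re-sketch of the speed model only, under the stated assumptions (a 100-sample weight window, one uniform random draw per simulated second). The weight values are placeholders, not the empirically derived functions of Figures 4 and 5.

```python
import random

def make_weight_window(peak_fraction_of_time, low_weight):
    """Build a 100-sample Speed Probability Random Weight window:
    a fraction of samples at weight 1.0 (peak) and the rest at a lower weight."""
    n_peak = int(round(100 * peak_fraction_of_time))
    return [1.0] * n_peak + [low_weight] * (100 - n_peak)

def simulate_speed(peak_kbps, window, seconds=720):
    """Draw one weighted speed sample per simulated second."""
    return [peak_kbps * random.choice(window) for _ in range(seconds)]

# Placeholder windows loosely following the measured behaviour described above:
# ADSL near peak ~60% of the time, 3G/4G near peak ~85% of the time.
adsl_window = make_weight_window(0.60, 0.40)
g4_window   = make_weight_window(0.85, 0.90)

adsl = simulate_speed(420, adsl_window)    # Test 5 peak ADSL upload rate
g4   = simulate_speed(1360, g4_window)     # Test 5 peak 3G/4G upload rate
consolidated = [a + b for a, b in zip(adsl, g4)]
print("mean consolidated upload (kbps):", sum(consolidated) / len(consolidated))
```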


TABLE III. SIMULATION TEST RESULTS OF RTP PACKET BUFFERING AND TIME TO EMPTY

Test   Peak ADSL     Peak 3G/4G    Mean ADSL     Mean 3G/4G    Consolidated Mean   Balance in      Time to Empty or
       Upload Rate   Upload Rate   Upload Rate   Upload Rate   Upload Rate         Buffer          Latency
       (kbps)        (kbps)        (kbps)        (kbps)        (kbps)              (kb)            (seconds)
1      1050          2500          950           2132          3082                -7889.5         -3
2      1050          2500          954           2127          3082                -7624           -3
3      1050          2500          958           2122          3080                -6102           -2
4      800           2700          729           2345          3074                -2070           -1
5      420           1360          381           1153          1535                1.10638e+006    720

Figure 7. Test 2 result.

In each of the specified tests, the peak ADSL and GSM upload speeds tabulated in Table III are multiplied by the corresponding Speed Probability Random Weight function to simulate the effect of upload speed jitter. This effect is simulated every second, and the resulting speed is maintained until the next computation. The sub-graphs named "Bitrate fluctuations in kbps" represent the simulated jitter effect; this applies to all figures from Figure 6 to Figure 10. At the end of each iteration of the simulation, the resulting mean upload speed is computed and tabulated.

The third sub-graph, named "Buffer size in kilobits", represents the variation in enqueueing and dequeueing of RTP packets for each test under consideration. Figures 6 to 9 represent the simulation conditions of Tests 1 to 4 as listed in Table III. It can be observed that in Tests 1 to 4 the consolidated mean upload speed either matches or exceeds the required upload speed, which in our case is 3072 kbps for 480p HQ stereoscopic video. Since the available upload speed in Tests 1 to 4 suffices for the simulation requirements, the buffer is always in an underflow condition and the Time to Empty parameter is computed and tabulated as a negative value. This means that in reality there is no latency in transmission.

Figure 8. Test 3 result.

In comparison, the Test 5 result shown in Figure 10 is simulated using practically measured upload speeds of the networks/ISPs under consideration. The consolidated mean upload speed is always less than the requirement of 3072 kbps, the buffer is enqueued almost linearly, and the Time to Empty is computed to be 720 seconds. This signifies a high amount of latency and shows that RTP packets are delayed by almost 12 minutes. These simulation results justify the need for multiple network connectivity to transmit 3D video with little or no latency.

Figure 11 shows 1000 iterations of simulation tests to measure the maximum and minimum latency to clear the transmitter buffer. It was recorded that these values are 808 and 701 seconds respectively. The maximum upload speeds of ADSL and 3G/4G considered during these simulation runs are those recorded in Test 5, whose values are 420 kbps and 1360 kbps respectively.
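The buffer and Time-to-Empty bookkeeping described above can be written down in a few lines. The following is a hedged illustration (not the authors' Octave code): every second the encoder produces the required bit rate, the network drains whatever the sampled upload speed allows, and the latency is the extra time needed to flush whatever backlog remains after the 720-second stream.

```python
def time_to_empty(upload_kbps_per_second, required_kbps=3072):
    """Track the RTP buffer balance (kilobits) over the streamed duration and
    return (final_balance_kb, latency_seconds). A negative balance means the
    consolidated link drained faster than the encoder produced (underflow),
    mirroring the negative Time to Empty entries in Table III."""
    balance_kb = 0.0
    for upload in upload_kbps_per_second:
        balance_kb += required_kbps - upload      # produced minus transmitted
    mean_upload = sum(upload_kbps_per_second) / len(upload_kbps_per_second)
    return balance_kb, balance_kb / mean_upload

# Example with a constant consolidated upload of 1535 kbps (Test 5 mean) for 720 s:
# backlog of roughly 1.1e6 kb and about 720 extra seconds, as in Table III.
print(time_to_empty([1535] * 720))
```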


Figure 9. Test 4 result.

Figure 10. Test 5 result.

Figure 11. Time to Empty buffer (latency) variation over 1000 simulation iterations.

Based on the simulation results, we propose equation (3) to compute the number of similar ISP connections required to fulfil the upload speed requirement. The combination of each ISP's Peak Upload Speed and its Speed Probability Random Weight function is considered as a set:

ISP = {PUS, SPRW}                                            ….(1)

where
PUS = Peak Upload Speed in kbps
SPRW = Speed Probability Random Weight per second.

The various available ISPs can be grouped as a superset of (1):

AISP = {ISP1, ISP2, ISP3, …, ISPn}                           ….(2)

where AISP is the set of available ISPs.

Considering the above, the number of similar ISPs required is the required 3D video upload speed divided by the sum, over the available ISPs, of the product of each ISP's Peak Upload Speed and its Speed Probability Random Weight:

Similar ISPs required = R3DV / Σ (PUSi × SPRWi),  i = 1 … n   ….(3)

where
R3DV is the required 3D video upload speed in kbps,
PUSi is the peak upload speed of the i-th ISP,
SPRWi is the Speed Probability Random Weight of the i-th ISP, and
n is the number of ISPs available.

If the resulting value is greater than 1, there is a deficit of upload speed and one or more ISPs have to be added. If the resulting value is less than 1, the available ISPs provide more than the required upload speed.
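A small numerical illustration of equation (3), under the assumption (made here only for brevity) that each ISP's weight function is summarized by its mean value:

```python
def similar_isps_required(r3dv_kbps, isps):
    """Equation (3): required 3D upload speed divided by the summed
    effective (peak x mean-weight) upload capacity of the available ISPs."""
    effective_capacity = sum(pus * sprw for pus, sprw in isps)
    return r3dv_kbps / effective_capacity

# Two ISPs: ADSL (peak 420 kbps) and 3G/4G (peak 1360 kbps), with
# illustrative mean weights of 0.88 and 0.985 respectively.
ratio = similar_isps_required(3072, [(420, 0.88), (1360, 0.985)])
print(round(ratio, 2))   # > 1, so more (or faster) connections are needed
```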
VI. CONCLUSION

This work proposes a unique solution for routing a stereoscopic 3D video stream encoded using the H.264/MVC algorithm over multiple network paths using multiple RTP sessions. The need for such a solution is justified through simulation and practical measurements. During trials it was observed that the upload speeds of at least three ISPs, measured individually, were not sufficient to transmit high-quality stereoscopic 3D video. The consolidated peak upload speed of either an individual ISP or multiple ISPs has to be at least 15% higher than the encoded video bit rate. Hence, two or more Internet connections are foreseen to successfully transmit 3D video without any latency or jitter.

REFERENCES

[1] Vishwa Kiran S, Ramesh Prasad, Thriveni J, Venugopal K R and L M Patnaik, "Mobile Cloud Computing for Medical Applications", 11th IEEE India Conference on Emerging Trends and Innovation (INDICON 2014), December 2014, Pune, India.
[2] Martin, Jim, Rahul Amin, Ahmed Eltawil, and Amr Hussien, "Limitations of 4G wireless systems", Proceedings of Virginia Tech Wireless Symposium (Blacksburg, VA), 2011.
[3] Hewage, Chaminda T. E. R., Maria G. Martini and Nabeel Khan, "3D Medical Video Transmission over 4G Networks", Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies, ACM, 2011.
[4] Karim, H. A., Hewage, C. T., Worrall, Kondoz, A. M., "Scalable Multiple Description Video Coding for Stereoscopic 3D", IEEE Transactions on Consumer Electronics, Vol. 54, No. 2, pp. 745-752, 2008.


[5] Tseng, Belle L. and Dimitris Anastassiou, "Compatible Video Coding of Stereoscopic Sequences using MPEG-2's Scalability and Interlaced Structure", International Workshop on HDTV, Vol. 94, 1994.
[6] Hewage et al., "Comparison of Stereo Video Coding Support in MPEG-4 MAC, H.264/AVC and H.264/SVC", Proc. of IET Visual Information Engineering (VIE07), 2007.
[7] Merkle, H. Brust, K. Dix, K. Muller and Wiegand, "Stereo Video Compression for Mobile 3D Services", IEEE 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2009.
[8] Balasko, Hrvoje. "Comparison of Compression Algorithms for High
Definition and Super High Definition Video Signals", 2009
[9] Vetro A, Tourapis, A M, Muller and Chen, "3D-TV Content Storage and
Transmission", IEEE Transactions on Broadcasting, Vol. 57, No.2, pp.
384-394, 2011.
[10] Merkle, Wang Y, Muller, Smolic and Wiegand, "Video Plus Depth
Compression for Mobile 3D Services", Proc. IEEE 3DTV Conference,
Germany, 2009.
[11] Jin, L., Boev, A., Gotchev, A., & Egiazarian, “3D-DCT based Multi-
Scale Full Reference Quality Metric for Stereoscopic Video”, Technical
Article.
[12] Zund F, Pritch Y, Sorkine-Hornung A, Mangold S, and Gross T,
"Content-Aware Compression using Saliency-Driven Image
Retargeting", IEEE International Conference on Image Processing , pp.
1845-1849, 2013.
[13] Schwarz H, Marpe D and Wiegand, "Overview of the scalable video
coding extension of the H. 264/AVC standard", IEEE Transactions on
circuits and systems for video technology, Vol. 17, No. 9, pp.1103-1120,
2007.
[14] Stockhammer, Thomas, Miska M. Hannuksela, and Thomas Wiegand.
"H. 264/AVC in Wireless Environments", IEEE transactions on circuits
and systems for video technology, Vol. 13, No. 7, pp. 657-673, 2003
[15] Kovacs, Peter Tamas, et al, "Overview of the Applicability of H.
264/MVC for Real-Time Light-Field Applications", 3DTV-Conference:
The True Vision-Capture, Transmission and Display of 3D Video
(3DTV-CON), 2014.
[16] Mamatha R B, Keshaveni N, "Comparative Study of Video Compression Techniques H.264/AVC", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 4, Issue 11, 2014.
[17] Aman Jassal, "H.265/HEVC Video Transmission over 4G Cellular
Networks", The University of British Columbia, 2016
[18] Wenger, Stephan. "H. 264/avc over ip." IEEE Transactions on Circuits
and Systems for Video Technolog, Vol. 13, No. 7, pp. 645-656, 2003.
[19] Stockhammer, Thomas, Miska M. Hannuksela, and Thomas Wiegand.
"H. 264/AVC in Wireless Environments", ." IEEE Transactions on
Circuits and Systems for Video Technolog, Vol. 13, No. 7, pp. 657-673,
2003
[20] Micallef, Brian W., Carl J. Debono, and Reuben A. Farrugia. "Error
Concealment Techniques for H. 264/MVC Encoded Sequences." IEEE
Proc. of Int. Conf. of Electrotechnical and Computer Science (ERK),
Portoroz, Slovenia, 2010
[21] Kordelas, Athanasios, Tasos Dagiuklas, and Ilias Politis. "On the
performance of H. 264/MVC over lossy IP-based networks." Signal
Processing Conference (EUSIPCO), 2012 Proceedings of the 20th
European. IEEE, 2012.
[22] Micallef, Brian W., and Carl J. Debono. "An Analysis on the Effect of
Transmission Errors in Real-Time H. 264-MVC Bit-Streams", 15th
IEEE Mediterranean Electrotechnical Conference(IEEE MELECON),
2010.
[23] Liu Yanwei et al, "A Novel Rate Control Technique for Multiview
Video Plus Depth based 3D Video Coding", IEEE Transactions on
Broadcasting, Vol. 57, No. 2, pp. 562-571, 2011
[24] Mor Liu Zhao, et al. "Experimental evaluation of H. 264/Multiview
Video Coding over IP Networks", arXiv preprint alg-geom/9705028,
1997.
[25] Seo Kwang-deok, et al., "A practical RTP packetization scheme for SVC video transport over IP networks," ETRI Journal, Vol. 32, No. 2, pp. 281-291, 2010.
[26] Y. K. Wang, R. Even, T. Kristensen, Tandberg, R. Jesup, "RFC-6148: RTP Payload Format for H.264 Video".
[27] Y. K. Wang, T. Schierl, R. Skupin and P. Yue, "RTP Payload Format for MVC Video", retrieved from http://datatracker.ietf.org/drafts/current/
[28] Teruhiko Suzuki, Miska M. Hannuksela, Ying Chen, Shinobu Hattori, "Study Text of Working Draft 5 of ISO/IEC 14496-10:2012/DAM 2 MVC extensions for inclusion of depth maps", 2012.
[29] Wenger, Y. K. Wang, Schierl, "RTP Payload Format for Scalable Video Coding", Internet Engineering Task Force, 2011.


Speaker Dependent Speech Feature Based


Performance Evaluation of Emotional
Speech for Indian Native Language.
Shiva Prasad K M
Research Scholar, Electronics Engg,
Jain University, Bengaluru, India
kmsp1970@gmail.com

G. N. KodandaRamaiah
Professor, HOD and Dean R & D,
Dept of ECE, K.E.C., Kuppam, India

M. B. Manjunatha
Principal,
A.I.T., Tumkur, India

ABSTRACT: Speech is the most important bio-signal that human beings can produce and perceive. Speaking is the process of converting discrete phonemes into a continuous acoustic signal. Speech is the most natural and desirable method of human communication, serving as a source of information that conveys intentions and emotions. It is a time-continuous signal containing information about the message, speaker attitude, language accent, dialect and emotion. Many psychological studies suggest that only 10% of human life is completely unemotional; the rest involves emotion. This paper examines various speech features used for the analysis of speaker utterances of emotional speech, analyzed using speech analysis software (PRAAT) and MATLAB. The emotional analysis helps in the study of what emotions are and how speech feature properties change for different emotional states of a human being, with different languages as subject.

Keywords: Emotion, Human Communication, Speech Processing, Speech Features

I. INTRODUCTION:

Speech analysis helps in phonetic description, linguistic variability, Text-To-Speech development (TTS), Automatic Speech Recognition (ASR), speaker identification, and speech pathology and rehabilitation. Emotional speech research (analysis/recognition/synthesis) is a multidisciplinary field with large contributions from psychology, acoustics, linguistics, medicine and computer science engineering.

During previous years, many researchers have worked on the recognition of nonverbal information, and have especially focused on emotion recognition. Many kinds of physiological characteristics are used to extract emotions, such as voice, facial expressions, hand gestures, body movements, heart beat and blood pressure. Among these modalities, facial expressions and speech are known to be more effective for the expression of emotions. Speech Emotion Recognition (SER) finds several applications such as call centre management, commercial products, life-support systems, virtual guides, customer service, lie detectors, conference room research, emotional speech synthesis, art, entertainment and others. The main goal of emotional speech analysis/recognition is to identify the different basic emotional states (primary emotions) and to categorize them as positive (non-negative) and negative emotions. [3]

Emotional speech analysis is carried out at various levels, such as segmental, sub-segmental and supra-segmental. The performance of emotion recognition mainly depends on the speaker and the phonetic information. Emotional features are broadly classified as spectral features and prosodic features. Most emotional speech can be attributed to the larger segments, known as supra-segmental or prosodic features. The prosodic features are the rhythmic, intonational and speaking-rate properties of speech; prosodic features are mainly estimated over the uttered speech sentence in the form of long-term statistics. [5][6]

Prosody features are resultants of vocal fold activity and the spectrum peaks, which are influenced by the vocal tract. The prosodic features give the utterance-level characteristics and include the mean and variance of fundamental frequency and energy. The micro-prosody features of speech are jitter, shimmer, Harmonics-to-Noise Ratio (HNR) and Noise-to-Harmonic Ratio (NHR). The performance evaluation of the emotional speech considered is carried out on this basis.

Indian languages:

Most of the Indian languages (except a few such as English and Urdu) share a common phonetic base, i.e., they share a common set of speech sounds, and in addition possess a few more sounds individually. This common phonetic base consists of about 50 phones, including 15 vowels and 35 consonants. While all these languages share a common phonetic base, some of the languages like Hindi, Marathi and Nepali also share a common script called Devanagari. Languages such as Gujarati, Panjabi, Oriya, Bengali, Assamese, Telugu, Kannada and Tamil have their own Brahmic scripts.

The property that separates these languages at the speech level can be attributed to the phonotactics of each language, rather than the scripts and speech sounds. Phonotactics are the permissible combinations of phones that can co-occur in a language. This implies that the distribution of syllables in each language is different. Prosody (intonation, duration and prominence) associated with a syllable is another property that separates Indian languages significantly. [3][10]

II. EMOTION:

As per the American psychiatrist Allan Hobson, emotion is a mental and physiological state associated with a large variety of feelings, thoughts and behaviour. In many cases it is not only what a person says that matters, but also how it is expressed. Emotions are signals between animals of the same species that communicate one's brain state to another. No human being is non-emotional; we speak emotionally, communicate emotionally and perceive others' emotions as well. Emotion is a reaction to a situation; it expresses the personal meaning of an individual's experience. [4]

Generally, speech is modulated when the speaker's emotion changes from neutral to other states. Emotion is expressed in two ways, namely by changing the facial expression and by changing the intonation of the voice. Emotion is important in human communication since it provides feedback information in most situations. Emotional analysis is one of the challenging problems in speech processing because of the semantic gap between the low-level speech signal and the highly semantic information.

A. Emotional types

Generally, there exist many types of emotions in real life, namely hot anger, cold anger, panic fear, anxiety, despair, sadness, happiness, boredom, shame, pride, disgust and contempt. These emotions fall into two categories, namely POSITIVE emotions (happy) and NEGATIVE emotions (anger, fear, sadness, disgust, boredom); the neutral emotion is the reference emotion for all the others stated above. [2]

B. Emotional speech corpora or corpus:

The main risk in emotional speech analysis is collecting a suitable database or speech corpus for study, due to the lack of availability of databases. The main attributes which affect the corpus are input device, input environment, number of speakers, speakers' age, speaking style, speech mode, language and purpose. For the analysis of emotional speech we considered the IITKGP database, popularly known as SESC (Simulated Emotional Speech Corpus). This corpus is the first for an Indian language (Telugu) designed and developed by the Indian Institute of Technology, Kharagpur. The speech database is sufficiently large to analyze the emotions in view of speaker, gender, text and session variability. [1]

III. SPEECH FEATURES:

Emotional features are broadly classified into spectral features and prosodic features. Most emotional speech can be attributed to the larger segments known as supra-segmental or prosodic features. The different speech features contribute in different ways to the identification or analysis of human emotional speech. Other features which influence emotional speech analysis are frequency characteristics, time-related features, durational and pause-related features, voice quality features/parameters, Zipf features, and hybrid pitch features. [7]

Since most research on emotion analysis/recognition is based on feature- and classification-based approaches, feature selection determines the features which are most beneficial for analysis/recognition, because most feature classifiers are negatively influenced by irrelevant features. Feature-based approaches concentrate on analysing speech signals and effectively estimating feature parameters for representing human emotional states, while classification-based approaches mainly focus on designing a classifier to determine distinctive boundaries between different emotional states of human speech. [8][17]

The prosodic features like pitch, intensity (loudness), duration, speaking rate and voice quality are more important for identifying different types of emotions. In particular, pitch and intensity seem to be correlated with the amount of energy required to express a certain emotion.

The prosodic features or parameters considered are the average duration of all utterances for a specific emotion, average pitch, standard deviation of pitch, average energy and average intensity. Prosody features are resultants of vocal fold activity and the spectrum peaks, which are influenced by the vocal tract.
The prosodic features give the utterance-level characteristics; they include the mean and variance of the fundamental frequency and energy. The micro-prosody features of speech are jitter, shimmer, Harmonics-to-Noise Ratio (HNR) and Noise-to-Harmonic Ratio (NHR). [9][16]

Formants: these are the frequency components of human speech, also known as resonant peaks.

Pitch: this is the most sensitive factor responding to the auditory sense, also called the fundamental frequency. It is related to the time (period) interval between two successive peaks of the speaker-specific speech. In speech processing, pitch helps to identify the gender of the speaker (a female speaker's pitch is comparatively higher than a male speaker's). Pitch is an important characteristic of sound or acoustics: it provides information about the sound's source, it gives additional meaning to words (e.g., a group of words can be interpreted as a question depending on whether the pitch is rising or not), and it helps to identify the emotional state of the speaker. Pitch depends mainly on the frequency content of the sound stimulus, but it also depends on the sound pressure and the waveform of the stimulus. Pitch is measured using the autocorrelation function. [12]

Pitch range: the pitch range refers to the dynamic range of pitch values over the uttered sentence. It measures the frequency spread between the maximum and minimum frequency of an utterance. If the pitch range value is high, the pitch value is shifted away from the mean pitch value, thereby increasing the dynamic range of pitch values. [15]

Pitch contour: this is the graph of pitch variations over the uttered speech. The pitch contour of the speech can be described as rising, falling or straight; the contours are characterised by a gradient. The pitch contour can be extracted from the voiced portions of the uttered speech. [11]

Shimmer: a frequent back-and-forth change in amplitude (from soft to louder) in the voice. Shimmer percent provides an evaluation of the variability of the peak-to-peak amplitude within the analyzed voice sample. It represents the relative period-to-period (very short-term) variability of the peak-to-peak amplitude. [14]

Jitter: this is defined as varying pitch in the voice, which causes a rough sound (compare with shimmer, which describes varying loudness in the voice). Jitter is the undesired deviation from the true periodicity of an assumed periodic signal. Jitter percent provides an evaluation of the variability of the pitch period within the analyzed voice sample. It represents the relative period-to-period (very short-term) variability. [13]

Harmonic-to-Noise Ratio (HNR): HNR represents the degree of acoustic periodicity, also called the harmonicity object. Harmonicity is expressed in dB: if 99% of the energy of the signal is in the periodic part and 1% is noise, the HNR is 10*log10(99/1) = 20 dB. An HNR of 0 dB means that there is equal energy in the harmonics and in the noise.

IV. RESULTS AND DISCUSSION:

For the purpose of emotional speech analysis we have considered four randomly selected samples with the basic emotions (neutral, happy, fear and anger) from the IIT-KGP SESC database [1]:

Sample 1: "Annie dhanamuloo vidya danam vinna"
Sample 2: "Mana rashtra rajadhani Hyderabad"
Sample 3: "Telupu rangu shanthiki chahanamu"
Sample 4: "Ganga jalam pavitramainadhi"
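Section III notes that pitch is measured via the autocorrelation function, and Tables 1–4 report the mean pitch per utterance. The fragment below is an illustrative NumPy sketch of a frame-wise autocorrelation pitch estimate and the derived mean pitch and pitch range; the authors used PRAAT and MATLAB, so this is only a stand-in, and the WAV file name is a placeholder.

```python
import numpy as np
from scipy.io import wavfile

def frame_pitch(frame, fs, fmin=75.0, fmax=500.0):
    """Estimate F0 of one frame from the peak of its autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    if hi >= len(ac):
        return None
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag if ac[lag] > 0 else None

def pitch_statistics(path, frame_ms=40, hop_ms=10):
    fs, x = wavfile.read(path)              # 'path' is a placeholder file name
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)                  # mix down to mono
    flen, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    f0 = [frame_pitch(x[i:i + flen], fs)
          for i in range(0, len(x) - flen, hop)]
    f0 = np.array([v for v in f0 if v is not None])
    return f0.mean(), f0.max() - f0.min()   # mean pitch and pitch range (Hz)

mean_f0, f0_range = pitch_statistics("sample4_anger.wav")
print(f"mean pitch = {mean_f0:.1f} Hz, pitch range = {f0_range:.1f} Hz")
```

This crude estimator ignores voiced/unvoiced decisions and octave errors, which PRAAT handles far more carefully; it is meant only to make the autocorrelation idea concrete.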

Fig 1. Waveform of sample 4 for the emotions considered: (a) neutral, (b) happy, (c) fear, (d) anger.

Fig 2. Pitch contours of sample 4 for the emotions considered: (a) neutral, (b) happy, (c) fear, (d) anger.



Fig 3. Mean formant frequency (Hz) variations of F1–F4 for the emotions considered: (a) sample 1, (b) sample 2, (c) sample 3, (d) sample 4.

Fig 2 and Fig 3 provide the graphical representation of the variations in parameters such as pitch contour and mean formant frequency for the various emotional speech samples considered for the analysis.

TABLE 1. SAMPLE 1

Emotion   Time     Mean formant frequency (Hz)           Pitch       Jitter                  Shimmer
          (sec)    F1       F2       F3       F4         mean (Hz)   local    local (abs)    local (%)   local (dB)
anger     1.3053   946.06   2508.7   3420.34  4848.95    220.21      2.58     118.59         10.528      0.956
fear      1.8592   844.45   2380.8   3650.25  5037.43    200.5       2.45     122.87         10.188      1.021
happy     1.4604   860.19   2501.7   3369.91  4472.48    169.68      3.079    182.3          12.047      1.141
neutral   1.6038   782.18   2435.5   3550.51  4985.1     170.78      2.366    138.97         9.688       0.962


TABLE 2. SAMPLE 2

Emotion   Time     Mean formant frequency (Hz)           Pitch       Jitter                  Shimmer
          (sec)    F1        F2        F3       F4       mean (Hz)   local    local (abs)    local (%)   local (dB)
anger     1.1199   1075.76   2425.98   3692.17  4438.15  225.89      1.706    76.025         13.036      1.241
fear      1.6757   748.31    2302.28   3557.65  4785.93  191.28      2.617    137.61         12.467      1.101
happy     1.2761   940.01    2278.99   3430.92  4397.95  213.18      3.221    151.66         13.466      1.318
neutral   1.3099   1047.47   2505.19   3635.35  4426.01  159.19      2.933    184.79         13.373      1.246

TABLE 3. SAMPLE 3

Emotion   Time     Mean formant frequency (Hz)           Pitch       Jitter                  Shimmer
          (sec)    F1        F2        F3       F4       mean (Hz)   local    local (abs)    local (%)   local (dB)
anger     1.025    1099.01   2496.45   3643.21  4918.11  228.22      3.563    159.267        12.067      1.146
fear      1.323    926.37    2529.57   3731.45  5034.45  185.91      3.115    168.265        13.874      1.316
happy     1.226    965.58    2549.16   3709.54  4488.89  190.68      4.076    215.661        13.497      1.281
neutral   1.175    1025.16   2579.49   3671.22  4840.31  173.59      3.847    223.32         12.537      1.208

TABLE 4. SAMPLE 4

Emotion   Time     Mean formant frequency (Hz)           Pitch       Jitter                  Shimmer
          (sec)    F1        F2        F3       F4       mean (Hz)   local    local (abs)    local (%)   local (dB)
anger     1.026    913.22    2335.64   3484.58  5050.73  216.9       2.938    136.35         11.939      1.156
fear      1.3338   754.53    2420.34   3444.56  4911     182.6       2.769    152.83         14.191      1.196
happy     1.1206   1035.64   2446.15   3521.6   3972.75  160.54      3.417    215.215        13.25       1.21
neutral   1.1178   794.95    2374.77   3421.24  4855.6   164         3.205    197.19         14.43       1.314
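The jitter and shimmer percentages in Tables 1–4 follow the period-to-period variability definitions given in Section III. A minimal sketch of those definitions, assuming the pitch periods (in seconds) and the corresponding peak-to-peak amplitudes of an utterance have already been extracted:

```python
import numpy as np

def jitter_local_percent(periods):
    """Mean absolute difference between consecutive pitch periods,
    relative to the mean period (per cent)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local_percent(amplitudes):
    """Mean absolute difference between consecutive peak-to-peak amplitudes,
    relative to the mean amplitude (per cent)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Toy values only (not taken from the SESC recordings):
periods = [0.0045, 0.0047, 0.0044, 0.0046, 0.0045]
amps    = [0.82, 0.75, 0.80, 0.71, 0.78]
print(jitter_local_percent(periods), shimmer_local_percent(amps))
```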

Fig 4. Transition (utterance) time variation, in seconds, of samples 1–4 for the emotions considered (anger, fear, happy, neutral).


Fig 5. Mean pitch (Hz) variation of samples 1–4 for the emotions considered (anger, fear, happy, neutral).

TABLE 5. VARIATIONS OF ACOUSTIC FEATURES FOR THE DIFFERENT EMOTIONAL STATES CONSIDERED

Emotion   Pitch mean   Pitch range   Pitch variance   Pitch contour   Intensity mean   Intensity range   Transmission duration   Voice quality
anger     high         broader       large            declines        increases        high              longer                  Breathy / moderate timbre
fear      increases    high          very high        inclines        increases        high              less                    Danger / afraid
happy     high         high          high             raising         high             high              faster                  Pleasure / anxious
Neutral   normal       normal        normal           normal          normal           normal            normal                  Calm & weak

Applications of emotional speech analysis/recognition:

The analysis of human emotional expressions helps in different environments, where the outcome is used to alter the system's reactions. Emotional speech recognition can recognise a speaker's emotion from the speaker's speech samples. Emotional speech analysis/recognition is necessary in human-computer interaction and human-robot interfaces, to design intelligent and interactive voice-response systems, and to develop intelligent spoken tutoring systems that detect and adapt to students' emotions for a high-performance teaching-learning process. Emotion analysis/recognition from speech can also be used as a tool for assisting virtual-reality cures of phobias.

Table 5 shows the summary of the important acoustic observations for the various emotional states considered.

V. CONCLUSION:

The analysis/recognition of emotion is not a straightforward procedure. Emotional analysis is made on an individual basis for each speaker and is essential for all speakers simultaneously. The acoustic-feature-based realizations of emotions are language dependent and speaker dependent. Emotional speech analysis/detection requires the creation of a reliable database and the selection of suitable features and classifiers for quick and accurate analysis/detection. It is very difficult to analyse the qualitative differentiation of emotions from vocal expressions. Neutral is an emotional state where the lack of emotion is noticeable; it is moderately negative, calm and weak. Anger has the highest mean value and variance of pitch and a high mean value of energy; it is also characterised by low values of glottal velocity, and the pitch contours of all syllables are falling. Fear is characterised by an increase in both mean pitch and pitch range, an increase in energy and also in articulation rate.
Happy is characterised by a tense voice with a faster speech rate; it is moderately positive, with a high mean pitch and a broad pitch range. It is also characterised by high glottal velocity, and the pitch contours are rising.

REFERENCES

[1] S. G. Koolagudi, S. Maity, V. A. Kumar, S. Chakrabarti, and K. S. Rao, "IITKGP-SESC: Speech Database for Emotion Analysis", Communications in Computer and Information Science, JIIT University, Noida, India: Springer, ISSN 1865-0929, August 17-19, 2009.
[2] T. L. Nwe, S. W. Foo, and L. C. D. Silva, "Speech emotion recognition using hidden Markov models," Speech Communication, vol. 41, pp. 603–623, Nov. 2003.
[3] Shiva Prasad K. M., Anil Kumar C., M. B. Manjunatha, KodandaRamaiah G. N., "Various front end tools for digital speech processing", in IEEE, pp. 905-911, 2015, Print ISBN: 978-9-3805-4415-1.
[4] S. McGilloway, R. Cowie, E. Douglas-Cowie, S. Gielen, M. Westerdijk, and S. Stroeve, "Approaching automatic recognition of emotion from voice: A rough benchmark," (Belfast), 2000.
[5] F. Dellaert, T. Polzin, and A. Waibel, "Recognising emotions in speech," ICSLP 96, Oct. 1996.
[6] D. Ververidis, C. Kotropoulos, and I. Pitas, "Automatic emotional speech classification," pp. I593–I596, ICASSP 2004, IEEE, 2004.
[7] A. Iida, N. Campbell, F. Higuchi, and M. Yasumura, "A corpus based speech synthesis system with emotion," Speech Communication, vol. 40, pp. 161–187, Apr. 2003.
[8] C. Gobl and A. Chasaide, "The role of voice quality in communicating emotion, mood and attitude," Speech Communication, vol. 40, pp. 189–212, 2003.
[9] K. S. Rao, R. Reddy, S. Maity, and S. G. Koolagudi, "Characterization of emotions using the dynamics of prosodic features," in International Conference on Speech Prosody, (Chicago, USA), May 2010.
[10] K. S. Rao, S. R. M. Prasanna, and T. V. Sagar, "Emotion recognition using multilevel prosodic information," in Workshop on Image and Signal Processing (WISP-2007), (Guwahati, India), IIT Guwahati, December 2007.
[11] S. R. M. Prasanna, B. V. S. Reddy, and P. Krishnamoorthy, "Vowel onset point detection using source, spectral peaks, and modulation spectrum energies," IEEE Trans. Audio, Speech, and Language Processing, vol. 17, pp. 556–565, May 2009.
[12] L. R. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Englewood Cliffs, New Jersey: Prentice-Hall, 1993.
[13] T. Pao, Y. Chen, J. Yeh, Y. Chang, "Emotion recognition and evaluation of mandarin speech using weighted D-KNN classification", Int. J. Innov. Comput. Info. Control, 4, 1695-1709 (2008).
[14] H. Altun and G. Polat, "Boosting selection of speech related features to improve performance of multi-class SVMs in emotion detection", Expert Syst. Appl., 36, 8197-8203 (2009).
[15] M. L. Yang, "Emotion recognition from speech signal using new harmony feature", Signal Process., 90, 1415-1423 (2010).
[16] L. He, M. Lech, N. C. Maddage, N. B. Allen, "Study of empirical mode decomposition and spectral analysis for stress and emotion classification in natural speech", Biomed. Signal Process. Control, 6, 139-146 (2011).
[17] B. Schuller, G. Rigoll, M. Lang, "Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture", in Proceedings of ICASSP, 1, 397-401 (2004).

Authors profile

Mr. Shiva Prasad K M, FIE, METE, LMISTE, is pursuing a Doctorate from Jain University, Bangalore, in the emotional speech analysis stream, with a total experience of 18 years. He obtained his B.E. from Bangalore University and M.Tech from VTU, Belgaum. He has served various institutions at different levels of responsibility, has guided more than 40 UG projects, and has more than 40 national and 25 international publications. He serves as a G.C. member for IETE, Bangalore region.

Dr. G. N. KodandaRamaiah is currently working as Professor & HOD in the Dept. of ECE, Kuppam Engineering College, Kuppam, Andhra Pradesh, with a total teaching experience of 22 years. He obtained his Doctorate from JNTU Anantapur. He actively participates in research activities, owning 8 patents, has guided more than 50 UG projects and 30 PG projects, is currently guiding 5 research scholars, and has more than 50 national and 60 international publications.

Dr. M. B. Manjunatha is currently working as Principal at Akshaya Institute of Technology, Tumkuru, with a total teaching experience of 21 years. He obtained his Doctorate in speech processing from Magadh University. He has served as placement officer, HOD and Vice Principal, and has acted as institutional in-charge for NAAC, ISO, AICTE and NBA at various institutions. He has successfully guided more than 45 UG projects and 30 PG projects, is currently guiding 5 research scholars, and has 50 national and 30 international publications.


Formant Frequency Based Analysis of English


vowels for various Indian Speakers at different
conditions using LPC & default AR modeling
Anil Kumar C
Research Scholar, Electronics Engg,
Jain University, Bengaluru, India
canilkumarc22@gmail.com

M. B. Manjunatha
Principal,
A.I.T., Tumkur, India

G. N. KodandaRamaiah
Professor, HOD and Dean R & D,
Dept of ECE, K.E.C., Kuppam, India

ABSTRACT: This paper presents formant frequency estimation for English vowels. Auto-Regressive (AR) model and LPC (Linear Predictive Coding) based estimation of formant frequencies has been carried out for different recordings. The authors propose to use this autoregressive model and LPC for utterances made by speakers of south India on different occasions and in various recording conditions. Speech samples were recorded after consuming ice-cold water, with an instant check of its effect and again later after 5 minutes. The formant frequencies obtained are compared with those recorded under normal conditions. Vowel utterances made by the speakers have been recorded 20 times, giving variations in the results for the different conditions on an intra-speaker basis. The authors propose to investigate these variations and also the variability as a parameter for the speech recognition process. The entire process has been done using MATLAB.

Keywords: Auto-regressive model, Formant frequency, LPC, Signal processing.

I. Introduction.

Speech is the most accepted and convenient means of communication, which is well known and recognized. The narrow view of speech is that it is just a sequence of sounds punctuated by abrupt changes happening from one to another, or signals that are ignored and fall into oblivion soon after utterance. It is much more than that: it is a unique signal that conveys information of linguistic and non-linguistic type. Such information conveyed by speech provides knowledge at multiple levels; speech signals typify information-bearing signals that arise as a function of a single independent variable such as time. Speech is not just an information signal; it is something beyond that. It is actually a complex wave, an acoustic output arising as a result of the speaker's effort. [1]

Speech analysis is synonymous with feature extraction of speech. Speech sounds are sensations of air pressure variations produced by exhaled air, later modulated and shaped by the vibration of the glottal cords and the resonance of the vocal tract as air is pushed out through the lips and the nose. Speech is a signal with abundant information, exploiting frequency-modulated, amplitude-modulated and time-modulated carriers (for example, resonance movements, harmonics, noise, pitch, power and duration). The objective is to convey information on words, speaker identity, accent, speech style and emotion. This entire gamut of information is basically conveyed largely within the traditional telephone bandwidth of 4 kHz; speech energy within 4 kHz reflects audio quality and sensation. [2]

II. Speech Production.

Speech is a means of communication. It has the distinct feature of being a signal that carries a message or information; it is an acoustic waveform that can carry temporal information from the speaker to the listener. Efficiency underlies the acoustic transmission and reception of any speech, but this is applicable only for transmission over a short distance. There is a spread of radiated acoustic energy at the frequencies used by the vocal tract and the ear, but its intensity reduces rapidly. [3][13]

Even on occasions when the source is able to produce a substantial volume of acoustic power, only a fraction of it is supported by the medium without any distortion, while the rest of it is squandered in the molecular disturbance of air and dust particles and lost to aero-molecular viscosity. The ambient acoustic noise places a restriction or limit on the sensitivity of the ear; physiological noises play the same role in and around the ear drum. Voluntary, formalized motions of the respiratory and masticatory apparatus have speech as their acoustic end product. The closed loop has the ability to develop, control, maintain and correct it: acoustic feedback through the hearing mechanism and kinesthetic feedback of the speech musculature both have a role here. The central nervous system organizes and coordinates information from the senses, which is then used for directing these functions as well as for delivering the desired, linguistically dependent, vocal articulator motion and acoustic speech. [4][7]


A. Problem Statement.
The vowel speech database considered in this paper covers all five vowels of English under three different conditions. The database was created by us in a normal recording-room environment, and the vowels (/a/, /e/, /i/, /o/, /u/) are analysed with respect to formant frequency variability.

B. The Speech Communication Pathway.

Fig 1. Speech communication pathway.

Figure 1 gives a simplified view of the speech communication route starting from the speaker and reaching the listener. An idea or a concept originates at the linguistic level of communication, in the speaker's mind. It is then carried as a stimulated longitudinal acoustic wave propagating in air, starting from the speaker and terminating at the listener. [8]

The speech produced is random in nature, generated by the excitation of many organs in the human body, and consists of complex and simple resonant frequencies called formants, which define the produced sound. Measuring formant frequencies therefore gives an idea of the utterances in speech and also of the voice quality. [5]

Pressure changes within the vocal tract are caused by the vocal tract and vocal cord movement. These are seen most specifically at the lips, initiating the sound wave that propagates through space as a sequence of compressions and rarefactions of air and dust molecules. As a result, temporal pressure variations are noticed at the listener's exterior ear, which is funnel shaped and collects this acoustic energy efficiently; it then carries the vibration of the medium to the final vibration sensor, the ear drum, set inside the ear. [2][9]

Variation in pressure experienced at the speaker's lips causes sound. This sound propagates, with channel losses, resulting in pressure variations at the listener's outer ear. The eventual vibrations of the ear-drum induce electric signals which move along the sensory nerves to the brain. To the extent of the listener's perception, the brain, being sensitive to language, decodes these electrical signals and later filters them into a recognized pattern which becomes known as a language: speech perception and hearing. [12]

C. Formant frequency:
Excitation of a fixed vocal tract produces vowels with quasi-periodic pulses of air, forced through the vibrating vocal cords. A quasi-periodic puff of air flow is the source, acting through vibrating vocal folds at a definite fundamental frequency. The term "quasi" is used in comparison with perfect periodicity.

D. Linear Predictive Coding (LPC)
Linear predictive analysis is among the most powerful and widely used speech analysis techniques. The method has emerged as a predominant technique for estimating the basic speech parameters, e.g., pitch, formants, spectra and the vocal tract area function. What renders this method highly important is its ability to provide accurate estimates of the speech parameters and its relative speed of computation. [6] The ability to approximate a speech sample as a linear combination of past speech samples is the basic concept underlying linear prediction analysis. A unique set of predictor coefficients is determined by minimizing the sum of the squared differences (over a finite interval) between the actual speech samples and the linearly predicted ones. The predictor coefficients are the weighting coefficients used in the linear combination. [4][6]
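To make the LPC analysis above concrete, the following MATLAB sketch estimates formant frequencies for one voiced frame by factoring the LPC polynomial and converting the pole angles to frequencies. It is only an illustrative sketch, not the authors' code: the file name, the single 30 ms frame and the 90 Hz to 4 kHz search band are assumptions (the order 25 and the sampling rate follow the values reported later in the paper), and lpc/hamming require the Signal Processing Toolbox.

% Illustrative LPC formant estimation for a single frame (assumed settings).
[x, fs] = audioread('vowel_a_normal.wav');    % hypothetical recording file
x = x(:,1) / max(abs(x(:,1)));                % amplitude normalisation
x = filter([1 -0.97], 1, x);                  % pre-emphasis to offset spectral tilt
N = round(0.030*fs);                          % 30 ms analysis frame
frame = x(1:N) .* hamming(N);                 % Hamming-windowed frame

p = 25;                                       % LPC order used in the paper
a = lpc(frame, p);                            % all-pole (LPC) coefficients
r = roots(a);                                 % poles of the vocal tract model
r = r(imag(r) > 0.01);                        % keep one pole of each conjugate pair
f = sort(angle(r)*fs/(2*pi));                 % pole angles converted to Hz
f = f(f > 90 & f < 4000);                     % plausible formant band
disp(f(1:min(4,numel(f))));                   % estimates of f1..f4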


Fig. 2 Simplified model for speech production for LPC.

Fig 2 shows the simplified model of speech production, conveying the idea that linear prediction is intimately linked to the basic speech synthesis model, in which the sampled speech signal is modeled as the output of a linear, slowly time-varying system excited either by quasi-periodic impulses (during voiced speech) or by random noise (during unvoiced speech). The linear prediction method provides a robust, reliable and accurate way of estimating the parameters that characterize this linear, time-varying system. An all-pole system function describes the linear system over intervals of short duration. A time-varying digital filter with a known steady-state system function represents the composite spectrum effects of radiation, vocal tract and glottal excitation. [11]
E. Forward-Backward (FB) auto regressive model for
Formant Estimation.
Fig 3. Block diagram for default AR model formant estimation: speech signal, pre-emphasis, Hamming window, AR model, vocal tract transfer function, formant estimation.

The formant frequency estimation algorithm for speech analysis based on the Forward-Backward (FB) auto-regressive model is shown in Figure 3, which depicts the formant frequency estimation process. From the model of speech production, the speech signal exhibits a spectral tilt of about -20 dB/decade; to compensate for this, a pre-emphasis filter is used to flatten the spectrum and boost the higher frequencies. This is followed by a Hamming window, after which the AR model and the vocal tract transfer function are obtained. From the frequency spectrum of the vocal tract transfer function, the formants are extracted. [10]
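A matching MATLAB sketch for the Fig. 3 pipeline is given below. The forward-backward AR coefficients are obtained here with armcov, MATLAB's modified-covariance estimator, which minimises forward and backward prediction errors; the paper does not state which estimator was used, so this choice, the file name and the peak-picking step are assumptions.

% Illustrative forward-backward AR formant estimation per the Fig. 3 pipeline.
[x, fs] = audioread('vowel_e_icecold.wav');   % hypothetical recording file
x = filter([1 -0.97], 1, x(:,1));             % pre-emphasis
N = round(0.030*fs);
frame = x(1:N) .* hamming(N);                 % Hamming window

p = 25;                                       % AR model order (assumed same as LPC)
[a, e] = armcov(frame, p);                    % forward-backward (modified covariance) AR fit
[H, f] = freqz(sqrt(e), a, 1024, fs);         % vocal tract transfer function spectrum
[~, locs] = findpeaks(20*log10(abs(H)));      % spectral peaks of |H(f)|
formants = f(locs);                           % peak frequencies taken as formants
disp(formants(1:min(4,numel(formants))));     % f1..f4 estimates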

F. Speech Database:
20 samples from 20 subjects (male speakers aged about 18-25) were recorded at different times and under different conditions, i.e., a normal sample, a sample at the instant of consuming ice cold water, and a sample after a time lapse of 5 minutes. Recording used a table-top mic (i-ball Model No. M27, sensitivity -58 dB ±3 dB, frequency response 100 Hz to 16 kHz) with a sampling frequency of 22,100 Hz in MATLAB, and the samples were amplitude-normalized. The database covers the English vowels (/a/, /e/, /i/, /o/, /u/) uttered by Indian English speakers. [1]
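The following sketch shows how one recorded sample could be loaded, normalised and split into the 30 ms frames with 10 ms overlap mentioned later in the paper; the file naming scheme is hypothetical.

% Loading, normalising and framing one recorded vowel sample (assumed file name).
[x, fs] = audioread('speaker01_a_normal_take01.wav');  % fs expected to be 22,100 Hz
x = x(:,1) / max(abs(x(:,1)));                         % amplitude normalisation

frameLen = round(0.030*fs);                            % 30 ms frames
hop      = frameLen - round(0.010*fs);                 % 10 ms overlap between frames
nFrames  = floor((length(x) - frameLen)/hop) + 1;
frames   = zeros(frameLen, nFrames);
for k = 1:nFrames
    idx = (k-1)*hop + (1:frameLen);
    frames(:,k) = x(idx) .* hamming(frameLen);         % windowed analysis frames
end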


III. RESULTS & DISCUSSION.
Figure 4 shows the recorded speech sample for the vowel /a/ in all three conditions; the shift in the speech transition time can be clearly noticed in the figure. Similarly, all the other four vowels were recorded, and the variation in the transition time was observed for the three conditions.

Fig 4. Recorded speech sample of vowel /a/: (a) normal, (b) on consuming ice cold water, (c) after five minutes of ice cold water consumption. [Waveform panels omitted.]

[Fig 5 (a): grouped bar chart of mean f1 (Hz) per vowel (a, e, i, o, u), LPC vs. AR_FB, for the normal, ice cold and five-minute conditions.]
[Fig 5 (b): grouped bar chart of mean f2 (Hz) for the same vowels, methods and conditions.]

TABLE 1: Mean formant frequency f1 for all the vowels under the three conditions.

Mean f1 (Hz)
a e i o u
lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb
normal 646.03636 612.18209 290.24804 291.32943 672.04429 647.53608 485.78492 495.06655 439.85147 426.10873
ice cold 474.5618 460.34875 460.41055 482.99677 854.464 830.62063 483.53912 469.64243 807.65435 760.19553
five min 534.13713 514.05292 356.49856 351.10009 737.86742 687.52753 515.97832 487.47977 772.58747 736.92003

TABLE 2: Mean formant frequency f2 for all the vowels under the three conditions.

Mean f2 (Hz)
Vowel /a/ /e/ /i/ /o/ /u/
lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb
normal 1620.3607 1517.1281 1760.0784 1702.7908 1775.2124 1657.1006 972.49609 968.4073 1496.825 1436.3321
ice cold 1785.251 1689.3834 2134.8212 2078.1335 1734.2823 1664.4509 965.34183 934.61768 1658.2033 1541.877
five min 1743.4963 1736.4056 1976.5384 1984.8818 1791.2401 1657.8592 1066.8032 1020.8186 1641.6667 1545.9573


[Fig 5 (c): grouped bar chart of mean f3 (Hz) per vowel, LPC vs. AR_FB, for the three conditions.]
[Fig 5 (d): grouped bar chart of mean f4 (Hz) per vowel, LPC vs. AR_FB, for the three conditions.]

Fig 5. Mean Formant Frequency variability for all the three


conditions considered (a) f1 (b) f2 (c) f3 (d) f4.

TABLE 3: Mean formant frequency f3 for all the vowels under the three conditions.
Mean f3 (Hz)
Vowel /a/ /e/ /i/ /o/ /u/
lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb
normal 2338.4056 2144.2687 2349.8735 2292.883 2442.0181 2306.1086 1888.4018 1739.9349 2227.0988 2080.3389
ice cold 2213.7282 2097.3323 2602.2587 2512.4837 2253.6379 2138.7426 2355.1313 1632.054 2480.365 2246.105
five min 2335.5912 2265.3495 2475.2071 2382.0561 2472.7629 2271.0133 2374.9275 1664.5661 2386.4126 2178.9476

Figure 5 presents the variability of the four formant frequencies f1 to f4, plotted as grouped bar charts of the mean values obtained with LPC and AR modeling. From these bar charts the following observations can be drawn: from Fig 5 (a), f1 is comparatively higher for the ice cold sample of /i/ and lowest for the normal sample of /e/; from Fig 5 (b), f2 is comparatively higher for the ice cold sample of /e/ and lowest for the normal sample of /o/; from Fig 5 (c), f3 is comparatively higher for the ice cold sample of /e/ and lowest for the normal sample of /o/; and from Fig 5 (d), f4 is comparatively higher for the normal sample of /a/ and lowest for the normal sample of /o/.

Table 1 - Table 4 give the detailed tabulated mean formant frequency values for all the vowels /a/, /e/, /i/, /o/ and /u/ with the LPC and AR-FB methods for all three recordings.

Figure 6 gives an overall comparative graph of all the vowel samples, showing the three conditions considered, i.e., normal, on consuming ice cold water, and after a relaxation time of five minutes.
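The condition-wise means behind Tables 1-4 and Fig. 5 can be reproduced with a few lines of MATLAB once the per-recording formant estimates are available; the array F1 below is an assumed container for the f1 estimates, not a variable from the paper.

% Grouped bar chart of mean f1 per vowel and condition, similar to Fig. 5(a).
% F1 is assumed to hold per-recording f1 estimates, sized
% [nRecordings x nVowels x nConditions] in Hz.
vowels     = {'a','e','i','o','u'};
conditions = {'normal','ice cold','five min'};
meanF1 = squeeze(mean(F1, 1));        % nVowels x nConditions table of means

bar(meanF1);                          % one group of three bars per vowel
set(gca, 'XTickLabel', vowels);
legend(conditions);
ylabel('f1 (Hz)');
title('Mean f1 per vowel and recording condition');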


TABLE 4: Mean formant frequency f4 for all the vowels under the three conditions.
Mean f4 (Hz)
Vowel /a/ /e/ /i/ /o/ /u/
lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb lpc ar_fb
normal 3348.7498 2970.6265 3074.6989 2878.1543 3169.7349 2961.7258 2690.452 2511.3353 2862.833 2674.8642
ice cold 2970.271 2751.231 3280.729 3137.3099 3179.0703 2952.5652 3376.3344 2566.8026 3360.7236 3028.9773
five min 3145.7905 2993.7702 3164.7015 3042.5215 3276.1846 3037.5383 3459.3933 2694.705 3415.4331 3071.0632

[Fig 6: grouped bar chart of formants f1-f4 for each vowel (LPC and AR_FB) under the normal, ice cold and after-five-minutes conditions.]
Fig 6. Summarized formant representation for the vowel samples and the three conditions.

IV. CONCLUSION
Using LPC and forward-backward auto-regressive model analysis of speech, the formant frequencies f1, f2, f3 and f4 for all 20 speakers, with 20 samples each for the three conditions, are estimated and comparative graphs are plotted individually. The speech input signal is separated into frames of 30 ms length with an overlap of 10 ms, and the order of the LPC filter is 25. The sampling rate of the speech signal is 22,100 Hz. It has been noticed that the formant frequencies for the ice cold water condition lie between the values for the normal condition and those after a time lapse of five minutes. The frequencies of an individual measured on different occasions and in different contexts, as well as the variability of the frequencies of interest, are observed for the LPC and forward-backward autoregressive models. It has been noted that LPC provides the desirable results for the conditions referred to herein, and AR_FB also provides satisfactory results for the normal recording conditions. On closer observation it is seen that LPC-based analysis, being suited to envelope detection in the speech signal, can be well utilized for vocal tract shape estimation as a future enhancement. Formant frequencies can be used as part of personal passwords or signatures for speaker verification, identification and authentication.

REFERENCES
[1]. Anil Kumar C., Shiva Prasad K.M., M.B. Manjunatha, KodandaRamaiah G.N., "Basic Acoustic Features Analysis of Vowels and C-V-C of Indian English Language," in ITSI-TEEE, vol. 3, Issue 1, pages 20-23 (2015).
[2]. Shiva Prasad K.M., Anil Kumar C., M.B. Manjunatha, KodandaRamaiah G.N., "Gender based Acoustic Features and Spectrogram Analysis for Kannada Phonetics," in ITSI-TEEE, vol. 3, Issue 1, pages 16-19 (2015).


[3]. Shiva Prasad K.M., Anil Kumar C., KodandaRamaiah G.N., M.B. Manjunatha, "Speaker based vocal tract shape estimation for Kannada vowels," in IEEE, pp. 1-6, 2015, DOI: 10.1109/EESCO.2015.7253942.
[4]. Shiva Prasad K.M., Anil Kumar C., M.B. Manjunatha, KodandaRamaiah G.N., "Various front end tools for digital speech processing," in IEEE, pp. 905-911, 2015, Print ISBN: 978-9-3805-4415-1.
[5]. Anil Kumar C., Shiva Prasad K.M., M.B. Manjunatha, KodandaRamaiah G.N., "Vocal tract shape estimation of vowels & C-V-V-C for diversified Indian English speakers," in IEEE, pp. 1-7, 2015, DOI: 10.1109/EESCO.2015.7253941.
[6]. G.N. Kodandaramaiah, M.N. Giriprasad and M. Mukundarao, "Implementation of LPC based vocal tract shape estimation for vowels," The Technology World Quarterly Journal, March-April 2010, Volume V, ISSN 2180-1614, pp. 97-102.
[7]. M. S. Shah and P. C. Pandey, "Estimation of vocal tract shape for VCV syllables for a speech training aid," in Proc. 27th Int. Conf. IEEE Engg. Med. Biol. Soc., 2005, pp. 6642-6645.
[8]. Mehrdad Khodai-Joopari, Frantz Clermont, Michael Barlow, "Speaker variability on a continuum of spectral sub-bands from 297 speakers' non-contemporaneous cepstra of Japanese vowels," Proceedings of the 10th Australian International Conference on Speech Science and Technology, 2004.
[9]. Gautam Vallabha and Betty Tuller, "Choice of filter order in LPC analysis of vowels," From Sound to Sense, June 11-13, 2004, MIT.
[10]. Stephen A. Zahorian, A. Matthew Zimmer, and Fansheng Meng, "Vowel Classification for Computer-Based Visual Feedback for Speech Training for the Hearing Impaired," Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, Virginia 23529, USA.
[11]. Powen Ru, Taishih Chi, and Shihab Shamma, "The synergy between speech production and perception," J. Acoust. Soc. Am., January 2003; J. Makhoul, "Linear Prediction: A Tutorial Review," Proc. IEEE, vol. 63, pp. 561-580, 1975.
[12]. Thomas F. Quatieri, Discrete-Time Speech Signal Processing: Principles and Practice, Pearson Education, 2002.
[13]. L. R. Rabiner and B. Gold, Theory and Applications of Digital Signal Processing, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1975.

AUTHORS PROFILE

Mr. Anil Kumar C, LMIETE, MIE & LMISTE, is currently pursuing a Doctoral degree in speech signal processing from Jain University, Bengaluru. He completed his B.E. and M.Tech degrees from VTU Belgaum. He is presently working as Assistant Professor in the Dept. of ECE at RLJIT, Doddaballapur, with a total teaching experience of 9.5 years; he has successfully guided more than 20 UG projects, has more than 30 national and 10 international publications to date, and also participates in various workshops and faculty development programmes.

Dr. M.B. Manjunatha is currently working as Principal at Akshaya Institute of Technology, Tumkuru, with a total teaching experience of 21 years. He obtained his Doctorate in Speech Processing from Magadh University. He has served as placement officer, HOD and Vice Principal, and has also acted as institutional in-charge for NAAC, ISO, AICTE and NBA in various institutions. He has successfully guided more than 45 UG projects and 30 PG projects, is currently guiding 5 research scholars, and has 50 national and 30 international publications.

Dr. G.N. KodandaRamaiah is currently working as Professor & HOD in the Dept. of ECE, Kuppam Engineering College, Kuppam, Andhra Pradesh, with a total teaching experience of 22 years. He obtained his Doctorate from JNTU Anantapur. He actively participates in research activities, owning 8 patents, has guided more than 50 UG projects and 30 PG projects, is currently guiding 5 research scholars, and has more than 50 national and 60 international publications.


A study of various approaches for enhancement of foggy/hazy images

Nandini B.M, Mohanesh B.M, The National Institute of Engineering, Mysuru, Karnataka, India. nandinibm@nie.ac.in, mohaneshbm@gmail.com
Narasimha Kaulgud, The National Institute of Engineering, Mysuru, Karnataka, India. narasimha.kaulgud@nie.ac.in

Abstract—Computer vision applications such as object detection, outdoor surveillance, object tracking, segmentation, consumer electronics and many more require restoration of images captured in a foggy environment. Fog/haze is formed as a result of environmental attenuation and airlight (scattering of light), resulting in image degradation, since the contrast of the scene is reduced by attenuation while the whiteness in the scene is increased by airlight. Hence, the objective of fog removal algorithms is to recover the color and contrast of the scene. Also, the formation of fog is a function of the depth, and estimation of depth information requires assumptions or prior information about the single image. Hence, with various assumptions on the single image, fog removal algorithms estimate the depth information; these are discussed in this paper.

Keywords—Fog removal, Image restoration, Wiener filter, Dark Channel Prior, Bilateral filter, Object detection, depth estimation.

I. INTRODUCTION
Weather conditions like haze, fog, rain, snow or smoke result in multifaceted visual effects in images or videos. These factors considerably decrease the performance of outdoor vision methods which rely on extraction of visual features in images/videos: event detection, object detection, scene analysis and classification, tracking and recognition, image indexing and retrieval. Furthermore, images captured outdoors under bad weather conditions have poor contrast, because under bad climatic conditions the light that reaches the camera is scattered and the captured image gets degraded by additive light. Additive light is formed from the scattering of light by fog particles and is also called airlight. Airlight is not uniformly distributed in the scene under bad weather conditions. This leads to poor visibility and hence degrades the perceptual image quality and the performance of computer vision applications. The image degradation is spatially variant, as the amount of light scattered depends on the distance between the camera and the various scene points. A degraded image with low contrast and low color fidelity is shown in Fig. 1a [6].

Figure 1. Example: (a) degraded image, (b) image recovered using Dark Channel Prior. [Images omitted.]

Several algorithms have been proposed to restore and/or enhance the quality of images taken under foggy conditions. The problem cannot be solved with a single foggy image as the only input, because of the ill-posed nature of the problem. To overcome this, several methods have been proposed, either using multiple images or using a single image with additional information. For example, in the polarization based approaches [1] the effect of fog is removed through multiple images captured with various degrees of polarization, whereas depth based approaches [3], [5] necessitate rough depth information from user inputs.

The most common method to enhance degraded images is via the Dark Channel Prior. It is very simple yet powerful and is grounded on the statistics of haze-free outdoor images. However, the dark channel prior may be invalid whenever the scene object is characteristically alike to the airlight in a large local region and there is no shadow cast on the object [6]. Fig. 1b shows an image recovered using the Dark Channel Prior [6].

As a by-product, fog removal algorithms can also produce depth information of the objects in the scene, which can be beneficial for image editing and vision algorithms and can be a clue for understanding a scene. Hence, a foggy image could be used for other worthy purposes as well. Nevertheless, fog removal is a thought-provoking problem since the fog depends on the unknown depth information of the scene.
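Since the dark channel prior is singled out above as the most common single-image method, a minimal MATLAB sketch of that pipeline (after He et al. [6]) is included here for orientation. The 15x15 patch, the brightest-dark-channel-pixel airlight estimate, the 0.95 haze-keeping factor and the 0.1 lower bound on transmission are common choices assumed for illustration, not values taken from the surveyed papers; imerode/strel require the Image Processing Toolbox.

% Minimal dark channel prior dehazing sketch (assumed parameters).
I  = im2double(imread('foggy.jpg'));              % hypothetical hazy input image
se = strel('square', 15);                         % 15x15 local patch

dark = imerode(min(I, [], 3), se);                % dark channel: min over RGB, then local min

[~, idx] = max(dark(:));                          % brightest dark-channel pixel
[r, c]   = ind2sub(size(dark), idx);
A = reshape(I(r, c, :), 1, 1, 3);                 % atmospheric light estimate

t = 1 - 0.95*imerode(min(I./A, [], 3), se);       % transmission estimate (keep a little haze)
t = max(t, 0.1);                                  % avoid division by near-zero values

J = (I - A)./t + A;                               % recovered scene radiance
figure, imshow(J);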
II. IMAGE RESTORATION TECHNIQUES¹
For restoring degraded images captured in bad weather conditions, various restoration techniques are used. Typical image restoration techniques are:

¹ Notations are as per the original papers.


A. Physical Model distances [13], [14]) and few other approaches are
established on specific radiation sources and detection
This is one of the early techniques for enhancing hardware [15], [16]. In this method the image formation
degraded imaged due to bad weather. There are several process is modeled considering polarization effects of light
physics based models for fog removal. We shall review a scattering in haze. Further, this modeling is then used to
few models; beginning with the scattering models the improve the foggy scene while fetching information about
Dichromatic Atmospheric Scattering model [11] and the atmospheric properties and scene structure. This technique
Contrast or Monochrome model [12] which describes the incorporates the datum that even a partial polarization of
colors and contrasts of a scene captured under bad (foggy) the airlight could be made use in post-processing of hazy
weather conditions. According to the dichromatic image to remove scattering effects. The two unknowns of
atmospheric scattering model [11] the color of a scene
an image namely, object radiance (without haze) and the
point in fog or haze captured by a color camera, is given
airlight , are estimated from two images captured at
by a linear combination of direction of airlight( in foggy nearly unknown orientations of the polarizing filter. The
condition) and the direction of the color of the scene polarization filter is oriented parallel to the plane of
point in a unblemished and clear day. incidence to measure the intensity
= ̂ + , ( ) (6)
∥ = /2 + ∥ , where ∥ = ,
= , = 1− , ≡ ( ⊥ − ∥)/

= 1− (1) (Here is the airlight conforming to an object at an


Where ∞ is the brightness of sky, is the scene point infinite distance, is degree of Polarization, ∥ and ⊥ are
radiance on a clear day, is the atmosphere scattering airlight components that parallel and perpendicular to the
coefficient and d is the scene point depth. plane respectively. is depth of the scene point). Further,
the polarization filter is oriented perpendicular to the plane
Whereas in the contrast or monochrome model [12] the of incidence to measure the other intensity,
intensity of a scene point captured by a monochrome ( )
camera under bad weather conditions is given by: ⊥ = + ⊥, where ⊥ = . (7)

= + 1− (2) From Equations (6) and (7), the airlight of any point can be
From both of the models we can infer that the contrast and estimated as
color of a scene point degrade exponentially with its depth ∥ ⊥
(8)
from the observer. These models are used in the design of = , where ∥ and ⊥ are estimated intensities
inter-active techniques like Dichromatic Color Transfer
and Depth Heuristics for image deweathering using simple and the unpolarized image as
inputs from the user. In Dichromatic Color Transfer a bad Total = ∥ + ⊥ (9)
pixel is replaced by a best matching good pixel using the
input by the user: The attenuation is estimated as

= ι + (3) (10)
=1−
Here, the color of the pixel is replaced by dehazed color ∞

i. One limitation of the color transfer method is that it Hence, an estimate for the scene radiance in absence of
can be used only to restore color images since all colors in atmospheric scattering is given by
the selected degraded region of an image might not have Total − (11)
equivalent colors in the selected good color region. In =
Depth Heuristics technique the input is an approximate −
minimum and maximum distances and interpolate for
points in between. is hence the dehazed image.
= min + ( max − min), C. The Cost Estimation Method (4)
The effects of dissimilar densities of hazy weather The common limitations of polarization method is that it
(moderate, heavy, etc.) can be produced by different values is not suitable for dynamic scenes in which the changes are
of the scattering coefficient. Thus, by changing more rapid than the filter rotation for finding the maximum
continuously, the clear day radiance at each pixel can be and minimum DOP(Degree of Polarization) and needs user
estimates progressively as, interaction. Whereas this method requires only a single
input image and neither requires neither any user
= [ − (1 − )] (5) interactions nor any geometrical information of the input
image. An overview of the method is as follows: For an
B. Polarization Method input image, the atmospheric light L1 is estimated using
Image enhancement methods proposed earlier to this ( ) = ∞ ( ) ( )
+ ∞(1 − ( )
) (12)
method require prior information about the scene (e.g.,


(The first term in the equation is the direct attenuation and 4. The contrast limited histogram of the contextual region is
the second term is the airlight), then from this the light given by the set of rules:
chromaticity is obtained as
if HCR(i) > NCL; HNCR(i) = NCL;
∞c (13)
c = else if HCR(i) + Nacp ≥ NCL; HNCR(i) = NCL;
∞r + ∞g + ∞b
else HCR(i) = HCR(i) + Nacp
The light color of the input image is removed using the 5. The distributed pixel is given by
light chromaticity. Later, the smoothness cost and data cost
for every pixel are computed. Corresponding equations
used are:
=
1 (14)
( )
c( )= ( ) ( ) + ( ) 1
1 E. Dark Channel Prior
where and are color vectors, and The dark channel prior [6] [9] method for enhancement
of foggy image is based on the statistics of haze-free
are both scalar values. outdoor images. The dark pixels of the local regions (not
([ ]x ) (15) having the sky region), have very low intensity in at least
( x | x) = edges
one color (RGB) channel, whereas in a hazy image, the
intensity of these dark pixels in that channel is high and is
where is the data cost, [ ]x is obtained by plugging mainly contributed by the airlight. The dark channel prior
every value of x into Equation 14, is a constant to is given by the following statistical observation:
normalize edges and x is a small patch centered at location
. These data and smoothness costs build up complete ( )= min ( min ( ( ))), (19)
∈ , , ∈ ( )
Markov random fields (MRFs) [20] that is optimized using
the inference methods like graph-cuts or belief where is a color channel of an image and ( ) is a
propagation, and produce the estimated values of the local patch centered at . According to observation, it is
airlight. Based on this estimation, the direct attenuation of seen that apart from the sky region, the intensity of is
the scene with enhanced visibility is computed. This low and inclines to be zero when is a haze-free image.
method improves the image visibility by only enhancing With a haze image represented by Equation 12 and taking
the contrast and not the original colors of input image. the operation in the local patch on this equation, we
have:
D. CLAHE
min ( ( )) = ̃ ( ) min ( ) + (1 − ̃ ( )) (20)
Contrast Limited Adaptive Histogram Equalization ∈ ( ) ∈ ( )

(CLAHE)is a non-model-based algorithm. Pizer et al.[17] and this min operation is performed on three color channels
proposed the Contrast Limited Adaptive Histogram independently. By taking the minimum among these three
Equalization (CLAHE) method. CLAHE limits the noise and considering the fact that is always positive, we
enhancement by establishing a maximum value. The have:
CLAHE technique put on histogram equalization to a
contextual region such that every pixel of original image is min( min ( ( ))/ )=0 (21)
∈ ( )
in the center of the contextual region. In this technique, the
pixels of the clipped region of the original histogram are Using this the transmission ̃ is estimated from Eq.20 and
redistributed to every gray level. The CLAHE method is given by
involves the following steps: ̃ ( ) = 1 − min( min ( ( ))/ ) (22)
∈ ( )
1. Compute average number of pixels
Consequently, accurate estimation of the haze’s
× (16) transmission is obtained by these dark pixels. Therefore,
=
this method is substantially valid and is capable in handling
distant objects even for the heavy haze image. Also, the
2. Calculate the actual clip-limit method doesn’t depend on significant variance on
= × (17) transmission or surface shading in the input image. The
dark channel prior may be invalid whenever the scene
is the minimum multiple of average pixels in each object is characteristically alike to the airlight in a large
gray level of the contextual region. local region and there is no shadow casting on the object.
3. The number of pixels distributed averagely into each F. Wiener Filtering
gray level is The Wiener filter [21] is the significant method for
= ∑ / , where ∑
removal of blur in hazy/foggy images due to linear motion
is the total number of clipped pixels (18) or unfocused optics. Linear motion in a photograph causes
Blurring and results in poor sampling. Every pixel in a

International Conference on Advances in Computational Intelligence and Communication (CIC 2016) 84


Pondicherry Engineering College, Puducherry, India October 19 & 20 - 2016
Vol. 14 CIC 2016 Special Issue International Journal of Computer Science and Information Security (IJCSIS)
https://sites.google.com/site/ijcsis/
ISSN 1947-5500

digital image represents the intensity of a single fixed point where the normalized term
in front of the camera. Unluckily, a given pixel is mixture
of intensities from points along the line of the camera’s = (∥ ( ) − ( ) ∥) (∥ − ∥) (24)
motion, whenever the shutter speed is very slow while the ∈
camera is in motion. This is a 2D analogy to
ensures that the filter preserves image energy
( , ) = ( , ). ( , )
where is the original input image to be filtered;
where is the Fourier transform of an “ideal” version of a are the coordinates of the current pixel to be filtered;
given image, and is the blurring function. Ideally one is the window centered in ; is the range kernel for
smoothing differences in intensities. is the spatial kernel
could reverse-engineer a , or estimate, if and are
known. This technique is inverse filtering. The 2-d Fourier for smoothing differences in coordinates. The weight is
transform of H for motion is a series of functions in assigned using the spatial closeness and the intensity
parallel on a line perpendicular to the direction of motion; difference. Let (i,j) be the location of a pixel to be denoised
and the 2-d Fourier transform of for focus blurring is the in an image and let its neighboring pixel be located at
sombrero function. ( , ), then, the weight assigned for the neighboring pixel
at ( , ) to denoise the chosen pixel ( , ) is given by:
However, in the reality, there are two problems with
( ) ( ) ∥ (, ) ( , )∥
reverse-engineer method. First, is not known accurately. ( )
The blurring function for a given situation can be guessed; (, , , )=
whereas a lot of trial and error is required for determination
of a good blurring function. Second, there is failure in Where and are smoothing parameters and ( , ) and
inverse filtering under some situations as the function ( , ) are the intensity of pixels ( , ) and ( , )
goes to 0 at some values of and . Also, noise in real respectively.
pictures becomes amplified such that it destroys all After calculating the weights normalize them.
attempts at reconstruction of . Hence, the preeminent
way to solve the second problem is to use Wiener filtering. ∑ , ( , )∗ ( , , , ) (26)
(, )=
This tool solves an estimate for according to the ∑ , (, , , )
following equation (derived from a least squares method):
where is the denoised intensity of pixel ( , ) . This
( , ) bilateral filter is used in [8] to refine the airlight map
( , ) = | ( , )| ×
| ( , )| × ( , ) + ( , ) estimated as a result of histogram equalization over foggy
image. Once airlight map is obtained, image is restored
where is a constant chosen to optimize the estimate. using Koschmieders law. Later Histogram stretching of
Wiener filters are the most common deblurring technique output image is performed to get final defoggy image.
used because it mathematically returns the best results.
Whereas, in other filtering like median filtering the media H. Color Attenuation Prior
function can be accurately estimated and can be used to Color attenuation prior [10] is a simple yet dominant
recover the clear picture based on dark channel prior, but method for haze removal from a single hazy image. In a
with a distortion due to disposal with the dark channel and Hazy image, whenever there is variation in haze
relatively large depth of image area is easily confused with concentration, the saturation and brightness of a pixel vary
area of the sky. Hence, details of the object might be erased sharply. When the color of the scene declines due to the
and great contrast occurs to the brightness level of image. influence of haze/fog, its saturation decreases sharply and
Therefore, in order to overcome these disadvantages, at the same time there is increase in its brightness resulting
Wiener filtering based on dark colors processing is used in in high value of difference. This difference between the
[7] to defog images with best results. saturation and the brightness is exploited in the estimation
G. Bilateral Filtering of the haze concentration. Subsequently, this increase in
concentration of haze along with the change of the scene
A bilateral filter is a smoothing filter for images. Also it depth leads to an assumption that the depth of the scene is
is non-linear and edge-preserving and noise-reducing. In a positively correlated with the concentration of the haze,
bilateral filter, the intensity value of every pixel in an hence:
image is replaced by the weighted average of intensity
values of its nearby pixels, where the weights are based on ( )∝ ( )∝ ( )− ( ) (27)
Gaussian distribution. Significantly, these weights depend Here represents the scene depth, represents the
on radiometric differences such as range differences (e.g.,
concentration of the haze, represents the brightness of the
color intensity, depth distance, etc.) as well as on Euclidean
scene and represents the saturation. The above statistics
distance of pixels. Hence, sharp edges are preserved by
is viewed as color attenuation prior. A linear model i.e., a
methodically looping through each pixel and correcting
more accurate expression is as follows:
weights to the adjacent pixels consequently. The bilateral
filter is defined as ( ) = + ( ) + ( ) + ( )
( )= ∑ ∈ ( ) (∥ ( ) − ( ) ∥) (∥ − ∥), (23) where is the position within the image, , , are the
unknown linear coefficients, ( ) is a random variable


representing the random error of the model. According to The algorithm


This method is a recovery
the Gaussian distribution, we have: recompenses for the
method, which is a
deficiency of dark
combination of the
( ) ( ( )| , , , , ) (29) channel prior algorithm
statistical characteristics
there by expanding the
= ( + + , ) Wiener Filtering
use of dark channel prior
of the atmospheric
dissipative function and
algorithm and also
where Gaussian density is used for with zero mean and condenses the running
noise. Hence, needs noise
variable. A significant benefit of this model is that it to be added into the fog
time of the image
image model.
exhibits edge-preserving property. algorithm.

Proposed algorithm can Restored image may have


III. COMPARATIVE STUDY be used as a pre- low contrast hence
processing step for histogram stretching is
The following table gives a comparative study on the Bilateral Filtering numerous computer required.
above mentioned techniques: vision algorithms which
are based on feature
TABLE I. COMPARATIVE STUDY information.

Enhancement The dehazing algorithms


Applications Limitations which are based on the
Method Can be used for object
atmospheric scattering
These models are used in Color detection, tracking,
model are prone to
the design of inter-active Attenuation Prior segmentation and
underestimating the
techniques like recognition.
transmission in some
Dichromatic Color Requires user interaction cases.
Transfer and Depth as it is not dynamic in
Physical Model Heuristics for image nature.
deweathering using Table I. Comparison between various image enhancing
simple inputs from the methods
user.
This technique produces Its stability will decrease IV. LITERATURE SURVEY
a depth map of the scene with decrease in the
and statistics about degree of polarization. Many research works has been done and is still going on
atmospheric particle Hence, the method is less
Polarization properties. effective under an
to enhance degraded images due to bad weather. Many of
Method overcast sky. them have resulted in developing methods for dehazing the
Results of the method images and in the process have developed other features
could be the source for The method may fail in
tools in photography and situations of foggy or very
like depth estimation, object detection etc. Here we have
remote sensing. dense hazy weather. made a bantam effort to review few of the works done on
dehazing of images.
This method requires no
geometrical information 1. Y.Y. Schechner et al.[1] proposed a novel approach then,
of the input image, and
can be applied for gray The method has to easily remove the effects of haze from images.
The Cost as well as color images. constraints on depth Technique used: is Polarization method. Methodology: It is
Estimation discontinuities and the based on polarity of airlight scattered by atmospheric
Method The method could be results incline to have
applied to improve
particles. This method works under a wide range of
larger saturation values.
visibility in under-water atmospheric and viewing conditions by Polarization
or other muddled media filtering for haze removal from the image. Polarization
with same optical model. effects of atmospheric scattering are taken into account for
Additional time is spent in the analysis of image formation process. Further, later
This method doesn’t converting the original invert the process to enable the removal of haze from
require any prior image from RGB color images. This technique could be used with only two images
weather information, so space to HSI color space;
CLAHE the method can be and again in converting
taken through a polarizer at different orientations. This
applied to the image the processed image (as a method works instantaneously and doesn’t rely on changes
captured in the real result of CLAHE) from of weather conditions. Inference: This technique leads to in
foggy conditions. HSI back to RGB color a significant improvement of scene contrast and color. As a
space.
result, this technique also produces a depth map of the
This may not work for scene and statistics about atmospheric particle properties.
some particular images in Further, these results could be the source for useful tools in
which objects of the scene
This prior could be used
are characteristically photography and remote sensing.
with any haze imaging
analogous to the
Dark Channel
model to estimate
atmospheric light and no
2. S.G. Narasimhan et al.[12] addressed the problem of
directly the thickness of deweathering an image with additional information
Prior shadow is falling on them.
the haze and recuperate a
high quality haze/fog provided interactively by the user i.e., deweathering
The method
free image. underestimates the without using precise weather or depth information
transmission for objects .Technique used: Physics model. Methodology: The
such as the white marble. proposed method exploited the physics-based models of
prior work and resulted in three interactive algorithms:

International Conference on Advances in Computational Intelligence and Communication (CIC 2016) 86


Pondicherry Engineering College, Puducherry, India October 19 & 20 - 2016
Vol. 14 CIC 2016 Special Issue International Journal of Computer Science and Information Security (IJCSIS)
https://sites.google.com/site/ijcsis/
ISSN 1947-5500

Dichromatic Color Transfer, Deweathering using Depth a cost function was developed in the framework of Markov
Heuristics and Restoration using Planar Depth segments, random fields, which could be proficiently optimized by
for adding and deletion of effects from a single image. various techniques, such as graph-cuts or belief
Inference: The color and contrast restoration on several propagation. Methodology: Firstly the direct attenuation
images captured under poor weather conditions were for a selected patch is computed using Eq.(12) ,then the
demonstrated effectively. Furthermore, the method data cost is computed using Eq.(14) .Later, the smoothness
demonstrates an example of adding weather effects to cost is computed using Eq.(15) .With these two costs a
given images. Hence, this interactive method for image comprehensive graph in term of MRF is found, later
enhancement can be used as easy-to use plugins in various inference in MRFs with the number of labels is done using
image processing software. The method can also be used to the Graph-cut algorithm. Lastly, the direct attenuation for
add weather effects to images. the entire image from the estimated airlight is computed
using Eq.(12) Inference: This method requires no
3. Sarit Shwartz et al.[3]. The primary part of this paper is geometrical information of the input image, and can be
separation of airlight and blind estimation. Technique used: applied for gray as well as color images. Hence the
Physics based. Methodology: This method makes use of proposed method is dynamic method that requires only a
mathematical tools established in the field of Blind Source single input image. Further, the method could be applied to
Separation (BSS), which is also known as Independent improve visibility in under-water [18] or other muddled
Component Analysis (ICA). Every scene recovery needs a media with same optical model. The proposed method can
subtraction of the airlight. Specifically, this could be be beneficial for applications like outdoor surveillance
accomplished by studying polarization-filtered images. systems, intelligent vehicle systems, remote sensing
Nevertheless, the salvage of degrade image requires systems, graphics editors, etc..
parameters of the airlight as input. Hence, the parameter p
(degree of polarization)is obtained by blind estimation, 5. Zhiyuan Xu et al.(2009)[5] proposed a Contrast Limited
which results in separation of A(Airlight) from D(Direct Adaptive Histogram Equalization (CLAHE)-based method
transmission) is as follows: to enhance the images degraded by fog. Technique used:
CLAHE. Methodology: This method establishes a
Given maximum value to clip the histogram and redistributes the
= , where = is the transmittance of clipped pixels equally to each gray-level. It can limit the
the atmosphere, noise while enhancing the image contrast. In this method,
firstly, the original image is converted from RGB to
Lobject is the object radiance and HSI(Hue Saturation Intensity). Secondly, the intensity
= (1 − ), component of the HSI image is processed by CLAHE.
Finally, the HSI image is converted back to RGB image.
we have the image radiance Inference: This method doesn’t require any prior weather
= + information, so the method can be applied to the image
captured in the real foggy conditions. The experimental
The degree of polarization of the airlight is defined as results show that there is a significant improvement in
image contrast enhancement of fog degraded images. In
− comparison with other methods, this method is more
=
simple and faster. In addition, the effect of this method is
It follows that satisfactory.
(1 − ) (1 + ) 6. K. He et al. (2010)[6] has proposed a simple yet effective
= + , = + image prior which is the dark channel prior to eliminate
2 2 2 2
fog/haze from a given single input image. Technique used:
Inference: The methodology in this paper is for blind Dark Channel Prior. Methodology: The dark channel prior
recovery of the parameter p (degree of polarization) which is a kind of prior which is based on statistics of haze-free
is needed for separating the airlight from the (clear) outdoor images. It is grounded on a significant
measurements; hence contrast of the image is recovered observation that maximum local patches in haze/fog-free
with neither user interaction nor existence of the sky in the images (outdoor) contain few pixels with very low
frame. As a result eliminates the need for user interaction intensities in at least one color channel. This prior could be
and other conditions needed for image dehazing. Further, used with any haze imaging model to estimate directly the
the work could be protracted to establishment of blind thickness of the haze and recuperate a high quality
attenuation estimation. This work can be extended to other haze/fog free image. The dark channel prior for an image J
scattering modalities (e.g., underwater photography). is given by the Eq.19.
4. Robby T. Tan [4] proposed a method that is based on two 7. Yanjuan Shuaiet al.[7] has presented an image haze/fog
elementary observations: first, images with superior removal system by wiener filtering based on dark channel
visibility i.e., clear-day images have more contrast when prior aiming at color distortion problem for some large
compared with images afflicted due to bad weather; white bright area in the image due to use of image haze
second, airlight tends to be smooth whenever its variation removal by dark channel prior. Technique used: Wiener
depends on the distance of objects to the viewer. Technique filtering based on dark channel prior. Methodology: The
used: Cost function. Depending on these two observations, algorithm mainly estimates the median function in the use


of the media filtering method based on the dark channel, to Proposed algorithm can be used as a pre-processing step
make the media function more accurate and combine with for numerous computer vision algorithms which are based
the wiener filtering closer; so that the image restoration on feature information (for example, tracking,
problem becomes an optimization problem, and by segmentation, object detection).
minimizing mean-square error a clearer, fogless image is
obtained finally. Inference: Experimental results show that V. FUTURE SCOPE
the proposed algorithm can make the image more detailed,
the contour smoother and the whole image clearer. In Nearly all the prevailing dehazing algorithms are based
particular, this algorithm can recover the contrast of a huge on the persistent assumption; hence a supple model is
white area fog image. The algorithm recompenses for the greatly preferred. Only limited approaches [19] have been
deficiency of dark channel prior algorithm there by implemented by using haze density analysis. Such
expanding the use of dark channel prior algorithm and also approaches could be applied in video dehazing for driving
condenses the running time of the image algorithm. The safety assistance device. More focus of fog removal
steps involved for dehazing an image using wiener filtering algorithms should be produce information on features for
based on dark channel prior: tracking, object detection, segmentation and recognition.
Methods have been implemented for extracting only
Step 1: Input the fog image ( ) objects of interest at foreground from degraded images,
Step 2: Use dark colors method to estimate the value A of now the implementation can be extended for extracting
the sky brightness background images and their depths with respect to the
Step 3: Use median filtering method to estimate the value t seen. Nevertheless work can be focused on CLAHE
of the media function method along with color attenuation prior to reduce the
Step 4: and are substituted into Eq.(30) to get the image noise in dehazed image while retaining the contrast and
′( ) through dark colors color of the image.
( )− (30)
( )=
max ( ( ), ̂ VI. CONCLUSION
Images taken in bad weather (for example foggy or
Step 5: The image ′( ) is processed by wiener filtering. hazy) usually lose contrast and reliability. This is because
Step 6: Output the image of the fact that light is absorbed and scattered by particles
( ) = ′( ) & water droplets in the atmosphere during the process of
8. Tripathi A.K. et al.[8] proposed an system using bilateral propagation. Besides, even automatic systems such as
filter for the estimation of airlight and scene contrast Visual Attention Modeling, which toughly depend on the
recovery. Technique used: Bilateral filter. By quantitative description of the input images, fail to work normally since
and qualitative analysis of the proposed algorithm it can be input images are degraded. Consequently, the techniques
verified that it performs better than that of prior state of for image haze removal must be improved there by
the art algorithms like Dark Channel Prior. Methodology: benefiting many image understanding and computer vision
Steps of the proposed for removal algorithm: applications for example Image/video retrieval, Remote
Sensing, Image Classification and video analysis &
Step 1: Input the image . recognition. Subsequently concentration of the haze varies
Step 2: Perform pre-processing by Histogram equalization from place to place and it is tough to detect in a hazy
over the foggy image. This results in better estimation of image, thus image dehazing is a thought-provoking and
airlight . stimulating task.

Step 3: Airlight map estimation using Dark Channel prior. ACKNOWLEDGMENT


( , ) = min ( , )) (31)
( , , ) We thank our organization in supporting us in preparation
Step 4: Airlight map refinement using Bilateral filter. of the paper by providing required resources. We thank all
the people for their moral support in preparing the
Step 5: Once the airlight map ( , ) is estimated then document.
each color component of the defoggy image ( , ) is
restored as REFERENCES
( ( , , )− ( , ) (32) [1] Y. Y.Schechner, S. G.Narasimhan and Shree K.Nayar,
( , , ) = ”Instant Dehazing of Images Using Polarization”, In
( , )
1− Proc.CVPR, 2001.
( )
Step 6: Post processing by histogram equalization and [2] S.G. Narasimhan and Shree K.Nayar, ”Interactive
histogram stretching to get the final output image. (De)Weathering of an Image using Physical Models”,
Inference: The proposed algorithm doesn’t require any user IEEE workshop on Color and Photometric Method in
intervention since it is independent of the density of fog Computer Vision,2003.
and can handle both color and gray images. Even in case of
[3] S.Shwartz, E. Namer and Y. Schechner, ”Blind Haze
heavy fog, proposed algorithm performs well, as algorithm
Separation”, In Proc.CVPR, 2006.
is independent of the density of fog present in the image.


[4] Robby T. Tan, ”Visibility in Bad Weather from a Single Image”, Computer Vision and Pattern Recognition, 2008.
[5] Z. Xu, X. Liu and Na Ji, ” Fog Removal from Color
Images using Contrast Limited Adaptive Histogram
Equalization”, In Proc. CVPR, 2009.
[6] K. He, J. Sun and X. Tang, ”Single image haze removal using dark channel prior”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 33, pp. 2341-2353, 2010.
[7] Y.Shuai, Rui Liu and W. He,”Image Haze Removal of
Wiener Filtering Based on Dark Channel Prior”,In
Proc.CVPR,2012
[8] A. K. Tripathi and S. Mukhopadhyay, ”Single image fog removal using bilateral filter”, IEEE International Conference on Signal Processing, Computing and Control (ISPCC), 2012.
[9] T.H Kil, S.H.Lee and N.I. Cho, ”A dehazing algorithm
using Dark Channel Prior and Contrast Enhancement”, In
Proc.CVPR,2013.
[10] Q.Zhu, J. Mai and Ling Shao,”A fast single image haze
removal algorithm using Color Attenuation Prior”, IEEE
transactions on Image Processing,2015.
[11] S.G. Narasimhan and S.K. Nayar, ”Vision and the atmosphere”, IJCV, 48(3):233-254, August 2002.
[12] S.G. Narasimhan and S.K. Nayar,” Contrast restoration of
weather degraded images”, PAMI, 25(6), June 2003.
[13] J. P. Oakley and B. L. Satherley, ”Improving image quality in poor visibility conditions using a physical model for contrast degradation”, IEEE Trans. Image Processing, 7, pp. 167-179 (1998).
[14] K. Tan and J. P. Oakley, ”Enhancement of color images in poor visibility conditions”, Proc. ICIP, pp. 788-791 (2000).
[15] P. Pencipowski, ”A low cost vehicle-mounted enhanced
vision system comprised of a laser illuminator and range-
gated camera”, Proc. SPIE 2736 Enhanced and synthetic
vision, pp. 222-227 (1996).
[16] B. T. Sweet and C. Tiana, ”Image processing and fusion
for landing guidance”, Proc. SPIE 2736 Enhanced and
synthetic vision, pp. 84-95 (1996).
[17] S. M. Pizer et al., ”Adaptive Histogram Equalization and Its Variations”, Computer Vision, Graphics, and Image Processing, 1987, pp. 335-368.
[18] Y. Schechner and N. Karpel, ”Clear underwater vision”, CVPR, 2004.
[19] Chia-Hung Yeh, Li-Wei Kang, Ming-Sui Lee and Cheng-
Yang Lin, ”Haze effect removal from image via haze
density estimation in optical model”,OSA,2013.
[20] S. Geman and D. Geman, ”Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images”, IEEE Trans. PAMI, Vol. 6, No. 6, pp. 721-741, 1984.
[21] https://en.wikipedia.org/wiki/Wiener_filter


IJCSIS REVIEWERS’ LIST


Assist Prof (Dr.) M. Emre Celebi, Louisiana State University in Shreveport, USA
Dr. Lam Hong Lee, Universiti Tunku Abdul Rahman, Malaysia
Dr. Shimon K. Modi, Director of Research BSPA Labs, Purdue University, USA
Dr. Jianguo Ding, Norwegian University of Science and Technology (NTNU), Norway
Assoc. Prof. N. Jaisankar, VIT University, Vellore,Tamilnadu, India
Dr. Amogh Kavimandan, The Mathworks Inc., USA
Dr. Ramasamy Mariappan, Vinayaka Missions University, India
Dr. Yong Li, School of Electronic and Information Engineering, Beijing Jiaotong University, P.R. China
Assist. Prof. Sugam Sharma, NIET, India / Iowa State University, USA
Dr. Jorge A. Ruiz-Vanoye, Universidad Autónoma del Estado de Morelos, Mexico
Dr. Neeraj Kumar, SMVD University, Katra (J&K), India
Dr Genge Bela, "Petru Maior" University of Targu Mures, Romania
Dr. Junjie Peng, Shanghai University, P. R. China
Dr. Ilhem LENGLIZ, HANA Group - CRISTAL Laboratory, Tunisia
Prof. Dr. Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, Indore, MP, India
Dr. Jorge L. Hernández-Ardieta, University Carlos III of Madrid, Spain
Prof. Dr.C.Suresh Gnana Dhas, Anna University, India
Dr Li Fang, Nanyang Technological University, Singapore
Prof. Pijush Biswas, RCC Institute of Information Technology, India
Dr. Siddhivinayak Kulkarni, University of Ballarat, Ballarat, Victoria, Australia
Dr. A. Arul Lawrence, Royal College of Engineering & Technology, India
Dr. Wongyos Keardsri, Chulalongkorn University, Bangkok, Thailand
Dr. Somesh Kumar Dewangan, CSVTU Bhilai (C.G.)/ Dimat Raipur, India
Dr. Hayder N. Jasem, University Putra Malaysia, Malaysia
Dr. A.V.Senthil Kumar, C. M. S. College of Science and Commerce, India
Dr. R. S. Karthik, C. M. S. College of Science and Commerce, India
Dr. P. Vasant, University Technology Petronas, Malaysia
Dr. Wong Kok Seng, Soongsil University, Seoul, South Korea
Dr. Praveen Ranjan Srivastava, BITS PILANI, India
Dr. Kong Sang Kelvin, Leong, The Hong Kong Polytechnic University, Hong Kong
Dr. Mohd Nazri Ismail, Universiti Kuala Lumpur, Malaysia
Dr. Rami J. Matarneh, Al-isra Private University, Amman, Jordan
Dr Ojesanmi Olusegun Ayodeji, Ajayi Crowther University, Oyo, Nigeria
Dr. Riktesh Srivastava, Skyline University, UAE
Dr. Oras F. Baker, UCSI University - Kuala Lumpur, Malaysia
Dr. Ahmed S. Ghiduk, Faculty of Science, Beni-Suef University, Egypt
and Department of Computer science, Taif University, Saudi Arabia
Dr. Tirthankar Gayen, IIT Kharagpur, India
Dr. Huei-Ru Tseng, National Chiao Tung University, Taiwan
Prof. Ning Xu, Wuhan University of Technology, China
Dr Mohammed Salem Binwahlan, Hadhramout University of Science and Technology, Yemen
& Universiti Teknologi Malaysia, Malaysia.
Dr. Aruna Ranganath, Bhoj Reddy Engineering College for Women, India
Dr. Hafeezullah Amin, Institute of Information Technology, KUST, Kohat, Pakistan

Prof. Syed S. Rizvi, University of Bridgeport, USA
Dr. Shahbaz Pervez Chattha, University of Engineering and Technology Taxila, Pakistan
Dr. Shishir Kumar, Jaypee University of Information Technology, Wakanaghat (HP), India
Dr. Shahid Mumtaz, Portugal Telecommunication, Instituto de Telecomunicações (IT) , Aveiro, Portugal
Dr. Rajesh K Shukla, Corporate Institute of Science & Technology Bhopal M P
Dr. Poonam Garg, Institute of Management Technology, India
Dr. S. Mehta, Inha University, Korea
Dr. Dilip Kumar S.M, Bangalore University, Bangalore
Prof. Malik Sikander Hayat Khiyal, Fatima Jinnah Women University, Rawalpindi, Pakistan
Dr. Virendra Gomase , Department of Bioinformatics, Padmashree Dr. D.Y. Patil University
Dr. Irraivan Elamvazuthi, University Technology PETRONAS, Malaysia
Dr. Saqib Saeed, University of Siegen, Germany
Dr. Pavan Kumar Gorakavi, IPMA-USA [YC]
Dr. Ahmed Nabih Zaki Rashed, Menoufia University, Egypt
Prof. Shishir K. Shandilya, Rukmani Devi Institute of Science & Technology, India
Dr. J. Komala Lakshmi, SNR Sons College, Computer Science, India
Dr. Muhammad Sohail, KUST, Pakistan
Dr. Manjaiah D.H, Mangalore University, India
Dr. S Santhosh Baboo, D.G.Vaishnav College, Chennai, India
Prof. Dr. Mokhtar Beldjehem, Sainte-Anne University, Halifax, NS, Canada
Dr. Deepak Laxmi Narasimha, University of Malaya, Malaysia
Prof. Dr. Arunkumar Thangavelu, Vellore Institute Of Technology, India
Dr. M. Azath, Anna University, India
Dr. Md. Rabiul Islam, Rajshahi University of Engineering & Technology (RUET), Bangladesh
Dr. Aos Alaa Zaidan Ansaef, Multimedia University, Malaysia
Dr Suresh Jain, Devi Ahilya University, Indore (MP) India,
Dr. Mohammed M. Kadhum, Universiti Utara Malaysia
Dr. Hanumanthappa. J. University of Mysore, India
Dr. Syed Ishtiaque Ahmed, Bangladesh University of Engineering and Technology (BUET)
Dr Akinola Solomon Olalekan, University of Ibadan, Ibadan, Nigeria
Dr. Santosh K. Pandey, The Institute of Chartered Accountants of India
Dr. P. Vasant, Power Control Optimization, Malaysia
Dr. Petr Ivankov, Automatika - S, Russian Federation
Dr. Utkarsh Seetha, Data Infosys Limited, India
Mrs. Priti Maheshwary, Maulana Azad National Institute of Technology, Bhopal
Dr. (Mrs) Padmavathi Ganapathi, Avinashilingam University for Women, Coimbatore
Assist. Prof. A. Neela madheswari, Anna university, India
Prof. Ganesan Ramachandra Rao, PSG College of Arts and Science, India
Mr. Kamanashis Biswas, Daffodil International University, Bangladesh
Dr. Atul Gonsai, Saurashtra University, Gujarat, India
Mr. Angkoon Phinyomark, Prince of Songkla University, Thailand
Mrs. G. Nalini Priya, Anna University, Chennai
Dr. P. Subashini, Avinashilingam University for Women, India
Assoc. Prof. Vijay Kumar Chakka, Dhirubhai Ambani IICT, Gandhinagar ,Gujarat
Mr. Jitendra Agrawal, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal
Mr. Vishal Goyal, Department of Computer Science, Punjabi University, India
Dr. R. Baskaran, Department of Computer Science and Engineering, Anna University, Chennai


Assist. Prof. Kanwalvir Singh Dhindsa, B.B.S.B. Engg. College, Fatehgarh Sahib (Punjab), India
Dr. Jamal Ahmad Dargham, School of Engineering and Information Technology, Universiti Malaysia Sabah
Mr. Nitin Bhatia, DAV College, India
Dr. Dhavachelvan Ponnurangam, Pondicherry Central University, India
Dr. Mohd Faizal Abdollah, University of Technical Malaysia, Malaysia
Assist. Prof. Sonal Chawla, Panjab University, India
Dr. Abdul Wahid, AKG Engg. College, Ghaziabad, India
Mr. Arash Habibi Lashkari, University of Malaya (UM), Malaysia
Mr. Md. Rajibul Islam, Ibnu Sina Institute, University Technology Malaysia
Professor Dr. Sabu M. Thampi, L.B.S Institute of Technology for Women, Kerala University, India
Mr. Noor Muhammed Nayeem, Université Lumière Lyon 2, 69007 Lyon, France
Dr. Himanshu Aggarwal, Department of Computer Engineering, Punjabi University, India
Prof R. Naidoo, Dept of Mathematics/Center for Advanced Computer Modelling, Durban University of Technology,
Durban,South Africa
Prof. Mydhili K Nair, Visweswaraiah Technological University, Bangalore, India
M. Prabu, Adhiyamaan College of Engineering/Anna University, India
Mr. Swakkhar Shatabda, United International University, Bangladesh
Dr. Abdur Rashid Khan, ICIT, Gomal University, Dera Ismail Khan, Pakistan
Mr. H. Abdul Shabeer, I-Nautix Technologies,Chennai, India
Dr. M. Aramudhan, Perunthalaivar Kamarajar Institute of Engineering and Technology, India
Dr. M. P. Thapliyal, Department of Computer Science, HNB Garhwal University (Central University), India
Dr. Shahaboddin Shamshirband, Islamic Azad University, Iran
Mr. Zeashan Hameed Khan, Université de Grenoble, France
Prof. Anil K Ahlawat, Ajay Kumar Garg Engineering College, Ghaziabad, UP Technical University, Lucknow
Mr. Longe Olumide Babatope, University Of Ibadan, Nigeria
Associate Prof. Raman Maini, University College of Engineering, Punjabi University, India
Dr. Maslin Masrom, University Technology Malaysia, Malaysia
Sudipta Chattopadhyay, Jadavpur University, Kolkata, India
Dr. Dang Tuan NGUYEN, University of Information Technology, Vietnam National University - Ho Chi Minh City
Dr. Mary Lourde R., BITS-PILANI Dubai , UAE
Dr. Abdul Aziz, University of Central Punjab, Pakistan
Mr. Karan Singh, Gautam Budtha University, India
Mr. Avinash Pokhriyal, Uttar Pradesh Technical University, Lucknow, India
Associate Prof Dr Zuraini Ismail, University Technology Malaysia, Malaysia
Assistant Prof. Yasser M. Alginahi, Taibah University, Madinah Munawwarrah, KSA
Mr. Dakshina Ranjan Kisku, West Bengal University of Technology, India
Mr. Raman Kumar, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab, India
Associate Prof. Samir B. Patel, Institute of Technology, Nirma University, India
Dr. M.Munir Ahamed Rabbani, B. S. Abdur Rahman University, India
Asst. Prof. Koushik Majumder, West Bengal University of Technology, India
Dr. Alex Pappachen James, Queensland Micro-nanotechnology center, Griffith University, Australia
Assistant Prof. S. Hariharan, B.S. Abdur Rahman University, India
Asst Prof. Jasmine. K. S, R.V.College of Engineering, India
Mr Naushad Ali Mamode Khan, Ministry of Education and Human Resources, Mauritius
Prof. Mahesh Goyani, G H Patel Collge of Engg. & Tech, V.V.N, Anand, Gujarat, India
Dr. Mana Mohammed, University of Tlemcen, Algeria
Prof. Jatinder Singh, Universal Institutiion of Engg. & Tech. CHD, India


Mrs. M. Anandhavalli Gauthaman, Sikkim Manipal Institute of Technology, Majitar, East Sikkim
Dr. Bin Guo, Institute Telecom SudParis, France
Mrs. Maleika Mehr Nigar Mohamed Heenaye-Mamode Khan, University of Mauritius
Prof. Pijush Biswas, RCC Institute of Information Technology, India
Mr. V. Bala Dhandayuthapani, Mekelle University, Ethiopia
Dr. Irfan Syamsuddin, State Polytechnic of Ujung Pandang, Indonesia
Mr. Kavi Kumar Khedo, University of Mauritius, Mauritius
Mr. Ravi Chandiran, Zagro Singapore Pte Ltd. Singapore
Mr. Milindkumar V. Sarode, Jawaharlal Darda Institute of Engineering and Technology, India
Dr. Shamimul Qamar, KSJ Institute of Engineering & Technology, India
Dr. C. Arun, Anna University, India
Assist. Prof. M.N.Birje, Basaveshwar Engineering College, India
Prof. Hamid Reza Naji, Department of Computer Enigneering, Shahid Beheshti University, Tehran, Iran
Assist. Prof. Debasis Giri, Department of Computer Science and Engineering, Haldia Institute of Technology
Subhabrata Barman, Haldia Institute of Technology, West Bengal
Mr. M. I. Lali, COMSATS Institute of Information Technology, Islamabad, Pakistan
Dr. Feroz Khan, Central Institute of Medicinal and Aromatic Plants, Lucknow, India
Mr. R. Nagendran, Institute of Technology, Coimbatore, Tamilnadu, India
Mr. Amnach Khawne, King Mongkut’s Institute of Technology Ladkrabang, Ladkrabang, Bangkok, Thailand
Dr. P. Chakrabarti, Sir Padampat Singhania University, Udaipur, India
Mr. Nafiz Imtiaz Bin Hamid, Islamic University of Technology (IUT), Bangladesh.
Shahab-A. Shamshirband, Islamic Azad University, Chalous, Iran
Prof. B. Priestly Shan, Anna Univeristy, Tamilnadu, India
Venkatramreddy Velma, Dept. of Bioinformatics, University of Mississippi Medical Center, Jackson MS USA
Akshi Kumar, Dept. of Computer Engineering, Delhi Technological University, India
Dr. Umesh Kumar Singh, Vikram University, Ujjain, India
Mr. Serguei A. Mokhov, Concordia University, Canada
Mr. Lai Khin Wee, Universiti Teknologi Malaysia, Malaysia
Dr. Awadhesh Kumar Sharma, Madan Mohan Malviya Engineering College, India
Mr. Syed R. Rizvi, Analytical Services & Materials, Inc., USA
Dr. S. Karthik, SNS Collegeof Technology, India
Mr. Syed Qasim Bukhari, CIMET (Universidad de Granada), Spain
Mr. A.D.Potgantwar, Pune University, India
Dr. Himanshu Aggarwal, Punjabi University, India
Mr. Rajesh Ramachandran, Naipunya Institute of Management and Information Technology, India
Dr. K.L. Shunmuganathan, R.M.K Engg College , Kavaraipettai ,Chennai
Dr. Prasant Kumar Pattnaik, KIST, India.
Dr. Ch. Aswani Kumar, VIT University, India
Mr. Ijaz Ali Shoukat, King Saud University, Riyadh KSA
Mr. Arun Kumar, Sir Padam Pat Singhania University, Udaipur, Rajasthan
Mr. Muhammad Imran Khan, Universiti Teknologi PETRONAS, Malaysia
Dr. Natarajan Meghanathan, Jackson State University, Jackson, MS, USA
Mr. Mohd Zaki Bin Mas'ud, Universiti Teknikal Malaysia Melaka (UTeM), Malaysia
Prof. Dr. R. Geetharamani, Dept. of Computer Science and Eng., Rajalakshmi Engineering College, India
Dr. Smita Rajpal, Institute of Technology and Management, Gurgaon, India
Dr. S. Abdul Khader Jilani, University of Tabuk, Tabuk, Saudi Arabia
Mr. Syed Jamal Haider Zaidi, Bahria University, Pakistan

Dr. N. Devarajan, Government College of Technology, Coimbatore, Tamilnadu, India
Mr. R. Jagadeesh Kannan, RMK Engineering College, India
Mr. Deo Prakash, Shri Mata Vaishno Devi University, India
Mr. Mohammad Abu Naser, Dept. of EEE, IUT, Gazipur, Bangladesh
Assist. Prof. Prasun Ghosal, Bengal Engineering and Science University, India
Mr. Md. Golam Kaosar, School of Engineering and Science, Victoria University, Melbourne City, Australia
Mr. R. Mahammad Shafi, Madanapalle Institute of Technology & Science, India
Dr. F.Sagayaraj Francis, Pondicherry Engineering College,India
Dr. Ajay Goel, HIET , Kaithal, India
Mr. Nayak Sunil Kashibarao, Bahirji Smarak Mahavidyalaya, India
Mr. Suhas J Manangi, Microsoft India
Dr. Kalyankar N. V., Yeshwant Mahavidyalaya, Nanded , India
Dr. K.D. Verma, S.V. College of Post graduate studies & Research, India
Dr. Amjad Rehman, University Technology Malaysia, Malaysia
Mr. Rachit Garg, L K College, Jalandhar, Punjab
Mr. J. William, M.A.M college of Engineering, Trichy, Tamilnadu,India
Prof. Jue-Sam Chou, Nanhua University, College of Science and Technology, Taiwan
Dr. Thorat S.B., Institute of Technology and Management, India
Mr. Ajay Prasad, Sir Padampat Singhania University, Udaipur, India
Dr. Kamaljit I. Lakhtaria, Atmiya Institute of Technology & Science, India
Mr. Syed Rafiul Hussain, Ahsanullah University of Science and Technology, Bangladesh
Mrs Fazeela Tunnisa, Najran University, Kingdom of Saudi Arabia
Mrs Kavita Taneja, Maharishi Markandeshwar University, Haryana, India
Mr. Maniyar Shiraz Ahmed, Najran University, Najran, KSA
Mr. Anand Kumar, AMC Engineering College, Bangalore
Dr. Rakesh Chandra Gangwar, Beant College of Engg. & Tech., Gurdaspur (Punjab) India
Dr. V V Rama Prasad, Sree Vidyanikethan Engineering College, India
Assist. Prof. Neetesh Kumar Gupta, Technocrats Institute of Technology, Bhopal (M.P.), India
Mr. Ashish Seth, Uttar Pradesh Technical University, Lucknow ,UP India
Dr. V V S S S Balaram, Sreenidhi Institute of Science and Technology, India
Mr Rahul Bhatia, Lingaya's Institute of Management and Technology, India
Prof. Niranjan Reddy. P, KITS , Warangal, India
Prof. Rakesh. Lingappa, Vijetha Institute of Technology, Bangalore, India
Dr. Mohammed Ali Hussain, Nimra College of Engineering & Technology, Vijayawada, A.P., India
Dr. A.Srinivasan, MNM Jain Engineering College, Rajiv Gandhi Salai, Thorapakkam, Chennai
Mr. Rakesh Kumar, M.M. University, Mullana, Ambala, India
Dr. Lena Khaled, Zarqa Private University, Aman, Jordon
Ms. Supriya Kapoor, Patni/Lingaya's Institute of Management and Tech., India
Dr. Tossapon Boongoen, Aberystwyth University, UK
Dr. Bilal Alatas, Firat University, Turkey
Assist. Prof. Jyoti Prakash Singh, Academy of Technology, India
Dr. Ritu Soni, GNG College, India
Dr. Mahendra Kumar, Sagar Institute of Research & Technology, Bhopal, India
Dr. Binod Kumar, Lakshmi Narayan College of Tech.(LNCT)Bhopal India
Dr. Muzhir Shaban Al-Ani, Amman Arab University Amman – Jordan
Dr. T.C. Manjunath , ATRIA Institute of Tech, India
Mr. Muhammad Zakarya, COMSATS Institute of Information Technology (CIIT), Pakistan

Assist. Prof. Harmunish Taneja, M. M. University, India
Dr. Chitra Dhawale , SICSR, Model Colony, Pune, India
Mrs Sankari Muthukaruppan, Nehru Institute of Engineering and Technology, Anna University, India
Mr. Aaqif Afzaal Abbasi, National University Of Sciences And Technology, Islamabad
Prof. Ashutosh Kumar Dubey, Trinity Institute of Technology and Research Bhopal, India
Mr. G. Appasami, Dr. Pauls Engineering College, India
Mr. M Yasin, National University of Science and Tech, karachi (NUST), Pakistan
Mr. Yaser Miaji, University Utara Malaysia, Malaysia
Mr. Shah Ahsanul Haque, International Islamic University Chittagong (IIUC), Bangladesh
Prof. (Dr) Syed Abdul Sattar, Royal Institute of Technology & Science, India
Dr. S. Sasikumar, Roever Engineering College
Assist. Prof. Monit Kapoor, Maharishi Markandeshwar University, India
Mr. Nwaocha Vivian O, National Open University of Nigeria
Dr. M. S. Vijaya, GR Govindarajulu School of Applied Computer Technology, India
Assist. Prof. Chakresh Kumar, Manav Rachna International University, India
Mr. Kunal Chadha , R&D Software Engineer, Gemalto, Singapore
Mr. Mueen Uddin, Universiti Teknologi Malaysia, UTM , Malaysia
Dr. Dhuha Basheer abdullah, Mosul university, Iraq
Mr. S. Audithan, Annamalai University, India
Prof. Vijay K Chaudhari, Technocrats Institute of Technology , India
Associate Prof. Mohd Ilyas Khan, Technocrats Institute of Technology , India
Dr. Vu Thanh Nguyen, University of Information Technology, HoChiMinh City, VietNam
Assist. Prof. Anand Sharma, MITS, Lakshmangarh, Sikar, Rajasthan, India
Prof. T V Narayana Rao, HITAM Engineering college, Hyderabad
Mr. Deepak Gour, Sir Padampat Singhania University, India
Assist. Prof. Amutharaj Joyson, Kalasalingam University, India
Mr. Ali Balador, Islamic Azad University, Iran
Mr. Mohit Jain, Maharaja Surajmal Institute of Technology, India
Mr. Dilip Kumar Sharma, GLA Institute of Technology & Management, India
Dr. Debojyoti Mitra, Sir padampat Singhania University, India
Dr. Ali Dehghantanha, Asia-Pacific University College of Technology and Innovation, Malaysia
Mr. Zhao Zhang, City University of Hong Kong, China
Prof. S.P. Setty, A.U. College of Engineering, India
Prof. Patel Rakeshkumar Kantilal, Sankalchand Patel College of Engineering, India
Mr. Biswajit Bhowmik, Bengal College of Engineering & Technology, India
Mr. Manoj Gupta, Apex Institute of Engineering & Technology, India
Assist. Prof. Ajay Sharma, Raj Kumar Goel Institute Of Technology, India
Assist. Prof. Ramveer Singh, Raj Kumar Goel Institute of Technology, India
Dr. Hanan Elazhary, Electronics Research Institute, Egypt
Dr. Hosam I. Faiq, USM, Malaysia
Prof. Dipti D. Patil, MAEER’s MIT College of Engg. & Tech, Pune, India
Assist. Prof. Devendra Chack, BCT Kumaon engineering College Dwarahat Almora, India
Prof. Manpreet Singh, M. M. Engg. College, M. M. University, India
Assist. Prof. M. Sadiq ali Khan, University of Karachi, Pakistan
Mr. Prasad S. Halgaonkar, MIT - College of Engineering, Pune, India
Dr. Imran Ghani, Universiti Teknologi Malaysia, Malaysia
Prof. Varun Kumar Kakar, Kumaon Engineering College, Dwarahat, India


Assist. Prof. Nisheeth Joshi, Apaji Institute, Banasthali University, Rajasthan, India
Associate Prof. Kunwar S. Vaisla, VCT Kumaon Engineering College, India
Prof Anupam Choudhary, Bhilai School Of Engg.,Bhilai (C.G.),India
Mr. Divya Prakash Shrivastava, Al Jabal Al garbi University, Zawya, Libya
Associate Prof. Dr. V. Radha, Avinashilingam Deemed university for women, Coimbatore.
Dr. Kasarapu Ramani, JNT University, Anantapur, India
Dr. Anuraag Awasthi, Jayoti Vidyapeeth Womens University, India
Dr. C G Ravichandran, R V S College of Engineering and Technology, India
Dr. Mohamed A. Deriche, King Fahd University of Petroleum and Minerals, Saudi Arabia
Mr. Abbas Karimi, Universiti Putra Malaysia, Malaysia
Mr. Amit Kumar, Jaypee University of Engg. and Tech., India
Dr. Nikolai Stoianov, Defense Institute, Bulgaria
Assist. Prof. S. Ranichandra, KSR College of Arts and Science, Tiruchencode
Mr. T.K.P. Rajagopal, Diamond Horse International Pvt Ltd, India
Dr. Md. Ekramul Hamid, Rajshahi University, Bangladesh
Mr. Hemanta Kumar Kalita , TATA Consultancy Services (TCS), India
Dr. Messaouda Azzouzi, Ziane Achour University of Djelfa, Algeria
Prof. (Dr.) Juan Jose Martinez Castillo, "Gran Mariscal de Ayacucho" University and Acantelys research Group,
Venezuela
Dr. Jatinderkumar R. Saini, Narmada College of Computer Application, India
Dr. Babak Bashari Rad, University Technology of Malaysia, Malaysia
Dr. Nighat Mir, Effat University, Saudi Arabia
Prof. (Dr.) G.M.Nasira, Sasurie College of Engineering, India
Mr. Varun Mittal, Gemalto Pte Ltd, Singapore
Assist. Prof. Mrs P. Banumathi, Kathir College Of Engineering, Coimbatore
Assist. Prof. Quan Yuan, University of Wisconsin-Stevens Point, US
Dr. Pranam Paul, Narula Institute of Technology, Agarpara, West Bengal, India
Assist. Prof. J. Ramkumar, V.L.B Janakiammal college of Arts & Science, India
Mr. P. Sivakumar, Anna university, Chennai, India
Mr. Md. Humayun Kabir Biswas, King Khalid University, Kingdom of Saudi Arabia
Mr. Mayank Singh, J.P. Institute of Engg & Technology, Meerut, India
HJ. Kamaruzaman Jusoff, Universiti Putra Malaysia
Mr. Nikhil Patrick Lobo, CADES, India
Dr. Amit Wason, Rayat-Bahra Institute of Engineering & Boi-Technology, India
Dr. Rajesh Shrivastava, Govt. Benazir Science & Commerce College, Bhopal, India
Assist. Prof. Vishal Bharti, DCE, Gurgaon
Mrs. Sunita Bansal, Birla Institute of Technology & Science, India
Dr. R. Sudhakar, Dr.Mahalingam college of Engineering and Technology, India
Dr. Amit Kumar Garg, Shri Mata Vaishno Devi University, Katra(J&K), India
Assist. Prof. Raj Gaurang Tiwari, AZAD Institute of Engineering and Technology, India
Mr. Hamed Taherdoost, Tehran, Iran
Mr. Amin Daneshmand Malayeri, YRC, IAU, Malayer Branch, Iran
Mr. Shantanu Pal, University of Calcutta, India
Dr. Terry H. Walcott, E-Promag Consultancy Group, United Kingdom
Dr. Ezekiel U OKIKE, University of Ibadan, Nigeria
Mr. P. Mahalingam, Caledonian College of Engineering, Oman
Dr. Mahmoud M. A. Abd Ellatif, Mansoura University, Egypt

Prof. Kunwar S. Vaisla, BCT Kumaon Engineering College, India
Prof. Mahesh H. Panchal, Kalol Institute of Technology & Research Centre, India
Mr. Muhammad Asad, Technical University of Munich, Germany
Mr. AliReza Shams Shafigh, Azad Islamic university, Iran
Prof. S. V. Nagaraj, RMK Engineering College, India
Mr. Ashikali M Hasan, Senior Researcher, CelNet security, India
Dr. Adnan Shahid Khan, University Technology Malaysia, Malaysia
Mr. Prakash Gajanan Burade, Nagpur University/ITM college of engg, Nagpur, India
Dr. Jagdish B.Helonde, Nagpur University/ITM college of engg, Nagpur, India
Professor, Doctor BOUHORMA Mohammed, University Abdelmalek Essaadi, Morocco
Mr. K. Thirumalaivasan, Pondicherry Engg. College, India
Mr. Umbarkar Anantkumar Janardan, Walchand College of Engineering, India
Mr. Ashish Chaurasia, Gyan Ganga Institute of Technology & Sciences, India
Mr. Sunil Taneja, Kurukshetra University, India
Mr. Fauzi Adi Rafrastara, Dian Nuswantoro University, Indonesia
Dr. Yaduvir Singh, Thapar University, India
Dr. Ioannis V. Koskosas, University of Western Macedonia, Greece
Dr. Vasantha Kalyani David, Avinashilingam University for women, Coimbatore
Dr. Ahmed Mansour Manasrah, Universiti Sains Malaysia, Malaysia
Miss. Nazanin Sadat Kazazi, University Technology Malaysia, Malaysia
Mr. Saeed Rasouli Heikalabad, Islamic Azad University - Tabriz Branch, Iran
Assoc. Prof. Dhirendra Mishra, SVKM's NMIMS University, India
Prof. Shapoor Zarei, UAE Inventors Association, UAE
Prof. B.Raja Sarath Kumar, Lenora College of Engineering, India
Dr. Bashir Alam, Jamia millia Islamia, Delhi, India
Prof. Anant J Umbarkar, Walchand College of Engg., India
Assist. Prof. B. Bharathi, Sathyabama University, India
Dr. Fokrul Alom Mazarbhuiya, King Khalid University, Saudi Arabia
Prof. T.S.Jeyali Laseeth, Anna University of Technology, Tirunelveli, India
Dr. M. Balraju, Jawahar Lal Nehru Technological University Hyderabad, India
Dr. Vijayalakshmi M. N., R.V.College of Engineering, Bangalore
Prof. Walid Moudani, Lebanese University, Lebanon
Dr. Saurabh Pal, VBS Purvanchal University, Jaunpur, India
Associate Prof. Suneet Chaudhary, Dehradun Institute of Technology, India
Associate Prof. Dr. Manuj Darbari, BBD University, India
Ms. Prema Selvaraj, K.S.R College of Arts and Science, India
Assist. Prof. Ms.S.Sasikala, KSR College of Arts & Science, India
Mr. Sukhvinder Singh Deora, NC Institute of Computer Sciences, India
Dr. Abhay Bansal, Amity School of Engineering & Technology, India
Ms. Sumita Mishra, Amity School of Engineering and Technology, India
Professor S. Viswanadha Raju, JNT University Hyderabad, India
Mr. Asghar Shahrzad Khashandarag, Islamic Azad University Tabriz Branch, India
Mr. Manoj Sharma, Panipat Institute of Engg. & Technology, India
Mr. Shakeel Ahmed, King Faisal University, Saudi Arabia
Dr. Mohamed Ali Mahjoub, Institute of Engineer of Monastir, Tunisia
Mr. Adri Jovin J.J., SriGuru Institute of Technology, India
Dr. Sukumar Senthilkumar, Universiti Sains Malaysia, Malaysia

Mr. Rakesh Bharati, Dehradun Institute of Technology Dehradun, India
Mr. Shervan Fekri Ershad, Shiraz International University, Iran
Mr. Md. Safiqul Islam, Daffodil International University, Bangladesh
Mr. Mahmudul Hasan, Daffodil International University, Bangladesh
Prof. Mandakini Tayade, UIT, RGTU, Bhopal, India
Ms. Sarla More, UIT, RGTU, Bhopal, India
Mr. Tushar Hrishikesh Jaware, R.C. Patel Institute of Technology, Shirpur, India
Ms. C. Divya, Dr G R Damodaran College of Science, Coimbatore, India
Mr. Fahimuddin Shaik, Annamacharya Institute of Technology & Sciences, India
Dr. M. N. Giri Prasad, JNTUCE,Pulivendula, A.P., India
Assist. Prof. Chintan M Bhatt, Charotar University of Science And Technology, India
Prof. Sahista Machchhar, Marwadi Education Foundation's Group of institutions, India
Assist. Prof. Navnish Goel, S. D. College Of Enginnering & Technology, India
Mr. Khaja Kamaluddin, Sirt University, Sirt, Libya
Mr. Mohammad Zaidul Karim, Daffodil International, Bangladesh
Mr. M. Vijayakumar, KSR College of Engineering, Tiruchengode, India
Mr. S. A. Ahsan Rajon, Khulna University, Bangladesh
Dr. Muhammad Mohsin Nazir, LCW University Lahore, Pakistan
Mr. Mohammad Asadul Hoque, University of Alabama, USA
Mr. P.V.Sarathchand, Indur Institute of Engineering and Technology, India
Mr. Durgesh Samadhiya, Chung Hua University, Taiwan
Dr Venu Kuthadi, University of Johannesburg, Johannesburg, RSA
Dr. (Er) Jasvir Singh, Guru Nanak Dev University, Amritsar, Punjab, India
Mr. Jasmin Cosic, Min. of the Interior of Una-sana canton, B&H, Bosnia and Herzegovina
Dr S. Rajalakshmi, Botho College, South Africa
Dr. Mohamed Sarrab, De Montfort University, UK
Mr. Basappa B. Kodada, Canara Engineering College, India
Assist. Prof. K. Ramana, Annamacharya Institute of Technology and Sciences, India
Dr. Ashu Gupta, Apeejay Institute of Management, Jalandhar, India
Assist. Prof. Shaik Rasool, Shadan College of Engineering & Technology, India
Assist. Prof. K. Suresh, Annamacharya Institute of Tech & Sci. Rajampet, AP, India
Dr . G. Singaravel, K.S.R. College of Engineering, India
Dr B. G. Geetha, K.S.R. College of Engineering, India
Assist. Prof. Kavita Choudhary, ITM University, Gurgaon
Dr. Mehrdad Jalali, Azad University, Mashhad, Iran
Megha Goel, Shamli Institute of Engineering and Technology, Shamli, India
Mr. Chi-Hua Chen, Institute of Information Management, National Chiao-Tung University, Taiwan (R.O.C.)
Assoc. Prof. A. Rajendran, RVS College of Engineering and Technology, India
Assist. Prof. S. Jaganathan, RVS College of Engineering and Technology, India
Assoc. Prof. (Dr.) A S N Chakravarthy, JNTUK University College of Engineering Vizianagaram (State University)
Assist. Prof. Deepshikha Patel, Technocrat Institute of Technology, India
Assist. Prof. Maram Balajee, GMRIT, India
Assist. Prof. Monika Bhatnagar, TIT, India
Prof. Gaurang Panchal, Charotar University of Science & Technology, India
Prof. Anand K. Tripathi, Computer Society of India
Prof. Jyoti Chaudhary, High Performance Computing Research Lab, India
Assist. Prof. Supriya Raheja, ITM University, India

Dr. Pankaj Gupta, Microsoft Corporation, U.S.A.
Assist. Prof. Panchamukesh Chandaka, Hyderabad Institute of Tech. & Management, India
Prof. Mohan H.S, SJB Institute Of Technology, India
Mr. Hossein Malekinezhad, Islamic Azad University, Iran
Mr. Zatin Gupta, Universti Malaysia, Malaysia
Assist. Prof. Amit Chauhan, Phonics Group of Institutions, India
Assist. Prof. Ajal A. J., METS School Of Engineering, India
Mrs. Omowunmi Omobola Adeyemo, University of Ibadan, Nigeria
Dr. Bharat Bhushan Agarwal, I.F.T.M. University, India
Md. Nazrul Islam, University of Western Ontario, Canada
Tushar Kanti, L.N.C.T, Bhopal, India
Er. Aumreesh Kumar Saxena, SIRTs College Bhopal, India
Mr. Mohammad Monirul Islam, Daffodil International University, Bangladesh
Dr. Kashif Nisar, University Utara Malaysia, Malaysia
Dr. Wei Zheng, Rutgers Univ/ A10 Networks, USA
Associate Prof. Rituraj Jain, Vyas Institute of Engg & Tech, Jodhpur – Rajasthan
Assist. Prof. Apoorvi Sood, I.T.M. University, India
Dr. Kayhan Zrar Ghafoor, University Technology Malaysia, Malaysia
Mr. Swapnil Soner, Truba Institute College of Engineering & Technology, Indore, India
Ms. Yogita Gigras, I.T.M. University, India
Associate Prof. Neelima Sadineni, Pydha Engineering College, India Pydha Engineering College
Assist. Prof. K. Deepika Rani, HITAM, Hyderabad
Ms. Shikha Maheshwari, Jaipur Engineering College & Research Centre, India
Prof. Dr V S Giridhar Akula, Avanthi's Scientific Tech. & Research Academy, Hyderabad
Prof. Dr.S.Saravanan, Muthayammal Engineering College, India
Mr. Mehdi Golsorkhatabar Amiri, Islamic Azad University, Iran
Prof. Amit Sadanand Savyanavar, MITCOE, Pune, India
Assist. Prof. P.Oliver Jayaprakash, Anna University,Chennai
Assist. Prof. Ms. Sujata, ITM University, Gurgaon, India
Dr. Asoke Nath, St. Xavier's College, India
Mr. Masoud Rafighi, Islamic Azad University, Iran
Assist. Prof. RamBabu Pemula, NIMRA College of Engineering & Technology, India
Assist. Prof. Ms Rita Chhikara, ITM University, Gurgaon, India
Mr. Sandeep Maan, Government Post Graduate College, India
Prof. Dr. S. Muralidharan, Mepco Schlenk Engineering College, India
Associate Prof. T.V.Sai Krishna, QIS College of Engineering and Technology, India
Mr. R. Balu, Bharathiar University, Coimbatore, India
Assist. Prof. Shekhar. R, Dr.SM College of Engineering, India
Prof. P. Senthilkumar, Vivekanandha Institue of Engineering and Techology for Woman, India
Mr. M. Kamarajan, PSNA College of Engineering & Technology, India
Dr. Angajala Srinivasa Rao, Jawaharlal Nehru Technical University, India
Assist. Prof. C. Venkatesh, A.I.T.S, Rajampet, India
Mr. Afshin Rezakhani Roozbahani, Ayatollah Boroujerdi University, Iran
Mr. Laxmi chand, SCTL, Noida, India
Dr. Abdul Hannan, Vivekanand College, Aurangabad
Prof. Mahesh Panchal, KITRC, Gujarat
Dr. A. Subramani, K.S.R. College of Engineering, Tiruchengode

Assist. Prof. Prakash M, Rajalakshmi Engineering College, Chennai, India
Assist. Prof. Akhilesh K Sharma, Sir Padampat Singhania University, India
Ms. Varsha Sahni, Guru Nanak Dev Engineering College, Ludhiana, India
Associate Prof. Trilochan Rout, NM Institute of Engineering and Technlogy, India
Mr. Srikanta Kumar Mohapatra, NMIET, Orissa, India
Mr. Waqas Haider Bangyal, Iqra University Islamabad, Pakistan
Dr. S. Vijayaragavan, Christ College of Engineering and Technology, Pondicherry, India
Prof. Elboukhari Mohamed, University Mohammed First, Oujda, Morocco
Dr. Muhammad Asif Khan, King Faisal University, Saudi Arabia
Dr. Nagy Ramadan Darwish Omran, Cairo University, Egypt.
Assistant Prof. Anand Nayyar, KCL Institute of Management and Technology, India
Mr. G. Premsankar, Ericcson, India
Assist. Prof. T. Hemalatha, VELS University, India
Prof. Tejaswini Apte, University of Pune, India
Dr. Edmund Ng Giap Weng, Universiti Malaysia Sarawak, Malaysia
Mr. Mahdi Nouri, Iran University of Science and Technology, Iran
Associate Prof. S. Asif Hussain, Annamacharya Institute of technology & Sciences, India
Mrs. Kavita Pabreja, Maharaja Surajmal Institute (an affiliate of GGSIP University), India
Mr. Vorugunti Chandra Sekhar, DA-IICT, India
Mr. Muhammad Najmi Ahmad Zabidi, Universiti Teknologi Malaysia, Malaysia
Dr. Aderemi A. Atayero, Covenant University, Nigeria
Assist. Prof. Osama Sohaib, Balochistan University of Information Technology, Pakistan
Assist. Prof. K. Suresh, Annamacharya Institute of Technology and Sciences, India
Mr. Hassen Mohammed Abduallah Alsafi, International Islamic University Malaysia (IIUM) Malaysia
Mr. Robail Yasrab, Virtual University of Pakistan, Pakistan
Mr. R. Balu, Bharathiar University, Coimbatore, India
Prof. Anand Nayyar, KCL Institute of Management and Technology, Jalandhar
Assoc. Prof. Vivek S Deshpande, MIT College of Engineering, India
Prof. K. Saravanan, Anna university Coimbatore, India
Dr. Ravendra Singh, MJP Rohilkhand University, Bareilly, India
Mr. V. Mathivanan, IBRA College of Technology, Sultanate of OMAN
Assoc. Prof. S. Asif Hussain, AITS, India
Assist. Prof. C. Venkatesh, AITS, India
Mr. Sami Ulhaq, SZABIST Islamabad, Pakistan
Dr. B. Justus Rabi, Institute of Science & Technology, India
Mr. Anuj Kumar Yadav, Dehradun Institute of technology, India
Mr. Alejandro Mosquera, University of Alicante, Spain
Assist. Prof. Arjun Singh, Sir Padampat Singhania University (SPSU), Udaipur, India
Dr. Smriti Agrawal, JB Institute of Engineering and Technology, Hyderabad
Assist. Prof. Swathi Sambangi, Visakha Institute of Engineering and Technology, India
Ms. Prabhjot Kaur, Guru Gobind Singh Indraprastha University, India
Mrs. Samaher AL-Hothali, Yanbu University College, Saudi Arabia
Prof. Rajneeshkaur Bedi, MIT College of Engineering, Pune, India
Mr. Hassen Mohammed Abduallah Alsafi, International Islamic University Malaysia (IIUM)
Dr. Wei Zhang, Amazon.com, Seattle, WA, USA
Mr. B. Santhosh Kumar, C S I College of Engineering, Tamil Nadu
Dr. K. Reji Kumar, N S S College, Pandalam, India

Assoc. Prof. K. Seshadri Sastry, EIILM University, India
Mr. Kai Pan, UNC Charlotte, USA
Mr. Ruikar Sachin, SGGSIET, India
Prof. (Dr.) Vinodani Katiyar, Sri Ramswaroop Memorial University, India
Assoc. Prof. M. Giri, Sreenivasa Institute of Technology and Management Studies, India
Assoc. Prof. Labib Francis Gergis, Misr Academy for Engineering and Technology (MET), Egypt
Assist. Prof. Amanpreet Kaur, ITM University, India
Assist. Prof. Anand Singh Rajawat, Shri Vaishnav Institute of Technology & Science, Indore
Mrs. Hadeel Saleh Haj Aliwi, Universiti Sains Malaysia (USM), Malaysia
Dr. Abhay Bansal, Amity University, India
Dr. Mohammad A. Mezher, Fahad Bin Sultan University, KSA
Assist. Prof. Nidhi Arora, M.C.A. Institute, India
Prof. Dr. P. Suresh, Karpagam College of Engineering, Coimbatore, India
Dr. Kannan Balasubramanian, Mepco Schlenk Engineering College, India
Dr. S. Sankara Gomathi, Panimalar Engineering college, India
Prof. Anil kumar Suthar, Gujarat Technological University, L.C. Institute of Technology, India
Assist. Prof. R. Hubert Rajan, NOORUL ISLAM UNIVERSITY, India
Assist. Prof. Dr. Jyoti Mahajan, College of Engineering & Technology
Assist. Prof. Homam Reda El-Taj, College of Network Engineering, Saudi Arabia & Malaysia
Mr. Bijan Paul, Shahjalal University of Science & Technology, Bangladesh
Assoc. Prof. Dr. Ch V Phani Krishna, KL University, India
Dr. Vishal Bhatnagar, Ambedkar Institute of Advanced Communication Technologies & Research, India
Dr. Lamri LAOUAMER, Al Qassim University, Dept. Info. Systems & European University of Brittany, Dept. Computer
Science, UBO, Brest, France
Prof. Ashish Babanrao Sasankar, G.H.Raisoni Institute Of Information Technology, India
Prof. Pawan Kumar Goel, Shamli Institute of Engineering and Technology, India
Mr. Ram Kumar Singh, S.V Subharti University, India
Assistant Prof. Sunish Kumar O S, Amaljyothi College of Engineering, India
Dr Sanjay Bhargava, Banasthali University, India
Mr. Pankaj S. Kulkarni, AVEW's Shatabdi Institute of Technology, India
Mr. Roohollah Etemadi, Islamic Azad University, Iran
Mr. Oloruntoyin Sefiu Taiwo, Emmanuel Alayande College Of Education, Nigeria
Mr. Sumit Goyal, National Dairy Research Institute, India
Mr Jaswinder Singh Dilawari, Geeta Engineering College, India
Prof. Raghuraj Singh, Harcourt Butler Technological Institute, Kanpur
Dr. S.K. Mahendran, Anna University, Chennai, India
Dr. Amit Wason, Hindustan Institute of Technology & Management, Punjab
Dr. Ashu Gupta, Apeejay Institute of Management, India
Assist. Prof. D. Asir Antony Gnana Singh, M.I.E.T Engineering College, India
Mrs Mina Farmanbar, Eastern Mediterranean University, Famagusta, North Cyprus
Mr. Maram Balajee, GMR Institute of Technology, India
Mr. Moiz S. Ansari, Isra University, Hyderabad, Pakistan
Mr. Adebayo, Olawale Surajudeen, Federal University of Technology Minna, Nigeria
Mr. Jasvir Singh, University College Of Engg., India
Mr. Vivek Tiwari, MANIT, Bhopal, India
Assoc. Prof. R. Navaneethakrishnan, Bharathiyar College of Engineering and Technology, India
Mr. Somdip Dey, St. Xavier's College, Kolkata, India


Mr. Souleymane Balla-Arabé, Xi’an University of Electronic Science and Technology, China
Mr. Mahabub Alam, Rajshahi University of Engineering and Technology, Bangladesh
Mr. Sathyapraksh P., S.K.P Engineering College, India
Dr. N. Karthikeyan, SNS College of Engineering, Anna University, India
Dr. Binod Kumar, JSPM's, Jayawant Technical Campus, Pune, India
Assoc. Prof. Dinesh Goyal, Suresh Gyan Vihar University, India
Mr. Md. Abdul Ahad, K L University, India
Mr. Vikas Bajpai, The LNM IIT, India
Dr. Manish Kumar Anand, Salesforce (R & D Analytics), San Francisco, USA
Assist. Prof. Dheeraj Murari, Kumaon Engineering College, India
Assoc. Prof. Dr. A. Muthukumaravel, VELS University, Chennai
Mr. A. Siles Balasingh, St.Joseph University in Tanzania, Tanzania
Mr. Ravindra Daga Badgujar, R C Patel Institute of Technology, India
Dr. Preeti Khanna, SVKM’s NMIMS, School of Business Management, India
Mr. Kumar Dayanand, Cambridge Institute of Technology, India
Dr. Syed Asif Ali, SMI University Karachi, Pakistan
Prof. Pallvi Pandit, Himachal Pradeh University, India
Mr. Ricardo Verschueren, University of Gloucestershire, UK
Assist. Prof. Mamta Juneja, University Institute of Engineering and Technology, Panjab University, India
Assoc. Prof. P. Surendra Varma, NRI Institute of Technology, JNTU Kakinada, India
Assist. Prof. Gaurav Shrivastava, RGPV / SVITS Indore, India
Dr. S. Sumathi, Anna University, India
Assist. Prof. Ankita M. Kapadia, Charotar University of Science and Technology, India
Mr. Deepak Kumar, Indian Institute of Technology (BHU), India
Dr. Rajan Gupta, GGSIP University, New Delhi, India
Assist. Prof M. Anand Kumar, Karpagam University, Coimbatore, India
Mr. Arshad Mansoor, Pakistan Aeronautical Complex
Mr. Kapil Kumar Gupta, Ansal Institute of Technology and Management, India
Dr. Neeraj Tomer, SINE International Institute of Technology, Jaipur, India
Assist. Prof. Trunal J. Patel, C.G.Patel Institute of Technology, Uka Tarsadia University, Bardoli, Surat
Mr. Sivakumar, Codework solutions, India
Mr. Mohammad Sadegh Mirzaei, PGNR Company, Iran
Dr. Gerard G. Dumancas, Oklahoma Medical Research Foundation, USA
Mr. Varadala Sridhar, Varadhaman College Engineering College, Affiliated To JNTU, Hyderabad
Assist. Prof. Manoj Dhawan, SVITS, Indore
Assoc. Prof. Chitreshh Banerjee, Suresh Gyan Vihar University, Jaipur, India
Dr. S. Santhi, SCSVMV University, India
Mr. Davood Mohammadi Souran, Ministry of Energy of Iran, Iran
Mr. Shamim Ahmed, Bangladesh University of Business and Technology, Bangladesh
Mr. Sandeep Reddivari, Mississippi State University, USA
Assoc. Prof. Ousmane Thiare, Gaston Berger University, Senegal
Dr. Hazra Imran, Athabasca University, Canada
Dr. Setu Kumar Chaturvedi, Technocrats Institute of Technology, Bhopal, India
Mr. Mohd Dilshad Ansari, Jaypee University of Information Technology, India
Ms. Jaspreet Kaur, Distance Education LPU, India
Dr. D. Nagarajan, Salalah College of Technology, Sultanate of Oman
Dr. K.V.N.R.Sai Krishna, S.V.R.M. College, India


Mr. Himanshu Pareek, Center for Development of Advanced Computing (CDAC), India
Mr. Khaldi Amine, Badji Mokhtar University, Algeria
Mr. Mohammad Sadegh Mirzaei, Scientific Applied University, Iran
Assist. Prof. Khyati Chaudhary, Ram-eesh Institute of Engg. & Technology, India
Mr. Sanjay Agal, Pacific College of Engineering Udaipur, India
Mr. Abdul Mateen Ansari, King Khalid University, Saudi Arabia
Dr. H.S. Behera, Veer Surendra Sai University of Technology (VSSUT), India
Dr. Shrikant Tiwari, Shri Shankaracharya Group of Institutions (SSGI), India
Prof. Ganesh B. Regulwar, Shri Shankarprasad Agnihotri College of Engg, India
Prof. Pinnamaneni Bhanu Prasad, Matrix vision GmbH, Germany
Dr. Shrikant Tiwari, Shri Shankaracharya Technical Campus (SSTC), India
Dr. Siddesh G. K., Dayananda Sagar College of Engineering, Bangalore, India
Dr. Nadir Bouchama, CERIST Research Center, Algeria
Dr. R. Sathishkumar, Sri Venkateswara College of Engineering, India
Assistant Prof (Dr.) Mohamed Moussaoui, Abdelmalek Essaadi University, Morocco
Dr. S. Malathi, Panimalar Engineering College, Chennai, India
Dr. V. Subedha, Panimalar Institute of Technology, Chennai, India
Dr. Prashant Panse, Swami Vivekanand College of Engineering, Indore, India
Dr. Hamza Aldabbas, Al-Balqa’a Applied University, Jordan
Dr. G. Rasitha Banu, Vel's University, Chennai
Dr. V. D. Ambeth Kumar, Panimalar Engineering College, Chennai
Prof. Anuranjan Misra, Bhagwant Institute of Technology, Ghaziabad, India
Ms. U. Sinthuja, PSG college of arts &science, India
Dr. Ehsan Saradar Torshizi, Urmia University, Iran
Dr. Shamneesh Sharma, APG Shimla University, Shimla (H.P.), India
Assistant Prof. A. S. Syed Navaz, Muthayammal College of Arts & Science, India
Assistant Prof. Ranjit Panigrahi, Sikkim Manipal Institute of Technology, Majitar, Sikkim
Dr. Khaled Eskaf, Arab Academy for Science ,Technology & Maritime Transportation, Egypt
Dr. Nishant Gupta, University of Jammu, India
Assistant Prof. Nagarajan Sankaran, Annamalai University, Chidambaram, Tamilnadu, India
Assistant Prof.Tribikram Pradhan, Manipal Institute of Technology, India
Dr. Nasser Lotfi, Eastern Mediterranean University, Northern Cyprus
Dr. R. Manavalan, K S Rangasamy college of Arts and Science, Tamilnadu, India
Assistant Prof. P. Krishna Sankar, K S Rangasamy college of Arts and Science, Tamilnadu, India
Dr. Rahul Malik, Cisco Systems, USA
Dr. S. C. Lingareddy, ALPHA College of Engineering, India
Assistant Prof. Mohammed Shuaib, Integral University, Lucknow, India
Dr. Sachin Yele, Sanghvi Institute of Management & Science, India
Dr. T. Thambidurai, Sun Univercell, Singapore
Prof. Anandkumar Telang, BKIT, India
Assistant Prof. R. Poorvadevi, SCSVMV University, India
Dr Uttam Mande, Gitam University, India
Dr. Poornima Girish Naik, Shahu Institute of Business Education and Research (SIBER), India
Prof. Md. Abu Kausar, Jaipur National University, Jaipur, India
Dr. Mohammed Zuber, AISECT University, India
Prof. Kalum Priyanath Udagepola, King Abdulaziz University, Saudi Arabia
Dr. K. R. Ananth, Velalar College of Engineering and Technology, India


Assistant Prof. Sanjay Sharma, Roorkee Engineering & Management Institute Shamli (U.P), India
Assistant Prof. Panem Charan Arur, Priyadarshini Institute of Technology, India
Dr. Ashwak Mahmood muhsen alabaichi, Karbala University / College of Science, Iraq
Dr. Urmila Shrawankar, G H Raisoni College of Engineering, Nagpur (MS), India
Dr. Krishan Kumar Paliwal, Panipat Institute of Engineering & Technology, India
Dr. Mukesh Negi, Tech Mahindra, India
Dr. Anuj Kumar Singh, Amity University Gurgaon, India
Dr. Babar Shah, Gyeongsang National University, South Korea
Assistant Prof. Jayprakash Upadhyay, SRI-TECH Jabalpur, India
Assistant Prof. Varadala Sridhar, Vidya Jyothi Institute of Technology, India
Assistant Prof. Parameshachari B D, KSIT, Bangalore, India
Assistant Prof. Ankit Garg, Amity University, Haryana, India
Assistant Prof. Rajashe Karappa, SDMCET, Karnataka, India
Assistant Prof. Varun Jasuja, GNIT, India
Assistant Prof. Sonal Honale, Abha Gaikwad Patil College of Engineering Nagpur, India
Dr. Pooja Choudhary, CT Group of Institutions, NIT Jalandhar, India
Dr. Faouzi Hidoussi, UHL Batna, Algeria
Dr. Naseer Ali Husieen, Wasit University, Iraq
Assistant Prof. Vinod Kumar Shukla, Amity University, Dubai
Dr. Ahmed Farouk Metwaly, K L University
Mr. Mohammed Noaman Murad, Cihan University, Iraq
Dr. Suxing Liu, Arkansas State University, USA
Dr. M. Gomathi, Velalar College of Engineering and Technology, India
Assistant Prof. Sumardiono, College PGRI Blitar, Indonesia
Dr. Latika Kharb, Jagan Institute of Management Studies (JIMS), Delhi, India
Associate Prof. S. Raja, Pauls College of Engineering and Technology, Tamilnadu, India
Assistant Prof. Seyed Reza Pakize, Shahid Sani High School, Iran
Dr. Thiyagu Nagaraj, University-INOU, India
Assistant Prof. Noreen Sarai, Harare Institute of Technology, Zimbabwe
Assistant Prof. Gajanand Sharma, Suresh Gyan Vihar University Jaipur, Rajasthan, India
Assistant Prof. Mapari Vikas Prakash, Siddhant COE, Sudumbare, Pune, India
Dr. Devesh Katiyar, Shri Ramswaroop Memorial University, India
Dr. Shenshen Liang, University of California, Santa Cruz, US
Assistant Prof. Mohammad Abu Omar, Limkokwing University of Creative Technology- Malaysia
Mr. Snehasis Banerjee, Tata Consultancy Services, India
Assistant Prof. Kibona Lusekelo, Ruaha Catholic University (RUCU), Tanzania
Assistant Prof. Adib Kabir Chowdhury, University College Technology Sarawak, Malaysia
Dr. Ying Yang, Computer Science Department, Yale University, USA
Dr. Vinay Shukla, Institute Of Technology & Management, India
Dr. Liviu Octavian Mafteiu-Scai, West University of Timisoara, Romania
Assistant Prof. Rana Khudhair Abbas Ahmed, Al-Rafidain University College, Iraq
Assistant Prof. Nitin A. Naik, S.R.T.M. University, India
Dr. Timothy Powers, University of Hertfordshire, UK
Dr. S. Prasath, Bharathiar University, Erode, India
Dr. Ritu Shrivastava, SIRTS Bhopal, India
Prof. Rohit Shrivastava, Mittal Institute of Technology, Bhopal, India
Dr. Gianina Mihai, "Dunarea de Jos" University of Galati, Romania


Assistant Prof. Ms. T. Kalai Selvi, Erode Sengunthar Engineering College, India
Assistant Prof. Ms. C. Kavitha, Erode Sengunthar Engineering College, India
Assistant Prof. K. Sinivasamoorthi, Erode Sengunthar Engineering College, India
Assistant Prof. Mallikarjun C Sarsamba Bheemnna Khandre Institute Technology, Bhalki, India
Assistant Prof. Vishwanath Chikaraddi, Veermata Jijabai technological Institute (Central Technological Institute), India
Assistant Prof. Dr. Ikvinderpal Singh, Trai Shatabdi GGS Khalsa College, India
Assistant Prof. Mohammed Noaman Murad, Cihan University, Iraq
Professor Yousef Farhaoui, Moulay Ismail University, Errachidia, Morocco
Dr. Parul Verma, Amity University, India
Assistant Prof. Madhavi Dhingra, Amity University, Madhya Pradesh, India
Assistant Prof.. G. Selvavinayagam, SNS College of Technology, Coimbatore, India
Assistant Prof. Madhavi Dhingra, Amity University, MP, India
Professor Kartheesan Log, Anna University, Chennai
Professor Vasudeva Acharya, Shri Madhwa vadiraja Institute of Technology, India
Dr. Asif Iqbal Hajamydeen, Management & Science University, Malaysia
Assistant Prof. Mahendra Singh Meena, Amity University Haryana
Assistant Professor Manjeet Kaur, Amity University Haryana
Dr. Mohamed Abd El-Basset Matwalli, Zagazig University, Egypt
Dr. Ramani Kannan, Universiti Teknologi PETRONAS, Malaysia
Assistant Prof. S. Jagadeesan Subramaniam, Anna University, India
Assistant Prof. Dharmendra Choudhary, Tripura University, India
Assistant Prof. Deepika Vodnala, SR Engineering College, India
Dr. Kai Cong, Intel Corporation & Computer Science Department, Portland State University, USA
Dr. Kailas R Patil, Vishwakarma Institute of Information Technology (VIIT), India
Dr. Omar A. Alzubi, Faculty of IT / Al-Balqa Applied University, Jordan
Assistant Prof. Kareemullah Shaik, Nimra Institute of Science and Technology, India
Assistant Prof. Chirag Modi, NIT Goa
Dr. R. Ramkumar, Nandha Arts And Science College, India
Dr. Priyadharshini Vydhialingam, Harathiar University, India
Dr. P. S. Jagadeesh Kumar, DBIT, Bangalore, Karnataka
Dr. Vikas Thada, AMITY University, Pachgaon
Dr. T. A. Ashok Kumar, Institute of Management, Christ University, Bangalore
Dr. Shaheera Rashwan, Informatics Research Institute
Dr. S. Preetha Gunasekar, Bharathiyar University, India
Asst Professor Sameer Dev Sharma, Uttaranchal University, Dehradun
Dr. Zhihan lv, Chinese Academy of Science, China
Dr. Ikvinderpal Singh, Trai Shatabdi GGS Khalsa College, Amritsar
Dr. Umar Ruhi, University of Ottawa, Canada
Dr. Jasmin Cosic, University of Bihac, Bosnia and Herzegovina
Dr. Homam Reda El-Taj, University of Tabuk, Kingdom of Saudi Arabia
Dr. Mostafa Ghobaei Arani, Islamic Azad University, Iran
Dr. Ayyasamy Ayyanar, Annamalai University, India
Dr. Selvakumar Manickam, Universiti Sains Malaysia, Malaysia
Dr. Murali Krishna Namana, GITAM University, India
Dr. Smriti Agrawal, Chaitanya Bharathi Institute of Technology, Hyderabad, India
Professor Vimalathithan Rathinasabapathy, Karpagam College Of Engineering, India

Dr. Sushil Chandra Dimri, Graphic Era University, India
Dr. Dinh-Sinh Mai, Le Quy Don Technical University, Vietnam
Dr. S. Rama Sree, Aditya Engg. College, India
Dr. Ehab T. Alnfrawy, Sadat Academy, Egypt
Dr. Patrick D. Cerna, Haramaya University, Ethiopia
Dr. Vishal Jain, Bharati Vidyapeeth's Institute of Computer Applications and Management (BVICAM), India
Associate Prof. Dr. Jiliang Zhang, North Eastern University, China
Dr. Sharefa Murad, Middle East University, Jordan
Dr. Ajeet Singh Poonia, Govt. College of Engineering & technology, Rajasthan, India
Dr. Vahid Esmaeelzadeh, University of Science and Technology, Iran
Dr. Jacek M. Czerniak, Casimir the Great University in Bydgoszcz, Institute of Technology, Poland
Associate Prof. Anisur Rehman Nasir, Jamia Millia Islamia University
Assistant Prof. Imran Ahmad, COMSATS Institute of Information Technology, Pakistan
Professor Ghulam Qasim, Preston University, Islamabad, Pakistan
Dr. Parameshachari B D, GSSS Institute of Engineering and Technology for Women
Dr. Wencan Luo, University of Pittsburgh, US
Dr. Musa PEKER, Faculty of Technology, Mugla Sitki Kocman University, Turkey
Dr. Gunasekaran Shanmugam, Anna University, India
Dr. Binh P. Nguyen, National University of Singapore, Singapore
Dr. Rajkumar Jain, Indian Institute of Technology Indore, India
Dr. Imtiaz Ali Halepoto, QUEST Nawabshah, Pakistan
Dr. Shaligram Prajapat, Devi Ahilya University, Indore, India
Dr. Sunita Singhal, Birla Institute of Technology and Science, Pilani, India
Dr. Ijaz Ali Shoukat, King Saud University, Saudi Arabia
Dr. Anuj Gupta, IKG Punjab Technical University, India
Dr. Sonali Saini, IES-IPS Academy, India
Dr. Krishan Kumar, MotiLal Nehru National Institute of Technology, Allahabad, India
Dr. Z. Faizal Khan, College of Engineering, Shaqra University, Kingdom of Saudi Arabia
Prof. M. Padmavathamma, S.V. University Tirupati, India
Prof. A. Velayudham, Cape Institute of Technology, India
Prof. Seifedine Kadry, American University of the Middle East
Dr. J. Durga Prasad Rao, Pt. Ravishankar Shukla University, Raipur
Assistant Prof. Najam Hasan, Dhofar University
Dr. G. Suseendran, Vels University, Pallavaram, Chennai
Prof. Ankit Faldu, Gujarat Technological University - Atmiya Institute of Technology and Science
Dr. Ali Habiboghli, Islamic Azad University
Dr. Deepak Dembla, JECRC University, Jaipur, India
Dr. Pankaj Rajan, Walmart Labs, USA
Assistant Prof. Radoslava Kraleva, South-West University "Neofit Rilski", Bulgaria
Assistant Prof. Medhavi Shriwas, Shri vaishnav institute of Technology, India
Associate Prof. Sedat Akleylek, Ondokuz Mayis University, Turkey
Dr. U.V. Arivazhagu, Kingston Engineering College Affiliated To Anna University, India
Dr. Touseef Ali, University of Engineering and Technology, Taxila, Pakistan
Assistant Prof. Naren Jeeva, SASTRA University, India
Dr. Riccardo Colella, University of Salento, Italy
Dr. Enache Maria Cristina, University of Galati, Romania
Dr. Senthil P, Kurinji College of Arts & Science, India

Dr. Hasan Ashrafi-rizi, Isfahan University of Medical Sciences, Isfahan, Iran
Dr. Mazhar Malik, Institute of Southern Punjab, Pakistan
Dr. Yajie Miao, Carnegie Mellon University, USA
Dr. Kamran Shaukat, University of the Punjab, Pakistan
Dr. Sasikaladevi N., SASTRA University, India
Dr. Ali Asghar Rahmani Hosseinabadi, Islamic Azad University Ayatollah Amoli Branch, Amol, Iran
Dr. Velin Kralev, South-West University "Neofit Rilski", Blagoevgrad, Bulgaria
Dr. Marius Iulian Mihailescu, LUMINA - The University of South-East Europe
Dr. Sriramula Nagaprasad, S.R.R. Govt. Arts & Science College, Karimnagar, India
Prof. (Dr.) Namrata Dhanda, Dr. APJ Abdul Kalam Technical University, Lucknow, India
Dr. Javed Ahmed Mahar, Shah Abdul Latif University, Khairpur Mir’s, Pakistan
Dr. B. Narendra Kumar Rao, Sree Vidyanikethan Engineering College, India
Dr. Shahzad Anwar, University of Engineering & Technology Peshawar, Pakistan
Dr. Basit Shahzad, King Saud University, Riyadh - Saudi Arabia
Dr. Nilamadhab Mishra, Chang Gung University
Dr. Sachin Kumar, Indian Institute of Technology Roorkee
Dr. Santosh Nanda, Biju Patnaik University of Technology
Dr. Sherzod Turaev, International Islamic University Malaysia
Dr. Yilun Shang, Tongji University, Department of Mathematics, Shanghai, China
Dr. Nuzhat Shaikh, Modern Education society's College of Engineering, Pune, India
Dr. Parul Verma, Amity University, Lucknow campus, India
Dr. Rachid Alaoui, Agadir Ibn Zohr University, Agadir, Morocco
Dr. Dharmendra Patel, Charotar University of Science and Technology, India
Dr. Dong Zhang, University of Central Florida, USA
Dr. Kennedy Chinedu Okafor, Federal University of Technology Owerri, Nigeria
Prof. C Ram Kumar, Dr NGP Institute of Technology, India
Dr. Sandeep Gupta, GGS IP University, New Delhi, India
Dr. Shahanawaj Ahamad, University of Ha'il, Ha'il City, Ministry of Higher Education, Kingdom of Saudi Arabia
Dr. Najeed Ahmed Khan, NED University of Engineering & Technology, Pakistan
Dr. Sajid Ullah Khan, Universiti Malaysia Sarawak, Malaysia
Dr. Muhammad Asif, National Textile University Faisalabad, Pakistan
Dr. Yu BI, University of Central Florida, Orlando, FL, USA
Dr. Brijendra Kumar Joshi, Research Center, Military College of Telecommunication Engineering, India

CALL FOR PAPERS
International Journal of Computer Science and Information Security

IJCSIS 2016
ISSN: 1947-5500
http://sites.google.com/site/ijcsis/
The International Journal of Computer Science and Information Security (IJCSIS) is the premier
scholarly venue in the areas of computer science and information security. IJCSIS 2016 provides a high-
profile, leading-edge platform for researchers and engineers alike to publish state-of-the-art research in the
respective fields of information technology and communication security. The journal features a diverse
mixture of articles, including both core and applied computer science topics.

Authors are solicited to contribute to the special issue by submitting articles that illustrate research results,
projects, survey works and industrial experiences describing significant advances in the following
areas, though submissions are not limited to them. Submissions may span a broad range of topics, e.g.:

Track A: Security

Access control, Anonymity, Audit and audit reduction & Authentication and authorization, Applied
cryptography, Cryptanalysis, Digital Signatures, Biometric security, Boundary control devices,
Certification and accreditation, Cross-layer design for security, Security & Network Management, Data and
system integrity, Database security, Defensive information warfare, Denial of service protection, Intrusion
Detection, Anti-malware, Distributed systems security, Electronic commerce, E-mail security, Spam,
Phishing, E-mail fraud, Viruses, worms, Trojan protection, Grid security, Information hiding and
watermarking & Information survivability, Insider threat protection, Integrity,
Intellectual property protection, Internet/Intranet Security, Key management and key recovery, Language-
based security, Mobile and wireless security, Mobile, Ad Hoc and Sensor Network Security, Monitoring
and surveillance, Multimedia security, Operating system security, Peer-to-peer security, Performance
Evaluations of Protocols & Security Application, Privacy and data protection, Product evaluation criteria
and compliance, Risk evaluation and security certification, Risk/vulnerability assessment, Security &
Network Management, Security Models & protocols, Security threats & countermeasures (DDoS, MiM,
Session Hijacking, Replay attack, etc.), Trusted computing, Ubiquitous Computing Security, Virtualization
security, VoIP security, Web 2.0 security, Active Defense Systems, Adaptive
Defense Systems, Benchmark, Analysis and Evaluation of Security Systems, Distributed Access Control
and Trust Management, Distributed Attack Systems and Mechanisms, Distributed Intrusion
Detection/Prevention Systems, Denial-of-Service Attacks and Countermeasures, High Performance
Security Systems, Identity Management and Authentication, Implementation, Deployment and
Management of Security Systems, Intelligent Defense Systems, Internet and Network Forensics, Large-
scale Attacks and Defense, RFID Security and Privacy, Security Architectures in Distributed Network
Systems, Security for Critical Infrastructures, Security for P2P systems and Grid Systems, Security in E-
Commerce, Security and Privacy in Wireless Networks, Secure Mobile Agents and Mobile Code, Security
Protocols, Security Simulation and Tools, Security Theory and Tools, Standards and Assurance Methods,
Trusted Computing, Viruses, Worms, and Other Malicious Code, World Wide Web Security, Novel and
emerging secure architecture, Study of attack strategies, attack modeling, Case studies and analysis of
actual attacks, Continuity of Operations during an attack, Key management, Trust management, Intrusion
detection techniques, Intrusion response, alarm management, and correlation analysis, Study of tradeoffs
between security and system performance, Intrusion tolerance systems, Secure protocols, Security in
wireless networks (e.g. mesh networks, sensor networks, etc.), Cryptography and Secure Communications,
Computer Forensics, Recovery and Healing, Security Visualization, Formal Methods in Security, Principles
for Designing a Secure Computing System, Autonomic Security, Internet Security, Security in Health Care
Systems, Security Solutions Using Reconfigurable Computing, Adaptive and Intelligent Defense Systems,
Authentication and Access control, Denial of service attacks and countermeasures, Identity, Route and
Location Anonymity schemes, Intrusion detection and prevention techniques, Cryptography, encryption
algorithms and Key management schemes, Secure routing schemes, Secure neighbor discovery and
localization, Trust establishment and maintenance, Confidentiality and data integrity, Security architectures,
deployments and solutions, Emerging threats to cloud-based services, Security model for new services,
Cloud-aware web service security, Information hiding in Cloud Computing, Securing distributed data
storage in cloud, Security, privacy and trust in mobile computing systems and applications, Middleware
security & Security features: middleware software is an asset on
its own and has to be protected, interaction between security-specific and other middleware features, e.g.,
context-awareness, Middleware-level security monitoring and measurement: metrics and mechanisms
for quantification and evaluation of security enforced by the middleware, Security co-design: trade-off and
co-design between application-based and middleware-based security, Policy-based management:
innovative support for policy-based definition and enforcement of security concerns, Identification and
authentication mechanisms: Means to capture application specific constraints in defining and enforcing
access control rules, Middleware-oriented security patterns: identification of patterns for sound, reusable
security, Security in aspect-based middleware: mechanisms for isolating and enforcing security aspects,
Security in agent-based platforms: protection for mobile code and platforms, Smart Devices: Biometrics,
National ID cards, Embedded Systems Security and TPMs, RFID Systems Security, Smart Card Security,
Pervasive Systems: Digital Rights Management (DRM) in pervasive environments, Intrusion Detection and
Information Filtering, Localization Systems Security (Tracking of People and Goods), Mobile Commerce
Security, Privacy Enhancing Technologies, Security Protocols (for Identification and Authentication,
Confidentiality and Privacy, and Integrity), Ubiquitous Networks: Ad Hoc Networks Security, Delay-
Tolerant Network Security, Domestic Network Security, Peer-to-Peer Networks Security, Security Issues
in Mobile and Ubiquitous Networks, Security of GSM/GPRS/UMTS Systems, Sensor Networks Security,
Vehicular Network Security, Wireless Communication Security: Bluetooth, NFC, WiFi, WiMAX,
WiMedia, others

This track will emphasize the design, implementation, management and applications of computer
communications, networks and services. Topics of a mostly theoretical nature are also welcome, provided
there is clear practical potential in applying the results of such work.

Track B: Computer Science

Broadband wireless technologies: LTE, WiMAX, WiRAN, HSDPA, HSUPA, Resource allocation and
interference management, Quality of service and scheduling methods, Capacity planning and dimensioning,
Cross-layer design and Physical layer based issue, Interworking architecture and interoperability, Relay
assisted and cooperative communications, Location, provisioning and mobility management, Call
admission and flow/congestion control, Performance optimization, Channel capacity modeling and analysis,
Middleware Issues: Event-based, publish/subscribe, and message-oriented middleware, Reconfigurable,
adaptable, and reflective middleware approaches, Middleware solutions for reliability, fault tolerance, and
quality-of-service, Scalability of middleware, Context-aware middleware, Autonomic and self-managing
middleware, Evaluation techniques for middleware solutions, Formal methods and tools for designing,
verifying, and evaluating middleware, Software engineering techniques for middleware, Service-oriented
middleware, Agent-based middleware, Security middleware, Network Applications: Network-based
automation, Cloud applications, Ubiquitous and pervasive applications, Collaborative applications, RFID
and sensor network applications, Mobile applications, Smart home applications, Infrastructure monitoring
and control applications, Remote health monitoring, GPS and location-based applications, Networked
vehicles applications, Alert applications, Embedded Computer Systems, Advanced Control Systems, and
Intelligent Control: Advanced control and measurement, computer and microprocessor-based control,
signal processing, estimation and identification techniques, application-specific ICs, nonlinear and
adaptive control, optimal and robust control, intelligent control, evolutionary computing, and intelligent
systems, instrumentation subject to critical conditions, automotive, marine and aero-space control and all
other control applications, Intelligent Control System, Wiring/Wireless Sensor, Signal Control System.
Sensors, Actuators and Systems Integration: Intelligent sensors and actuators, multisensor fusion, sensor
array and multi-channel processing, micro/nano technology, microsensors and microactuators,
instrumentation electronics, MEMS and system integration, wireless sensor, Network Sensor, Hybrid
Sensor, Distributed Sensor Networks. Signal and Image Processing: Digital signal processing theory,
methods, DSP implementation, speech processing, image and multidimensional signal processing, Image
analysis and processing, Image and Multimedia applications, Real-time multimedia signal processing,
Computer vision, Emerging signal processing areas, Remote Sensing, Signal processing in education.
Industrial Informatics: Industrial applications of neural networks, fuzzy algorithms, Neuro-Fuzzy
application, bioinformatics, real-time computer control, real-time information systems, human-machine
interfaces, CAD/CAM/CAT/CIM, virtual reality, industrial communications, flexible manufacturing
systems, industrial automated processes, Data Storage Management, Hard disk control, Supply Chain
Management, Logistics applications, Power plant automation, Drives automation. Information Technology,
Management of Information System: Management information systems, Information Management,
Nursing information management, Information System, Information Technology and their application, Data
retrieval, Data Base Management, Decision analysis methods, Information processing, Operations research,
E-Business, E-Commerce, E-Government, Computer Business, Security and risk management, Medical
imaging, Biotechnology, Bio-Medicine, Computer-based information systems in health care, Changing
Access to Patient Information, Healthcare Management Information Technology.
Communication/Computer Network, Transportation Application: On-board diagnostics, Active safety
systems, Communication systems, Wireless technology, Communication application, Navigation and
Guidance, Vision-based applications, Speech interface, Sensor fusion, Networking theory and technologies,
Transportation information, Autonomous vehicle, Vehicle application of affective computing, Advanced
Computing technology and their applications: Broadband and intelligent networks, Data Mining, Data
fusion, Computational intelligence, Information and data security, Information indexing and retrieval,
Information processing, Information systems and applications, Internet applications and performances,
Knowledge based systems, Knowledge management, Software Engineering, Decision making, Mobile
networks and services, Network management and services, Neural Network, Fuzzy logics, Neuro-Fuzzy,
Expert approaches, Innovation Technology and Management: Innovation and product development,
Emerging advances in business and its applications, Creativity in Internet management and retailing, B2B
and B2C management, Electronic transceiver device for Retail Marketing Industries, Facilities planning
and management, Innovative pervasive computing applications, Programming paradigms for pervasive
systems, Software evolution and maintenance in pervasive systems, Middleware services and agent
technologies, Adaptive, autonomic and context-aware computing, Mobile/Wireless computing systems and
services in pervasive computing, Energy-efficient and green pervasive computing, Communication
architectures for pervasive computing, Ad hoc networks for pervasive communications, Pervasive
opportunistic communications and applications, Enabling technologies for pervasive systems (e.g., wireless
BAN, PAN), Positioning and tracking technologies, Sensors and RFID in pervasive systems, Multimodal
sensing and context for pervasive applications, Pervasive sensing, perception and semantic interpretation,
Smart devices and intelligent environments, Trust, security and privacy issues in pervasive systems, User
interfaces and interaction models, Virtual immersive communications, Wearable computers, Standards and
interfaces for pervasive computing environments, Social and economic models for pervasive systems,
Active and Programmable Networks, Ad Hoc & Sensor Network, Congestion and/or Flow Control, Content
Distribution, Grid Networking, High-speed Network Architectures, Internet Services and Applications,
Optical Networks, Mobile and Wireless Networks, Network Modeling and Simulation, Multicast,
Multimedia Communications, Network Control and Management, Network Protocols, Network
Performance, Network Measurement, Peer to Peer and Overlay Networks, Quality of Service and Quality
of Experience, Ubiquitous Networks, Crosscutting Themes – Internet Technologies, Infrastructure,
Services and Applications; Open Source Tools, Open Models and Architectures; Security, Privacy and
Trust; Navigation Systems, Location Based Services; Social Networks and Online Communities; ICT
Convergence, Digital Economy and Digital Divide, Neural Networks, Pattern Recognition, Computer
Vision, Advanced Computing Architectures and New Programming Models, Visualization and Virtual
Reality as Applied to Computational Science, Computer Architecture and Embedded Systems, Technology
in Education, Theoretical Computer Science, Computing Ethics, Computing Practices & Applications

Authors are invited to submit papers via e-mail to ijcsiseditor@gmail.com. Submissions must be original
and must not have been published previously or be under consideration for publication while being
evaluated by IJCSIS. Before submission, authors should carefully read the journal's Author Guidelines,
which are located at http://sites.google.com/site/ijcsis/authors-notes.
© IJCSIS PUBLICATION 2016
ISSN 1947-5500
http://sites.google.com/site/ijcsis/
