
NANOTECHNOLOGY 2

EDITORS
Prof. Dr. Mustafa ERSÖZ
Dr. Mine SULAK
Dr. Massimo BERSANI
Dr. Arzum IŞITAN
Meltem BALABAN
Dr. Zeha YAKAR
Dr. Cumhur Gökhan ÜNLÜ
Dr. Volkan ONAR

Denizli 2018

~1~
NANOTECHNOLOGY 2
EDITORS
Prof. Dr. Mustafa ERSÖZ
Dr. Mine SULAK
Dr. Massimo BERSANI

Dr. Arzum IŞITAN


Meltem BALABAN
Dr. Zeha YAKAR
Dr. Cumhur Gökhan ÜNLÜ
Dr. Volkan ONAR
(0258. 296 41 37 aisitan@pau.edu.tr)

ISBN 978-975-6992-78-4
1st Edition – October 2018

All rights reserved.


~2~
This book is an output of the “Universal Nanotechnology Skills Creation and
Motivation Development / UNINANO” project, number 2016-1-TR01-KA203-034520,
supported by the Turkish National Agency under Erasmus+ Key Action 2 Strategic
Partnership in the field of Higher Education (KA203).

“Funded by the Erasmus+ Programme of the European Union. However, the
European Commission and the Turkish National Agency cannot be held responsible
for any use which may be made of the information contained therein.”

~3~
CONTENTS

PREFACE 7
UNINANO PROJECT 8

SECTION 1 INTRODUCTION TO NANOMATERIALS 9


1.1 NANOMATERIAL CHARACTERIZATION 11
1.1.1 Atomic Force Microscopy [AFM] 13
1.1.2 Auger Electron Spectroscopy [AES] 13
1.1.3 Fourier-Transform Infrared Microscopy [FTIR] 13
1.1.4 Helium Ion Microscopy [HIM] 13
1.1.5 Dynamic Secondary Ion Mass Spectrometry [SIMS] 14
1.1.6 X-ray Fluorescence Analysis [XRF, EDX] 14
1.1.7 Grazing-incidence X-ray Fluorescence 14
1.1.8 Electron Backscatter Diffraction [EBSD] 15
1.1.9 Scanning Electron Microscopy [SEM] 15
1.1.10 Scanning Tunneling Microscopy [STM] 15
1.1.11 Static Secondary Ion Mass Spectrometry [S-SIMS] 15
1.1.12 Surface Raman Spectroscopy 16
1.1.13 Transmission Electron Microscopy [TEM] 16
1.1.14 X-ray Diffraction and Reflection [XRD] 16
1.1.15 X-ray Photoelectron Spectroscopy [XPS] 16
1.1.16 X-ray Reflectometry [XRR] 17

SECTION 2 MICROSCOPY 19
2.1 SEM ANALYSIS 21
2.1.1 Instrumentation 23
2.1.2 Application Cases 26
2.2 SCANNING PROBE MICROSCOPES (SPM) 34
2.2.1 Instrumentation 35
2.3 HELIUM ION MICROSCOPY (HIM) 49
2.3.1 Principles 50
2.3.2 Instrumentation 52
2.3.3 Application Nanostructured Ge Layers 53

SECTION 3 SPECTROSCOPY AND SPECTROMETRY 59


3.1 X-RAY DIFFRACTION (XRD) 61
3.1.1 Applications 63
3.2 X-RAY FLUORESCENCE ANALYSIS 67
3.2.2 X-Ray Fluorescence Analysis 72
3.2.3 Total Reflection XRF, Grazing Incidence XRF 75
3.2.4 Instrumentation 77
3.2.5 Application Cases 79
3.3 X-RAY PHOTOELECTRON SPECTROSCOPY (XPS) 88
~4~
3.3.1 Principle of The Technique and Instrumentation 89
3.3.2 Application Cases 94
3.4 RAMAN SPECTROSCOPY 100
3.4.1. Classical Wave Interpretation 100
3.4.2 Quantum Particle Interpretation 103
3.4.3 Instrumentation 105
3.4.5 Example of Raman Spectra Analysis 106
3.4.6 Case study 107
3.5 SECONDARY ION MASS SPECTROMETRY (SIMS) 112
3.5.1 Basic Principles 115
3.5.2 SIMS Analytical Modes 117
3.5.3 Depth profiling 119
3.5.4 Applications 122

SECTION 4 APPLICATIONS 129


4.1 INTRODUCTION to SURFACE PLASMONS AND THEIR APPLICATIONS 131
4.1.1 Surface Plasmon Polaritons 132
4.1.2 Surface Plasmons Excitation 137
4.1.3 Surface Plasmons for Chemical and Bio Sensing 138
4.1.4 Plasmonic Photodetectors 140
4.2 ELECTRONICS APPLICATIONS 144
4.2.1 Nanoelectronics 145
4.2.3 Nanoelectronics in Communication Systems 153
4.2.4 Nanoelectronics in Medicine 153
4.2.5 Research & Development Areas in Nanoelectronics 154
4.3 APPLICATIONS of NANOBIOTECHNOLOGY 157
4.3.1 Use of Nanomaterials in Diagnostic Applications 158
4.3.3 Use of Nanomaterials in Implant and Prosthesis Applications 169
4.4 TEXTILE APPLICATIONS 177
4.4.1 Smart Textiles Produced with Nanotechnology 178
4.4.2 Nano Textile Production Methods 182
4.4.3 Use of Nanotechnology during Fiber and Yarn Production 183
4.4.4 Nano Finishing Processes 184
4.5 ENVIRONMENTAL APPLICATIONS 194
4.5.1 Use of Nanoparticles 194
4.5.2 Sustainable Products 195
4.5.3 Sensor Applications 201
4.6 MILITARY APPLICATIONS 206
4.6.1 Soldier Nanotechnologies 206
4.6.2 Bio-Chemical Sensing, Health Monitoring 212
4.6.3 Tracking, Tracing and Remote Identification 213
4.7 PACKAGING APPLICATIONS 218
4.7.1 Packaging 218
4.7.2 Nanotechnology and Packaging 219
~5~
4.7.3 Nanotechnology Packaging Design Strategy 221
4.7.4 Packages of the future 223
4.7.5 Application of Nano-Materials in Packaging 228

SECTION 5 INTERNATIONAL NORMS and REGULATIONS 233


5.1 INTERNATIONAL NORMS AND REGULATIONS 235
5.1.2 What are the regulations for nanotechnologies? 236
5.1.3 ISO/TC 229 on Nanotechnologies 237
5.1.4 ISO/TC 229 on Nanotechnologies Objectives 238
5.1.6 Nanotechnology Norms Needs Issues 242

SECTION 6 NANOTECHNOLOGY and INNOVATION 249


6.1 INNOVATION in NANOTECHNOLOGY 251
QUESTIONS 261

~6~
PREFACE

Nanotechnology, the fundamental technology of the 21st-century industrial
revolution, is the science of controlling matter at the atomic and molecular
levels. In its simplest terms, and on the basis of scientific findings and
experience, nanotechnology's contribution to the environment, energy, materials
strength and responsible consumption makes its share in preserving the world's
livability very clear.
Today, this high value-added technology is vital for highly competitive business
lines such as military, medical, automotive and textile applications. In recent
years, nanotechnological research has brought significant progress, especially in
materials science, and many new products and processes have entered our lives.
In general, nanotechnology education is conducted at the post-graduate level, and
the number of nanotechnology programs within master's and doctoral curricula is
constantly increasing in many universities. However, nanotechnology education is
still very limited at the undergraduate level in many natural sciences and
engineering programmes.
These books, aimed at natural sciences and engineering undergraduate students as
well as young researchers, provide a complete review of all relevant aspects from
the nanotechnology and applications perspectives. They offer practice-based
knowledge at the undergraduate level, create awareness of this subject area, and
support visual and e-learning in degree schemes related to nanotechnology
materials.
Book 1 is devoted to a theoretical description of the basic principles and
fundamental properties of nanotechnology.
Book 2 is devoted to presenting the characterisation techniques, microscopy and
spectroscopy, and the application of nanotechnology to environmental, health
and safety issues.
We would like to thank all the researchers and authors who contributed to these
two volumes. We are deeply grateful to the Erasmus+ Programme for funding the
“Universal Nanotechnology Skills Creation and Motivation Development” KA203
Strategic Partnerships Project (2016-1-TR01-KA203-034520) and the publication of
these books.
Prof. Dr. Mustafa Ersöz, Editor

~7~
UNINANO PROJECT
You are reading Nanotechnology 2, one of the outputs of the “Universal
Nanotechnology Skills Creation and Motivation Development / UNINANO” Project,
number 2016-1-TR01-KA203-034520, supported by the Turkish National Agency under
Erasmus+ Key Action 2 Strategic Partnership in the field of Higher Education
(KA203).
The UNINANO Project partners are Pamukkale University, as coordinator and
beneficiary institution, Selçuk University and Afyon Kocatepe University from
Turkey, the Bruno Kessler Foundation and Cosvitec from Italy, Cluj-Napoca
University from Romania, and CCS from Greece.
The main objective of the UNINANO Project has been to increase awareness of
nanotechnology, which is one of Turkey's 2023 strategic goals. In line with this
objective, written and visual educational materials have been prepared, with the
aim of contributing to the advancement of nanotechnology knowledge among the
students and instructors who use them. For this purpose, two course books have
been prepared in both printed and electronic versions, in both Turkish and
English:
 Nanotechnology 1: Fundamentals of Nanotechnology
 Nanotechnology 2: Characterization and Applications
The electronic versions of the books are available on the project website,
www.pau.edu.tr/uninano. In addition, the answers to the questions at the end of
the book, also located on the web page, can be accessed from the e-learning
materials.
With the happiness of having completed our project,
We would like to thank the Presidency of Turkey's National Agency for supporting
our project.
We would like to thank the Rector of Pamukkale University and Project Manager
Prof. Dr. Hüseyin BAĞ for his valuable support during these two years.
We would like to thank Prof. Dr. Mustafa Ersöz, Dr. Mine Sulak, and Dr. Massimo
Bersani, who served as scientific editors of the book, and Meltem Balaban, who
worked on the organization of the book chapters and on chapter authoring. We
would also like to thank Dr. Zeha Yakar, Dr. Cumhur Gökhan Ünlü, and Dr. Volkan
Onar, the other project team members from Pamukkale University. In addition, we
would like to thank Dr. Yasemin Öztekin for her valuable support with
typesetting.
For their valuable effort and authoring, we would like to thank all the authors:
Dr. Arzu Yakar from Afyon Kocatepe University; Dr. Gratiela Dana Boca from
Cluj-Napoca University; Dr. Mustafa Ersöz, Dr. Gülşin Arslan, Dr. Serpil Edebali,
and Dr. İmren Hatay Patır from Selçuk University; Dr. Massimo Bersani, Dr. Mario
Barozzi, Dr. Erica Iacob, Dr. Giancarlo Pepponi, Dr. Lia Emanuela Vanzetti, Dr.
Rocco Carcione, and Dr. Giovanni Paternoster from the Bruno Kessler Foundation.
We would like to thank Ali Gökçe, who prepared the UNINANO logo; Aydın Uçar, who
prepared the cover design of the book; Can Kaya, who helped with the book's
typography; and the students of the Pamukkale University Technology Faculty who
took part in the project activities and meetings.
Dr. Arzum Işıtan
Project Coordinator
www.pau.edu.tr/uninano
https://www.facebook.com/UninanoPAU/
https://instagram.com/uninano_pau
https://twitter.com/Uninano_PAU
~8~
SECTION 1
INTRODUCTION
TO
NANOMATERIALS
CHARACTERIZATION

~9~
~ 10 ~
1.1 NANOMATERIAL CHARACTERIZATION

Massimo BERSANI
bersani@fbk.eu
FONDAZIONE BRUNO KESSLER
If you don’t “see” you cannot do it

INTRODUCTION
Characterization is a key point in nanomaterial development, from basic research
to production. If nanomaterials and the related technologies are to achieve an
effective level of accuracy and efficiency, a dedicated application of analytical
techniques (nanometrology) cannot be overlooked. Nanometrology has to allow a
complete material characterization covering chemical and physical aspects,
electrical and structural properties, thermal and tribological characteristics,
etc., with a spatial resolution in three dimensions of around a nanometer or
below. To understand nanomaterial characteristics and behaviour it is mandatory
to develop and upgrade the analytical instrumentation and the related
methodology. Basic research, the understanding of fundamental mechanisms,
application development and the monitoring of industrial production all require a
powerful and complete analytical approach. The tremendous development of
microelectronic technology demonstrates how indispensable these factors are: the
development of microelectronics has been characterized from the very beginning by
its metrological support. Without the impact of dedicated analytical techniques
and specific methodologies, the microelectronic era would certainly not have
achieved the advances and the impact on our lives that we know today. In the
nanotechnology field, the impact of analytical techniques will be, if possible,
even more important.
Figure 1.1.1 reports the general scheme underlying analytical techniques. A
well-defined probe is used to induce a local input in the sample. The sample's
response is the emission of various signals from a specific region. The analysis
process consists of recording those signals with a suitable analyzer.
An important issue concerns the modifications that the primary beam induces on
the sample. In general, the input energy associated with the primary beam induces
effects such as chemical reactions, diffusion, recrystallization and
morphological deformation.
~ 11 ~
Serious limitations can also be associated with the environment required by the
analysis. Many techniques require an Ultra-High Vacuum environment, which is not
suitable for all kinds of samples, for example bio and polymeric nanomaterials.
Other limitations associated with the sample itself often have to be considered,
for example insulating behaviour, overall morphology and handling constraints. In
some cases a large sample is required.

Figure 1.1.1. Basic scheme of the characterization process

In this chapter the analytical techniques used to characterize nanomaterials are
introduced. The techniques are divided into three main areas: microscopy,
spectroscopy and spectrometry.
 Microscopy gives information on sample morphology and allows the
nanostructure, shape and size to be determined.
 Spectroscopy provides compositional analysis and chemical information.
 Quantitative and depth-profile characterization is performed with
spectrometry techniques.
In the remainder of this introduction an overview of many analytical techniques
for nanomaterials is given, in order to provide an overall picture.

1.1.1 Atomic Force Microscopy [AFM]


~ 12 ~
 Characteristic parameter: Van der Waals force
 Type of information: Surface topography and roughness; distribution of
magnetic and electric domains; elasticity and viscosity of the surface.
 Lateral resolution: 2-5 nm (vertical resolution ~Å)
 Environment: Air/vacuum/controlled atmosphere
 User skill level: High
 Time request for a measurement: from 15 minutes to hours
 Cost equipment: Medium

1.1.2 Auger Electron Spectroscopy [AES]


 Characteristic parameter: Electron energy spectrum
 Type of information: Elemental composition; map analysis; depth profile
 Lateral resolution: 30 nm
 Sensitivity: 0.1 at%
 Environment: UHV (Ultra-High Vacuum)
 User skill level: High
 Time request for a measurement: 3 hours
 Cost equipment: Medium/high

1.1.3 Fourier-Transform Infrared Microscopy [FTIR]


 Characteristic parameter: molecular vibration
 Type of information: Elemental and molecular distribution
 Lateral resolution: 5 microns
 Environment: air
 User skill level: high
 Time request for a measurement: 30 minutes
 Cost equipment: medium

1.1.4 Helium Ion Microscopy [HIM]


 Characteristic parameter: emitted electron
 Type of information: morphology
 Lateral resolution: 0.3 nm
 Environment: vacuum
 User skill level: medium/high
 Time request for a measurement: 10 minutes
 Cost equipment: high
~ 13 ~
1.1.5 Dynamic Secondary Ion Mass Spectrometry [SIMS]
 Characteristic parameter: sputtered ions
 Type of information: elemental composition; mass spectra; depth profile,
line and map analysis
 Lateral resolution: 0.1-10 microns
 Sensitivity: ppb-ppm
 Environment: UHV(Ultra-High Vacuum)
 User skill level: High
 Time request for a measurement: from 5 minutes to several hours
 Cost equipment: High

1.1.6 X-ray Fluorescence Analysis [XRF, EDX]


 Characteristic parameter: Secondary X-ray fluorescence
 Type of information: elemental composition
 Lateral resolution: 100 nm
 Sensitivity: 0.1 at%
 Environment: air/vacuum
 User skill level: medium
 Time request for a measurement: from few minutes to 1 hour
 Cost equipment: medium/low

1.1.7 Grazing-incidence X-ray Fluorescence


 Characteristic parameter: characteristic emitted X-ray
 Type of information: elemental composition; density; layer thickness
 Lateral resolution: 1 cm
 Sensitivity: 10E12 at/cm2
 Environment: air
 User skill level: high
 Time request for a measurement: 2 hours
 Cost equipment: Medium/high

1.1.8 Electron Backscatter Diffraction [EBSD]


 Characteristic parameter: Electron diffraction and absorption
 Type of information: Crystalline structure, orientation, strain, grain
morphology and deformation
 Lateral resolution: 10-100nm
~ 14 ~
 Environment: High vacuum
 User skill level: High
 Time request for a measurement: hours
 Cost equipment: High

1.1.9 Scanning Electron Microscopy [SEM]


 Characteristic parameter: distribution and energy of scattered and emitted
electrons
 Type of information: Topography
 Lateral resolution: 0.5 nm
 Environment: Vacuum
 User skill level: Medium
 Time request for a measurement: 5-10 minutes
 Cost equipment: medium/high

1.1.10 Scanning Tunneling Microscopy [STM]


 Characteristic parameter: Spatial variation of electron tunneling current
 Type of information: map of surface electronic structure
 Lateral resolution: 0.1 nm
 Environment: UHV (Ultra-High Vacuum)
 User skill level: High
 Time request for a measurement: 1 hour
 Cost equipment: high

1.1.11 Static Secondary Ion Mass Spectrometry [S-SIMS]


 Characteristic parameter: sputtered atomic and molecular ions
 Type of information: Mass spectra; chemical image
 Lateral resolution: 0.1 microns
 Sensitivity: 10E9 at/cm2
 Environment: UHV (Ultra-High Vacuum)
 User skill level: high
 Time request for a measurement: 10 minutes
 Cost equipment: high

1.1.12 Surface Raman Spectroscopy


 Characteristic parameter: Optical emission
 Type of information: molecular vibrations

~ 15 ~
 Lateral resolution: 10 microns
 Environment: air
 User skill level: high
 Time request for a measurement: 10 minutes
 Cost equipment: medium

1.1.13 Transmission Electron Microscopy [TEM]


 Characteristic parameter: electron scattering
 Type of information: morphology; crystal structure; defect distribution
 Lateral resolution: 0.1 nm
 Environment: UHV (Ultra-High Vacuum)
 User skill level: high
 Time request for a measurement: 1 hour
 Cost equipment: High

1.1.14 X-ray Diffraction and Reflection [XRD]


 Characteristic parameter: diffracted x-ray
 Type of information: surface crystal structure
 Lateral resolution: 0.1 mm
 Environment: air
 User skill level: medium
 Time request for a measurement: 5-20 minutes
 Cost equipment: medium

1.1.15 X-ray Photoelectron Spectroscopy [XPS]


 Characteristic parameter: photoelectron energy
 Type of information: elemental composition; chemical bonding; nanolayer
thickness
 Lateral resolution: 3 microns
 Environment: UHV (Ultra-High Vacuum)
 User skill level: high
 Time request for a measurement: hours
 Cost equipment: high

1.1.16 X-ray Reflectometry [XRR]


 Characteristic parameter: X-ray intensity
 Type of information: layer thickness, density, interface roughness
 Lateral resolution: 100nm

~ 16 ~
 Environment: air
 User skill level: medium
 Time request for a measurement: up to several hours
 Cost equipment: medium

~ 17 ~
~ 18 ~
SECTION 2
MICROSCOPY

~ 19 ~
2.1 SEM ANALYSIS
Mario BAROZZI
barozzi@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION

~ 20 ~
The Scanning Electron Microscope is a widely used non-destructive surface
analysis technique, which uses electrons both as a beam probe for surface
investigation and as a signal for the generation of the microscopic surface image.
The impinging electrons entering the sample are either elastically or inelastically
scattered in a pear-shaped interaction volume. A fraction of electrons can escape
from the surface and provide the topographic and compositional contrast
information that will eventually compose the SEM image. The surface signal
response is a fluence of electrons emitted in many directions from the target and
collected by detectors positioned in various configurations. The detector is not a
camera, it simply intercepts the electrons while the amplified electric signal is
later on converted into a brightness level. The SEM image is the composition of
a two-dimensional array of data, the sum of all consecutive brightness level
spots corresponding to the intensity of the electrons expelled point by point from
the specimen, collected by synchronizing the image pixel coordinates with the
PE raster positions in the ROI (region of interest).
What determines the magnification power in SEM is simply the ratio between
the fixed size of the screen where the surface image is displayed and the size of
the PE rastering area on the sample. Therefore the smaller the raster and beam
diameter, the bigger the magnification. The magnification power provided by
SEM is much higher than that obtainable with optical microscopes, as electrons,
with their wave-particle duality, can widely surpass the maximum resolution
imposed by the diffraction limit of visible wavelengths. Nowadays SEM can span
five orders of magnitude in magnification, from about 10x to 1,000,000x, with 1 nm
resolution, thanks to well-established electron gun sources, electron optics and
detectors.
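
As a back-of-the-envelope illustration of this ratio, the nominal magnification
can be computed as the display width divided by the scanned field width. The
short sketch below only evaluates this definition for assumed values (a 100 mm
wide display and a few hypothetical field-of-view widths); it is not tied to any
particular instrument.

```python
# Illustrative sketch: nominal SEM magnification = display width / raster (field-of-view) width.
# Assumed values: a 100 mm wide display; the field widths are hypothetical examples.

DISPLAY_WIDTH_MM = 100.0  # fixed width of the displayed image (assumption)

def magnification(field_width_um: float) -> float:
    """Nominal magnification for a given scanned field width in micrometres."""
    return DISPLAY_WIDTH_MM / (field_width_um / 1000.0)

for field_um in (10_000, 1_000, 100, 10, 1, 0.1):  # from 10 mm down to 100 nm
    print(f"field width {field_um:>8} um  ->  magnification ~ {magnification(field_um):,.0f}x")
```

Shrinking the scanned field from 10 mm to 100 nm spans the five decades of
magnification (about 10x to 1,000,000x) quoted above.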
The electron-matter interaction occurring in standard SEM analyses is manifold
and the various phenomena that occur are useful for additional analytical
techniques. The elastic scattering alters only the electron direction component,
whereas the inelastic scattering can involve many different processes dissipating
the electron kinetic energy in the target. The interaction volume occupied by all
possible electron trajectories can be modelled through the Monte Carlo
simulation. The signals generated can be SE, BSE, Auger electrons,
characteristic X-rays, Bremsstrahlung and fluorescent X-rays, cathodoluminescence,
and slight ESD. The characteristic signals originate in different
spots within the interaction volume, have different escape depths and therefore

~ 21 ~
provide different resolutions for the respective analyses. Besides the many
correlated techniques, the topic of this paragraph will focus on SEM.

Acronyms
BSE Backscattered Electron
CCF Cross Correlation Function
CD Critical Dimension
CFE Cold Type Field Emission Electron Gun
CL Cathodoluminescence
EBIC Electron beam induced current
EBL Electron Beam Lithography
EBSD Electron Backscatter Diffraction
EDX or EDS Energy dispersive (X-ray) spectroscopy
EME electron mirroring effect
ESD electron stimulated desorption
FEG Field Emission Gun
FFT fast Fourier transform
MIP molecularly imprinted nanoparticles
PE Primary Electron
ROI Region of Interest
SE Secondary Electron
SEM Scanning Electron Microscope
SNR Signal to Noise Ratio
TFE Thermal Type (assisted) Field Emission Electron Gun
UHV Ultra High Vacuum
WD Working Distance
WDS Wavelength Dispersive Spectroscopy

2.1.1 Instrumentation
It is worth noting that, in general, electronic images of adequate specimens can
be obtained without complex electron optics thanks to the special electron-
matter interactions. For example, low energy electron holography represents an
outstanding recent demonstration that nanometer resolution microscopy can be
achieved by exploiting a simple lens technique that uses a coherent low-energy
electron source [1].
~ 22 ~
A peculiar feature of SEM is the use of a focused PE beam as the probe, rastered
over the ROI. The first implementation of a modern SEM can be found in Zworykin
et al. in 1942 and its operating principles are still valid today. A basic SEM
instrument consists of a PE column with the electrons source and electron optics
operating in UHV condition, a sample stage that can often operate at lower
vacuum levels and one or more detectors for SE and BSE [2].
The simplest sources of PEs are cathodes heated to temperatures high enough
to promote the thermionic emission of electrons. Alternatively, with a cold-type
FEG, electrons tunnel through the work-function barrier and are emitted thanks to
the strong electric field concentrated at a very sharp tip only a few nanometers
in size. The FEG provides the highest brightness and the lowest energy spread,
which translates into smaller chromatic aberrations compared to other electron
sources such as hot tungsten cathodes with thermionic emission or lanthanum
hexaboride (LaB6). This difference is particularly significant at very low
electron energies, which are often useful to reduce charging effects on
insulators. The drawback of the FEG is its sensitivity to contamination from
residual gas. To reduce contamination, UHV conditions are mandatory and recurrent
brief heating cycles ("flashes") are applied to the FEG to desorb gas molecules.
The Schottky FEG represents a compromise solution, in which electron emission is
thermally assisted in order to achieve higher stability, with only a small drop
in performance.
The most common electron gun is an electrostatic lens composed of a cathode
maintained at a negative potential (the source voltage, which defines the PE
energy), a Wehnelt cylinder maintained at a slightly more negative potential, and
an anode plate at ground. The electrons exiting the electron gun focus in a first
crossover and diverge immediately afterwards; therefore further electromagnetic
lenses and mechanical apertures are inserted in the column to finely direct the
PE beam and control its shape on the specimen surface.
In SEM, it is desirable to have the highest beam collimation with the smallest
spot diameter on the specimen, which provides the best resolution and possibly
the enhanced sharpness level in the SEM image. In order to obtain high depth of
focus and minimal aberrations, electromagnetic lenses typically operate in the
column by increasing the convergence angle of the spiralling trajectories of
electrons. A condenser lens controls the amount of PE current and an objective
lens controls the final focus. Moreover, mechanical apertures can be
interchanged in order to decrease the convergence angle and aberrations at the
cost of the PE current. Scan coils guide the spiralling electron beam through the
~ 23 ~
final objective lens and produce the raster on the specimen surface when the
finally focused PEs exit the bottom of the pole piece.
The working distance, typically ranging from about 3 mm to 25 mm, determines
the separation between the final lens and the specimen. Usually the stage that
holds the sample is motorized with translational, rotational and tilt movements
and in the case of a tilted surface a dynamic focus can automatically provide the
necessary corrections on the PE alignment. The identification of the ROI is not
the only purpose of the stage movement. The specimen orientation and the
electrons take-off angle (related to the detectors) are critical factors that
determine SE intensity and can greatly improve SEM image quality. The SEM
image interpretation greatly depends on the detector location. The positioning of
the detectors can be either in-chamber or in-column, with different outcomes.
Electrons exiting the specimen with energies lower than 50 eV after inelastic
scattering are classified as SEs, whereas BSEs have energies ranging from 50 eV
to nearly the PE energy level. SEs coefficient δ and BSEs coefficient η
represent respectively the ratio of the number of SEs and the number of BSEs to
the number of PEs. SEs can be further classified based on their generation
mechanism.
Type 1 SEs are expelled from the specimen in coincidence with the PE incidence
spot, therefore they can provide high spatial resolution. Type 2 SEs are emitted
after multiple BSEs scatterings within the specimen, at relatively high distance
from the PE incidence spot for higher PE energies, therefore they produce either
a lower resolution signal or a background signal. Type 3 SEs are generated when
BSEs escape the specimen and hit the inner walls of the SEM chamber.

The SEM image contrast depends on both the surface morphology and the target
materials. SEs emission varies with surface geometry and escape region. The
term “edge effect” indicates that more SEs can escape when the PE beam hits
steep surfaces, thus the edges appear brighter and provide the typical
topographical appearance of SEM images. A noticeable consequence is that
tilting the target can enhance the contrast. Our brain can easily read a surface
topography when it is illuminated from above in visible light. Accordingly, in
order to simulate a condition of natural illumination, the in-chamber SE detector
is typically positioned at the top of the SEM image. This trick will allow the

~ 24 ~
electron detector to mimic a light source illuminating the microscopic specimen
from above.
In-lens detectors are also used. In this configuration, the detector is located
inside the final lens and collects mainly the electrons emitted normal to the
surface. Therefore it is particularly useful to collect electrons emitted also from
deep cavities. In-lens detection can also minimize the artefacts induced by
charging effects for non-conductive specimens. In-lens detectors equipped with
energy filters can discriminate SEs and BSEs.
BSE detectors are positioned in the column axis and are more sensitive to the
atomic number (Z) contrast, hence they can provide a qualitative discrimination
among different elements. Electron backscattering against heavier elements is
more efficient than for lighter elements, therefore the SEM image will appear
brighter in correspondence of the higher Z numbers. The backscattered electrons
have higher energy after elastic scattering and their escape depth can be a
hundred times greater than that of SE.
Common types of detectors are designed based on the Everhart-Thornley
configuration. Basically, this system consists of a scintillator plus a
photomultiplier, with a Faraday cage operating as an energy filter that enables to
discriminate BSEs from SEs [2]. Alternatively, in order to generate a signal,
solid state detectors based on p-n junctions exploit the electron-holes pairs
production in semiconductors when hit by electrons with suitable energy. This
small electronic signal requires further current amplification.
The specimen current that flows through the bulk of the target hit by the PE can
represent a detection signal as well.
Any detector measurement can be added to the acquisitions of other various
analytical techniques to form a single combined map. For instance, topography
and compositional images can merge, SE+BSE, EBSD+SE, EDX+EBSD or the
specimen current signal and so on [3].
The most relevant factors that determine the performance level in a SEM are the
beam diameter, the image resolution and sharpness. Besides the beam spot size
and shape, the ultimate spatial resolution of an electron microscope depends on
the interaction volume of the electron probe with the specimen. The PE
maximum penetration depth can range from a few nanometres, when hitting the
bulk of elements with higher atomic numbers at energies in the order of 1

~ 25 ~
keV, to a few micrometres for lighter elements at PE energies of 20 keV.
Typically, type 2 SEs can degrade image resolution, as they are generated far
from the incident beam.
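
The orders of magnitude quoted above can be estimated with the widely used
Kanaya-Okayama range formula, R[um] = 0.0276*A*E^1.67/(Z^0.89*rho), where A is
the atomic weight in g/mol, E the beam energy in keV, Z the atomic number and rho
the density in g/cm^3. The sketch below is only an indicative estimate for two
example targets (carbon and gold) under assumed bulk densities; it is not a
substitute for a full Monte Carlo simulation of the interaction volume.

```python
# Illustrative sketch: Kanaya-Okayama electron range (order-of-magnitude estimate only).
# R [um] = 0.0276 * A * E^1.67 / (Z^0.89 * rho), with E in keV, A in g/mol, rho in g/cm^3.

def ko_range_um(E_keV: float, Z: int, A: float, rho: float) -> float:
    """Approximate maximum PE penetration depth (Kanaya-Okayama range), in micrometres."""
    return 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho)

# Example targets with assumed bulk densities: carbon (light) and gold (heavy).
targets = {"C": (6, 12.01, 2.26), "Au": (79, 196.97, 19.3)}

for E in (1.0, 5.0, 20.0):  # beam energies in keV
    for name, (Z, A, rho) in targets.items():
        print(f"E = {E:4.1f} keV, {name:2}: R ~ {ko_range_um(E, Z, A, rho) * 1000:8.1f} nm")
```

With these assumptions the range goes from a few nanometres for gold at 1 keV to
a few micrometres for carbon at 20 keV, consistent with the values given above.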
The beam shape can be altered by astigmatism or chromatic (i.e. energy)
aberrations, whereas the interaction volume affects the SNR. Factors contributing
to the SNR include the PE beam fluence, the statistical nature of the electron
collisions with the sample and with the chamber walls, the take-off angle, the
efficiency of the detector, and the electronic signal amplification and processing.
Image pixel density is a fixed factor but it plays a role too. The maximum useful
magnification can be reached only if the PE beam spot size is smaller than the
pixel size, otherwise the image becomes blurred as the signal acquired within
each pixel will contain partial contributions from adjacent spots.
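
The pixel-size condition just described can be checked with a simple calculation:
the pixel footprint on the sample is the scanned field width divided by the
number of pixels per line, and blurring is expected once the probe diameter
exceeds it. The numbers below (a 1024-pixel line, a 100 mm display and two
hypothetical probe diameters) are assumptions used only to illustrate the
criterion.

```python
# Illustrative sketch: the pixel-size criterion for the maximum useful magnification.
# Assumptions: 1024 pixels per image line, 100 mm display width, example probe diameters.

PIXELS_PER_LINE = 1024
DISPLAY_WIDTH_MM = 100.0

def pixel_size_nm(field_width_um: float) -> float:
    """Size of one image pixel projected onto the sample, in nanometres."""
    return field_width_um * 1000.0 / PIXELS_PER_LINE

for field_um in (100.0, 10.0, 1.0):
    px = pixel_size_nm(field_um)
    mag = DISPLAY_WIDTH_MM / (field_um / 1000.0)
    for probe_nm in (1.0, 10.0):
        status = "sharp" if probe_nm <= px else "blurred (probe larger than pixel)"
        print(f"mag ~ {mag:>9,.0f}x, pixel {px:6.1f} nm, probe {probe_nm:4.1f} nm -> {status}")
```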
A frequent artefact that reduces sharpness is target contamination, which occurs
when hydrocarbon molecules on the specimen surface or residual gases in the
analysis chamber interact with the ROI hit by the PEs.

2.1.2 Application Cases


SEM is a widely proven and versatile technique that is still today an
indispensable metrology tool. Its overall simplicity in sample preparation is a
remarkable advantage over other surface analysis techniques.
In CD metrology, ultimate spatial resolution is pivotal and is in the order of
1 nm or less. Therefore, in most cases when SEM is applied to nano-materials,
maintaining pristine surface conditions of the target at all times is essential. The
resolution is mainly determined by the PE probe size and energy. In order to
obtain the highest performance, cold FEG, with high brightness and less than
5nm source size, provides the high spatial and temporal coherency of the
electron beam required to obtain the adequate probe diameters. The pear shaped
interaction volume changes with the energy and the beam energy affects various
parameters like the sampling depth of the backscattered electrons, the SEs
coefficient, the charging effects. Other factors besides resolution become
relevant for SEM imaging, as in the application cases described below.
The right balance between the signal level necessary to obtain an adequate SNR
and the electron probe diameter must be found (Figures 2.1.1 and 2.1.2). As a
rule of thumb, a smaller spot size provides higher resolution. Mechanical
apertures are useful to reduce the spherical aberrations but they also cause the
~ 26 ~
PE current to decrease. Also, diffraction effects can occur for smaller aperture
diameters due to the wave nature of the electrons.
Periodic nano-ripples produced by gold ion implantation on Ge are reported in
Figure 2.1.1. The size of the structures ranges from the micrometer scale for the
crest length to the nanometer scale for the curled nanowires and the gold-rich
nanoparticles decorating the ripples. This hierarchical structure can play a
relevant role in cellular behaviour on nanostructured biocompatible scaffolds
[4]. In the case of conductive materials, the SEM image has the best SNR when a
high PE energy is used to obtain a nanometer-sized PE beam diameter and when a
low WD is selected to maximize SE collection with the in-lens detector.
Similar conditions are applied to ZnO nanoparticles doped with Au, as reported
in Figure 2.1.2. These nanometer-scale powders are used in gas sensors [5]. In
this SEM image Au nanoparticles decorating the bigger ZnO crystals are
brighter and clearly visible, both due to the high gold BSE coefficient η and the
considerable edge effect occurring on smaller particles. At very high
magnifications, electromagnetic interferences or mechanical vibrations become
more relevant in the final image and artefacts may appear as wavy, irregular
edges. These artefacts do not depend on the electron optics and can be reduced
only by implementing external noise-insulation systems.

Figure 2.1.1 Periodic nano-ripples produced by gold ions implantation on Ge

~ 27 ~
Figure 2.1.2 ZnO nanoparticles doped with Au

In SEM metrology many materials are not conductive. Moreover, plastics and
biological materials are affected by heat induced by the impinging PEs. MIPs are
shown in Figure 2.1.3 as an example. These nanoparticles can enhance the

~ 28 ~
surface plasmon resonance phenomenon, aiming at hormone detection [6]. In this
case, low PE energies are preferable in order to preserve the particle shapes. In
general, a satisfactory balance in the secondary electron yield can be reached,
without the occurrence of local charging effects, by using low energies for the
PEs impinging on insulating particles. The nanoparticles should be deposited on a
conductive substrate at ground potential. For instance, a simple chip of silicon
wafer provides both good electron conductivity and a flat substrate with an even
morphology.

Figure 2.1.3 MIPs nanoparticles dispersed on silicon substrate.


Specimen charging in electrically insulating materials can affect both SEM
accuracy and reproducibility. At first, distortions and anomalous contrast can
appear in the SEM image, but in extreme cases EME can occur [7]: the charge
injected by the electron fluence into the target builds up to such a high level
that the equipotential surface produced around the charged volume elastically
reflects the PE beam back. As a result the PE rastering will produce an image of
the inner walls of the SEM chamber rather than of the ROI.
Sputter coating of ultrathin metal layers, with thicknesses in the order of 1-5
nm, can suppress surface charging. Moreover, high-Z conductive coatings, like Pd,
Au or W, are useful to enhance the SE yield on low atomic number targets. When
trying to achieve CD resolution, however, even very thin films can alter the
appearance of the surface morphology. Therefore, alternative methods like the low
PE energy "gentle beam" are necessary to obtain the charge balance without
coatings. In this case, the PE beam can be decelerated by applying a retarding
electric field to the target, so that it impinges on the specimen with energies
as low as 100 eV. In more general cases, and for specific insulating materials
and PE fluences, adequate PE energies can be selected to establish an equilibrium
between the number of electrons injected into the interaction volume and the SEs
plus BSEs escaping the surface. Under these equilibrium conditions the total
emitted-electron coefficient (the sum of the SE and BSE coefficients) equals 1.
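
This balance can be pictured with the total emitted-electron coefficient
sigma(E), the sum of the SE and BSE coefficients: for an insulator it rises above
1 at low energy and falls back through 1 at an upper crossover energy, and
imaging near that energy keeps the net injected charge close to zero. The sketch
below uses a purely hypothetical, single-peaked sigma(E) curve (not data for any
real material) only to show how such a crossover energy could be located
numerically.

```python
# Illustrative sketch: locating the upper crossover energy E2 where the total emitted-electron
# coefficient sigma(E) = delta(E) + eta(E) returns to 1 (net charge balance).
# The sigma(E) curve below is HYPOTHETICAL and parametrized for illustration only.
import math

SIGMA_MAX = 2.0   # assumed peak total yield
E_MAX_KEV = 0.4   # assumed energy of the yield maximum, keV

def sigma_total(E_keV: float) -> float:
    """Hypothetical single-peaked total yield; equals SIGMA_MAX at E_MAX_KEV."""
    x = E_keV / E_MAX_KEV
    return SIGMA_MAX * x * math.exp(1.0 - x)

def find_E2(lo: float = E_MAX_KEV, hi: float = 20.0, tol: float = 1e-6) -> float:
    """Bisection for the upper crossover where sigma_total falls back to 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sigma_total(mid) > 1.0:
            lo = mid      # still above unity: crossover lies at a higher energy
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(f"Hypothetical crossover energy E2 ~ {find_E2():.2f} keV")
```

Operating near this crossover, or reaching it via the gentle-beam deceleration
described above, is what keeps the insulating surface close to charge neutrality.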

~ 29 ~
SiO2 is an example of a common insulating material, as seen in Figure 2.1.4. SEM
imaging of an inverse opal scaffold deposited on a silica substrate is not
straightforward. The pristine polystyrene nano-spheres used to deposit the
multilayer films dissolve after an annealing process and only the insulating opal
framework remains. Local and inhomogeneous charging effects can deflect the PE
beam and distort the resulting SEM image.

Figure 2.1.4 Image of opal framework of SiO2


In the case reported in Figure 2.1.4, conductive coatings are not deposited [8].
However, a low PE fluence and a rapid scan with the integration of many frames
are necessary to avoid excessive surface charging. When the PE beam hits the
inverse opal, some charge dissipation is promoted by the particular structure of
the scaffold. The inverse opal configuration favours SE emission, therefore a
charge balance condition between impinging and escaping electrons can be reached.
Otherwise, EME is rapidly induced if the PEs hit the bulk of the silica
substrate.
High-energy PEs are less sensitive to deflections induced by local charging.
Moreover, a reduced WD improves SE collection by the in-lens detector. The
in-lens BSE detector can collect the electrons emitted from deep cavities; in
this way even the third

~ 30 ~
level of pores is discernible in depth through the windows visible in the opal
scaffold.
In the past, SEM image resolution was defined as the smallest width of the
measurable particles or as the spacing between them. More recently,
diffractograms were proposed as a way to determine resolution: they basically
consist of two-dimensional representations of the SEM image spatial frequencies,
obtained using the fast Fourier transform (FFT) or the cross-correlation function
(CCF).
High resolution at high PE energies is mainly determined by type 1 SEs, when
operating at high magnifications, since type 2 SEs emitted far from the PE
incidence spot contribute only random noise. High resolution at low PE
energies, i.e. at less than 5 keV or at about 1 keV, sees a much higher
contribution of type 2 SEs since the interaction volume is reduced.
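
A diffractogram of the kind mentioned above is straightforward to compute: the
two-dimensional FFT of the image is taken and its log power spectrum is
inspected; the spatial frequency at which the spectral power sinks into the noise
floor gives an estimate of the finest detail actually transferred to the image.
The sketch below is a generic illustration on a synthetic, smoothed random image;
no real SEM data or vendor software is implied.

```python
# Illustrative sketch: a diffractogram (2D power spectrum) of an image via FFT.
# A synthetic smoothed random image stands in for an SEM micrograph.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(512, 512))

# Crude low-pass blur to mimic a finite probe size (repeated nearest-neighbour averaging).
for _ in range(5):
    img = (img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0

# Diffractogram: centred log power spectrum.
spectrum = np.fft.fftshift(np.fft.fft2(img))
power = np.log10(np.abs(spectrum) ** 2 + 1e-12)

# Radially averaged profile: the radius where it flattens into noise marks the resolution limit.
cy, cx = np.array(power.shape) // 2
y, x = np.indices(power.shape)
r = np.hypot(y - cy, x - cx).astype(int)
counts = np.bincount(r.ravel())
radial = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
print("radially averaged log-power (low -> high spatial frequency):")
print(np.round(radial[:200:20], 2))
```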
When SEM is applied to nanosheets (Figure 2.1.5), lateral resolution becomes
less relevant than the ability to resolve different thicknesses, which appear as
greyscale levels. In general, surface details are obscured if the electron beam
penetration is increased, i.e. for higher PE energies. In this case, the main
goal of SEM imaging is to obtain a considerable enhancement of the slight
contrast provided by each atomic layer. Here graphene oxide sheets deposited on
silicon are shown. In the case of very thin sheets, a relatively low PE energy
provides good contrast on the layers when using the in-chamber detector
positioned at an intermediate take-off angle for SEs. In this condition, each
superimposed carbon monolayer yields a slightly darker area by shielding a number
of the low-energy SEs.

~ 31 ~
Figure 2.1.5 Graphene sheets outstretched on silicon substrate

Apart from biological tissues, metals, geological specimens and many other
kinds of samples which are not treated here, SEM is extensively used in many
phases of semiconductor manufacturing, from production lines to device
inspection, failure analysis or reverse engineering. Various metrology issues in
CD-SEM are still open and need to be addressed when aiming at a standardization
of reference nano-materials, based also on other techniques like AFM. Chemical
microanalysis or microstructural capabilities can easily be added to SEM by
introducing complementary techniques such as EDX (EDS), WDS and EBSD into the
specimen chamber. Some other techniques strictly related to SEM, which are not
discussed here but should be mentioned, are cathodoluminescence (CL), EBIC,
magnetic contrast and EBL. In a process of multi-technique cross-comparison
analysis in the nanomaterials field, the accuracy, reproducibility and
traceability chain must be ensured.

~ 32 ~
References
[1] Low energy electron holographic imaging of individual tobacco mosaic
virions, Jean-Nicolas Longchamp, Tatiana Latychevskaia, Conrad Escher, and
Hans-Werner Fink, Applied Physics Letters 107, 133101 (2015).
[2] Scanning Electron Microscopy and X-Ray Microanalysis, J.I. Goldstein et
al., Plenum Press, New York.
[3] Advanced Scanning Electron Microscopy and X-Ray Microanalysis, Dale E.
Newbury et al., Plenum Press, New York.
[4] Rossana Dell'Anna, Cecilia Masciullo, Erica Iacob, Mario Barozzi, Damiano
Giubertoni, Roman Böttger, Marco Cecchini and Giancarlo Pepponi; Multiscale
structured germanium nanoripples as templates for bioactive surfaces, RSC
Adv., 2017, 7, 9024-9030.
[5] Gaiardo, A., Fabbri, B., Giberti, A., Guidi, V., Bellutti, P., Malagù, C.,
Valt, M., Pepponi, G., Gherardi, S., Zonta, G., Martucci, A., Sturaro, M.,
Landini, N.; ZnO and Au/ZnO thin films: Room temperature chemoresistive
properties for gas sensing applications (2016), Sensors and Actuators B:
Chemical, 237, pp. 1085-1094.
[6] Lucia Cenci, Erika Andreetto, Ambra Vestri, Michele Bovi, Mario Barozzi,
Erica Iacob, Mirko Busato, Annalisa Castagna, Domenico Girelli and
Alessandra Maria Bossi; Surface plasmon resonance based on molecularly
imprinted nanoparticles for the picomolar detection of the iron regulating
hormone Hepcidin-25, Journal of Nanobiotechnology.
[7] Clarke, D.R. & Stuart, P.R. (1970); An anomalous contrast effect in the
Scanning Electron Microscope. J. Phys. E: Sci. Instrum. 3, 705-707.
[8] Glass Micro- and Nanospheres: Physics and Applications. Giancarlo C.
Righini, Ed. Pan Stanford, 2018.

~ 33 ~
2.2 SCANNING PROBE MICROSCOPES (SPM)
Erica IACOB
iacob@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
Back in the early 1980s G. Binnig and H. Rohrer dazzled the world with the
first real-space atomic-scale images of surfaces. Their idea was to apply the
tunnelling effect to a device/system in order to “see” the surfaces with atomic
resolution. This discovery earned its inventors the Nobel Prize in Physics in
1986. Microscopy based on the tunnelling effect is called Scanning Tunnelling
Microscopy (STM). STM is the ancestor of all scanning probe techniques
(SPMs).
SPM is considered one of the most powerful modern research techniques, allowing
surface information such as morphology and other local properties to be captured
in a relatively easy way. SPMs are used in a wide variety of disciplines, including
fundamental surface science, routine surface roughness analysis, and spectacular
three-dimensional imaging from atoms of silicon to micron-sized protrusions on
the surface of a living cell. In some cases, scanning probe microscopes can
measure physical properties such as surface conductivity, static charge
distribution, localized friction, magnetic fields, and elastic moduli. Hence, SPM
applications are very varied.
All SPM techniques are based on two fundamental components: the probe and
the scanner. Probes can be described as needles (tip apex radius 5-10 nm) that
scan the surfaces at a certain distance (0.1-10nm). Based on the various
techniques, they can be made of tungsten, platinum-iridium, gold (STM), silicon
(AFM), Ti or Pt coated silicon (SCM, SKM, SEM), Ni or Co magnetic coated
silicon (MFM). When an SPM probe is placed in close proximity to the surface,
the sensed interaction can be correlated to the tip position and, as the tip
scans the surface, a 3D map is created. The positioning control of the sample
and/or tip depends on the scanner. All SPM scanners are based on piezoelectric
ceramic materials. Piezoelectric materials change their dimensions as a function
of the applied voltage. This allows very precise control of the probe-sample
distance and of the position of the probe over the surface.

~ 34 ~
Acronyms:
AFM Atomic Force Microscope
EFM Electric Force Microscopy
FWHM Full Width at Half Maximum
LFM Lateral Force Microscopy
SPM Scanning Probe Microscopes
STM Scanning Tunnelling Microscopy
SCM Scanning Capacitance Microscopy
RMS Root Mean Square

2.2.1 Instrumentation
Scanning Tunnelling Microscopy (STM)
Scanning Tunnelling Microscopy (STM) provides information on the
topography of a surface by measuring the tunnelling current occurring between
the tip and the sample surface. This technique can measure only conductive
samples, such as metals or semiconductors, but it is very powerful as it can
obtain true atomic resolution on some samples even under ambient conditions.

The instrument is based on a sharp conductive tip that scans the surface from a
distance of only a few angstroms. The main STM techniques are “Constant
Current” or “Constant Height” modes for "topographic" data acquisition. When
a bias voltage is applied between the tip and the sample, a tunnelling current
flows. In Constant Current mode (CCM) a feedback circuit keeps the current
constant, so the vertical movement of the scanner (the feedback signal) reflects
the surface topography. In Constant Height mode (CHM), on the contrary, the
scanner moves the tip horizontally only, so the current between the tip and the
sample surface varies according to the sample relief. With this mode a higher
scan speed can be obtained, as feedback on the tip height is not necessary.
However, CHM can only be applied if the sample surface is very flat, since
surface corrugations higher than 5-10 Å can seriously damage the tip. The
technique can be applied to conductive surfaces, or to thin non-conductive films
and small objects deposited on conductive substrates.
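
The extreme vertical sensitivity that both modes rely on comes from the roughly
exponential distance dependence of the tunnelling current: I is proportional to
exp(-2*kappa*d), with kappa ~ 0.51*sqrt(phi[eV]) per angstrom for a barrier
(work function) phi. The sketch below only evaluates this textbook relation for
an assumed barrier of 4.5 eV, showing that the current changes by nearly an order
of magnitude per angstrom of gap.

```python
# Illustrative sketch: exponential distance dependence of the STM tunnelling current,
# I(d) ~ exp(-2*kappa*d), with kappa = sqrt(2*m*phi)/hbar ~ 0.51*sqrt(phi[eV]) 1/Angstrom.
# Assumed barrier height (work function) phi = 4.5 eV; purely a textbook estimate.
import math

PHI_EV = 4.5
kappa = 0.5123 * math.sqrt(PHI_EV)          # 1/Angstrom
print(f"kappa ~ {kappa:.2f} 1/A, attenuation per Angstrom ~ {math.exp(-2 * kappa):.3f}")

I0 = 1.0  # current at the reference gap, arbitrary units
for d_extra in (0.0, 0.5, 1.0, 2.0):        # additional gap in Angstrom
    print(f"extra gap {d_extra:3.1f} A -> relative current {I0 * math.exp(-2 * kappa * d_extra):.4f}")
```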

~ 35 ~
Atomic Force Microscopy (AFM)
In Atomic Force Microscopy the sample is probed by a silicon tip. This stylus,
with a tip apex often less than 10 nm, is mounted on the free end of a silicon
cantilever that is usually 80 to 300 microns long. The spot of a laser diode is
reflected from the backside of the cantilever (Figure 2.2.1). The tip position is
acquired from the position of the reflected laser light on a photodiode screen.
When scanning, the feedback system minimizes the deflection by adjusting the
vertical position of the sample. The AFM lateral resolution is determined by the
tip apex dimension and by the sensitivity in detecting the laser spot position on
the photodiode.
The main force occurring between tip atoms and sample atoms is an interatomic
force, the van der Waals force. Depending on the sample-tip distance, two
measuring modes are possible: in the contact method, the tip slides very close (a
few angstroms) to the surface, giving rise to a repulsive interatomic force. In
the non-contact method, the cantilever is held tens to hundreds of angstroms from
the sample surface and the interatomic force is attractive [Garcia et al., 2002].
In addition to the van der Waals force, other forces occur. For instance, in the
contact mode the capillary force plays a critical role, since the thin water
layer that is often present in the environment holds the tip attached to the
sample surface. In the absence of external fields the dominant forces are van der
Waals interactions, short-range repulsive interactions and long-range adhesion
forces, but also capillary forces and elastic cantilever forces. In short, the
distance regime (i.e., the tip-sample spacing) determines the type of force that
will be sensed.
The contact mode is preferred when atomic scale images are needed as in this
mode the tip is in close contact with the sample and a better lateral resolution
can be achieved. Since a strong mechanical interaction occurs between the tip
and the sample surface, the contact mode is suitable for hard and relatively flat
surfaces but not appropriate for soft samples such as organic or biological
objects.
In the non-contact mode, the system forces the tip to vibrate (close to the
cantilever resonance frequency) near the sample surface, at a distance of tens to
hundreds of angstroms. Vibrating scanning modes include the non-contact
mode, the intermittent-contact mode, the oscillatory technique, the semi-contact
mode, the tapping mode, etc. They differ in the distance at which the tip is kept
~ 36 ~
from the surface and in the tip oscillation feedback control. The limited contact
with the surface reduces tip wear and surface damage. For this reason, this
approach is suitable for any kind of sample, from soft to hard, over a wide
range of sample topographies.
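
The resonance frequency that these vibrating modes exploit follows the simple
harmonic-oscillator relation f0 = (1/(2*pi))*sqrt(k/m_eff). As a rough
cross-check, the sketch below inverts this relation for cantilever parameters
similar to those quoted later in this chapter (spring constant ~5.5 N/m,
resonance ~120 kHz); the resulting effective mass of a few nanograms is only an
order-of-magnitude illustration.

```python
# Illustrative sketch: harmonic-oscillator relation for an AFM cantilever,
# f0 = (1/(2*pi)) * sqrt(k / m_eff), inverted to estimate the effective mass.
# Assumed parameters (similar to the tip used later in this chapter): k ~ 5.5 N/m, f0 ~ 120 kHz.
import math

k = 5.5        # spring constant, N/m (assumption)
f0 = 120e3     # resonance frequency, Hz (assumption)

m_eff = k / (2.0 * math.pi * f0) ** 2
print(f"effective mass ~ {m_eff:.2e} kg ({m_eff * 1e12:.1f} ng)")

# Forward check: recompute the resonance from the estimated mass.
f_check = math.sqrt(k / m_eff) / (2.0 * math.pi)
print(f"recomputed resonance ~ {f_check / 1e3:.1f} kHz")
```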

Figure 2.2.1. Schematic view of the Atomic Force Microscopy technique.

Artifacts and Resolution: Since AFM is a “contact technique”, many factors


can affect image resolution. The two main ones are tip interaction and scanner
properties.

Figure 2.2.2. Graphic representation of tip-sample scanning resolution.


If we consider the tip influence, it is a fact that the final picture (profile
scan) is a convolution of the tip apex size, the cone angle and the dimensions of
the surface morphology [Eaton et al., 2010]. Figure 2.2.2 shows how the tip cone
angle can affect

~ 37 ~
the final AFM image resolution: A) a tip scans a sphere attached to a surface.
The tip apex dimensions and tip cone angle cause a broadening of the measured
profile, compared to the real sphere dimension. B) and C) show the scan profiles
of two spheres attached to a flat surface: the scans are performed by two tips
with different cone angles. In B) the sharp tip can resolve the two shapes, while
in C) the two spheres are not resolved by the blunt probe.
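
The broadening sketched in Figure 2.2.2 can be quantified for the simplest
geometry: a spherical particle of radius r on a flat substrate, scanned by a
spherical tip apex of radius R, keeps its true height 2r in the image, but its
apparent width at the substrate level grows to about 4*sqrt(R*r) when the cone
angle is ignored. The snippet below only evaluates this geometric relation for
assumed tip and particle radii.

```python
# Illustrative sketch: geometric tip-dilation estimate for a sphere of radius r on a flat
# substrate, scanned by a spherical tip apex of radius R (cone angle ignored).
# Apparent width at the base: w = 2*sqrt((R+r)^2 - (R-r)^2) = 4*sqrt(R*r); height stays 2r.
import math

def apparent_width_nm(R_tip_nm: float, r_particle_nm: float) -> float:
    """Full apparent width of the particle at substrate level, in nm."""
    return 4.0 * math.sqrt(R_tip_nm * r_particle_nm)

r = 5.0                      # particle radius, nm (assumption)
for R in (2.0, 10.0, 30.0):  # tip apex radii, nm, from sharp to blunt (assumptions)
    w = apparent_width_nm(R, r)
    print(f"tip R = {R:4.1f} nm: true width {2 * r:.1f} nm -> apparent width ~ {w:.1f} nm")
```

For the blunter tips the apparent width grows well beyond the true 10 nm
diameter, which is why the height, rather than the lateral size, is usually the
more reliable AFM estimate of particle diameter.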
Moreover, any unpredictable damage to the tip apex (a double/multiple tip, a
fractured tip, particles picked up from the surface causing a dirty/contaminated
tip, a blunt tip) can cause morphological artefacts in the final AFM image
(Figure 2.2.3).

Figure 2.2.3. Artefacts due to a broken tip.


Many other factors can interfere with the scanning operation. They are due to
peculiarities of the piezoelectric scanner and include creep, hysteresis and
scanner drift (which can cause image distortion), and edge overshoot (which can
cause an overestimate of step heights).
It is also worth mentioning other causes of artefacts unrelated to tip and
scanner. We can list, for instance, background bow/tilt, due to the intrinsic
curved motion of the probe during scanning operations (frequent), or the
intrinsic tilt due to sample mounting. Both these artefacts can be corrected with
a 1st- or 2nd-order plane subtraction.
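
A first-order (and, by extension, second-order) plane subtraction of the kind
mentioned above is a simple least-squares fit of a polynomial background to the
height map, which is then removed. The sketch below shows a minimal first-order
version with numpy on a synthetic tilted surface; no instrument software is
implied.

```python
# Illustrative sketch: 1st-order plane subtraction of an AFM height map (least-squares fit
# of z = a + b*x + c*y, then removal). A synthetic tilted surface with a bump stands in for data.
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n].astype(float)
bump = 5.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 8.0 ** 2))   # a "particle", nm
z = 0.05 * x - 0.02 * y + 3.0 + bump                                      # tilt + offset + feature

# Least-squares fit of the background plane.
A = np.column_stack([np.ones(z.size), x.ravel(), y.ravel()])
coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
plane = (A @ coeffs).reshape(z.shape)
z_flat = z - plane

print("fitted plane coefficients (offset, slope_x, slope_y):", np.round(coeffs, 3))
print(f"peak-to-valley before: {np.ptp(z):.2f} nm, after subtraction: {np.ptp(z_flat):.2f} nm")
```

In practice, pronounced features are often masked before the fit (or a 2nd-order
surface is used) so that they do not bias the estimated background.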
Any of the above artefacts can occur on a daily basis. They must be identified
and removed as well as possible by electronic correction, post-processing
software correction, tip changing, sample grounding, etc. Experience can
provide guidance in finding the best solution.

~ 38 ~
Electric Force Microscopy (EFM)
In Electric Force Microscopy (EFM) the probe, a metal-coated silicon tip, can
‘feel’ some electric properties of the surface [Stangoni, 2005; Girard 2001]. In
the main configuration, a grounded sample is scanned by a DC-biased cantilever.
The opposite operation is also possible: a DC-biased sample is scanned by a
grounded cantilever. In this way, it is possible to obtain both a topographic
image and a spatial distribution of the electric forces.
The EFM measurement is obtained either with a single scan or with the so called
“two-pass technique”. The first method obtains both topographic and electric
information in a single scan, while in the “two-pass technique” the
measurement is performed in two phases. In the first scan, performed in contact
mode, the tip acquires the surface morphology; then the tip is raised to a
constant distance from the surface (10-100 nm) and the EFM measurement is
performed. The “two-pass technique” makes it possible to exclude the topographic
influence during the measurement and reduces tip damage, such as the removal of
the conductive coating layer from the tip apex. This method enables the study of
the conductivity and electric patterns of sample surfaces, such as semiconductor
devices and composite conductors.

Scanning Capacitance Microscopy (SCM)


Scanning Capacitance Microscopy (SCM) is another technique for collecting
electrical information on material surfaces. SCM images the sample capacitance
distribution. A metal-coated tip is needed in this technique. The
measurement is performed with the “two-pass technique”. During the first pass,
the tip - in semi-contact mode - collects information on the topography of the
sample surface; in the second pass, the tip operates in tip-sample constant height
mode. A time-varying bias voltage is applied between the metal-coated tip and
the sample. As the probe-sample separation is kept constant, the variation in tip
vibration amplitude is related to variation in probe-sample capacitance. The scan
is performed all over the selected area and the resulting variation in capacitance
is mapped. This technique is widely applied in the semiconductor industry.
Many applications, such as dopant distribution maps, failure analysis, variations
in the thickness of a dielectric material on a semiconductor substrate, and
sub-surface charge-carrier distributions, can be addressed [Stangoni, 2005;
Girard 2001].

~ 39 ~
Kelvin Probe Force Microscopy (KPFM) or Scanning Surface Potential Microscopy
(SSPM)
Scanning Surface Potential Microscopy (SSPM), also known as Kelvin Probe Force
Microscopy (KPFM), is a method used to obtain information on the surface
potential distribution. The electrical signal is acquired using a metal-coated
silicon tip. The scanning is performed with the “two-pass technique”. During the
first pass, the tip acquires the topography of the sample surface in semi-contact
mode (mechanically excited at its resonant frequency); in the second pass the tip
is raised to a fixed distance from the sample, electrically excited at its
resonant frequency, and a DC bias plus an AC component is applied to the
cantilever. The DC component is adjusted in order to nullify the oscillation
amplitude of the tip. When this condition is satisfied, the DC component equals
the local surface potential. By scanning the sample surface it is possible to
obtain its potential map.
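
The nulling condition can be made explicit from the electrostatic force on the
biased tip, F = 1/2 (dC/dz) (Vdc - Vsurf + Vac*sin(wt))^2, whose component at the
excitation frequency is proportional to (Vdc - Vsurf)*Vac and therefore vanishes
exactly when Vdc matches the local surface potential. The sketch below
demonstrates this numerically with a lock-in style demodulation; all parameter
values are arbitrary illustrations.

```python
# Illustrative sketch: the KPFM nulling condition. The electrostatic force
# F(t) = 0.5*dCdz*(Vdc - Vsurf + Vac*sin(w*t))^2 has a component at w proportional
# to (Vdc - Vsurf); a lock-in style demodulation shows it vanishing at Vdc = Vsurf.
import numpy as np

V_SURF = 0.35       # local surface potential, V (arbitrary assumption)
V_AC = 0.5          # AC excitation amplitude, V (assumption)
DCDZ = 1e-10        # capacitance gradient, F/m (assumption, only a scale factor)

t = np.linspace(0.0, 1.0, 20000, endpoint=False)   # window spanning whole periods
w = 2.0 * np.pi * 50.0                              # 50 cycles in the window

def force_component_at_w(Vdc: float) -> float:
    """Amplitude of the force component oscillating at w (lock-in demodulation)."""
    F = 0.5 * DCDZ * (Vdc - V_SURF + V_AC * np.sin(w * t)) ** 2
    return 2.0 * np.mean(F * np.sin(w * t))

for Vdc in (0.0, 0.2, 0.35, 0.5):
    print(f"Vdc = {Vdc:4.2f} V -> F_w ~ {force_component_at_w(Vdc):+.3e}  (zero at Vdc = V_SURF)")
```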
KPFM provides information on the electrical properties of metallic
nanostructures. Moreover, high-resolution KPFM has been used to probe
semiconductor devices in order to provide high-resolution potential profiles
[Wilhelm et al., 2011], as well as to investigate the electronic properties of
defects on clean semiconductor surfaces.

Magnetic Force Microscopy (MFM)


This technique can map the magnetic domains in magnetic materials. It usually
requires silicon or silicon nitride tips coated with a thin magnetic film of Co or
CoCr. In order to minimize topographic influence, the measurement is
performed with the “two-pass technique”. After the acquisition of the surface
profile in the first scan, the tip is raised at a fixed distance from the sample and
moved over the surface following the surface topography contour. If the distance
is “big enough” the tip is not affected by surface topography influence but
“feels” just long range forces such as, in this case, the magnetic forces of the
sample. The controller registers the amplitude and phase variation of the
cantilever oscillation, which depends on the spatial variation of the magnetic field.
MFM allows the observation of magnetic domains whose sizes range from a few to
several tens of nanometres. Applications range from studies of magnetism in
rocks to magnetic material inclusions, MFM of hard disks, etc.
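In the standard dynamic-mode description (valid for small force gradients), the measured phase shift is related to the vertical gradient of the tip-sample force by

\[ \Delta\varphi \approx -\frac{Q}{k}\,\frac{\partial F_z}{\partial z} \]

where Q is the cantilever quality factor and k its spring constant, which is why the second-pass phase image maps the spatial variation of the magnetic force gradient.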

Lateral Force Microscopy (LFM)
When the tip slides on a sample surface, lateral forces are generated. They can
be considered drawbacks if the study aims at obtaining topographic information.
Moreover, if the sample is soft, the tip, scanning in contact mode, can scratch the
surface and can collect adsorbate particles. On the other hand, if the sample is
hard, like silicon or metal, the tip can slip, degrading image resolution or
introducing artefacts. However, the torsional motion of the cantilever can be
used to collect information on changes in the chemical composition of surfaces.
In lateral force mode (or torsion mode, or frictional mode) the system records
information on the forces exerted upon the probe tip in the lateral direction as it
scans across a surface. This information is collected in contact mode, together
with topography. If a surface is perfectly flat, variation in the phase signal can
provide information on changes in composition or on variation in frictional
forces. It is also possible to provide quantitative information on friction values if
tip and cantilever dimensions, as well as cantilever spring constant are known.
This method can be applied to many different materials such as semiconductors,
polymers, thin film layers, data storage devices in order to study surface
contamination, chemical speciation and frictional characteristics in the
nanotribology field.
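As an illustration of how a quantitative friction signal is commonly extracted from lateral force data, a minimal sketch follows; the calibration factor and the synthetic loop values are illustrative assumptions, not data from any study cited here.

    import numpy as np

    def friction_signal(lateral_trace_v, lateral_retrace_v, calibration_nN_per_V=1.0):
        """Friction from a lateral-force loop: half the trace-retrace difference,
        converted with a (placeholder) calibration factor in nN/V."""
        half_loop_v = 0.5 * (lateral_trace_v - lateral_retrace_v)
        return calibration_nN_per_V * half_loop_v.mean()

    # synthetic friction loop: +/-0.2 V lateral signal plus noise
    trace = 0.2 + 0.01 * np.random.randn(512)
    retrace = -0.2 + 0.01 * np.random.randn(512)
    print(friction_signal(trace, retrace), "nN (with the 1 nN/V placeholder calibration)")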

2.2.2 Application cases


As explained in the previous section, AFM is a versatile technique that finds
application in different areas. AFM can help in metallurgy to determine the
surface characteristics of final products. It can be applied to biological samples,
thanks to the possibility of investigating cells and molecules in liquid and
physiological solutions, or it can be applied to microelectronic materials to
investigate morphology, but it can also give information on surface conductivity
or dopant active areas. The examples below give an idea of some of its applications.

Determining nanoparticles size


In this example, acrylamide-based nanoparticles (NPs) were used to target the
hormone Hepcidin-25, which can give information on iron dysmetabolism and
doping. These particles were produced by precipitation polymerization and a
post-production size characterization was required. Since the particles were
provided in a high-density aqueous solution, sample preparation was the
bottleneck of this case study.
First, the authors deposited a drop of the NP solution on a silicon substrate and
dried it in vacuum. Figure 2.2.4.A shows the substrate area covered by the NPs.
The NPs are deposited in a continuous layer and, due to their density, determining
their dimensions (height and FWHM) from the cross section is not reliable. The
authors therefore decided to dilute the solution. After two successive 1:10
dilutions, the deposited particles appeared isolated (Figure 2.2.4.B) and
cross-section analysis can give the correct diameter and height information (a
statistical analysis was performed) [Cenci et. al, 2015].
The AFM images were acquired with a Solver Px Scanning Probe Microscope
from NT-MDT. AFM data were acquired in semi-contact mode with a silicon tip
(~5.5 N/m, ~120 kHz) with a nominal radius of less than 10 nm. Analyses were
performed with scan areas of 1×1 µm².
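A minimal sketch of how height and FWHM can be read off an isolated-particle cross section is given below; the synthetic profile and the baseline estimate are illustrative assumptions, not the actual analysis of [Cenci et. al, 2015].

    import numpy as np

    def particle_height_and_fwhm(x_nm, z_nm):
        """Estimate height and FWHM of an isolated particle from a 1-D AFM cross section."""
        baseline = np.median(z_nm)        # substrate level, assuming most of the profile is bare substrate
        profile = z_nm - baseline
        height = profile.max()
        above = np.where(profile >= height / 2.0)[0]
        fwhm = x_nm[above[-1]] - x_nm[above[0]]
        return height, fwhm

    # synthetic 1 um line profile with one ~28 nm wide, 20 nm tall particle
    x = np.linspace(0.0, 1000.0, 1024)
    z = 20.0 * np.exp(-((x - 500.0) / 17.0) ** 2) + np.random.normal(0.0, 0.2, x.size)
    print(particle_height_and_fwhm(x, z))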

Figure 2.2.4. AFM images of nanoparticles attached to a flat silicon surface. A)
as received, high density; on the right, the cross section along the white line
across some nanoparticles. B) dispersed after 1:100 dilution; on the left, a zoomed
area; on the right, a 1×1 micron scan area, z range in the vertical bar.

Stainless steel
Topographical studies can often provide information about other bulk properties
such as hardness or friction. In particular, surface roughness can affect the
measurement of hardness. Coatings based on amorphous carbon, the so-called
diamond-like carbon (DLC) coatings, are widely used to increase the hardness of
the original material. Moreover, DLC coatings increase the wear resistance of
components subjected to severe working conditions, have good corrosion
resistance and high biocompatibility. Topographic analysis at the micro- or
nano-scale is essential for the characterization of functional thin coatings [Borrero et
al, 2010]. In this example, the required analysis was the morphological
characterization of steel surfaces coated with DLC prepared in various
conditions. The work [Onorati et al., 2017], in fact, was dedicated to presenting
an original alternative method to evaluate nano-hardness other than the
conventional use of a micro-indenter. The possibility of evaluating coating hardness
was tested by using ion beam sputtering through Secondary Ion Mass
Spectrometry (SIMS).

Figure 2.2.5. AFM images, 10×10 µm² scan areas. Image A) pristine widia surface
(RMS = 9 nm); images B to D show the same surface with three DLC
coatings [Onorati et al., 2017]. The RMS is 155.6 nm, 95.6 nm and 20.5 nm,
respectively.
Figure 2.2.5.A shows the original substrate (widia, a very hard material usually
used in cutting tools and other industrial applications), while Figures 2.2.5.B, C
and D show three different DLC coatings. The morphology of the surfaces of the
different coatings was characterized with an Atomic Force Microscope (AFM)
from NT-MDT (UniSolver). Analysis was performed in semi-contact
mode with a silicon tip with a nominal radius of less than 10 nm. Sample scans
were performed in different positions with 20×20, 10×10, 5×5 and 1×1 μm² scan
sizes. For each scan area acquired, we measured the average surface roughness
(Sa) and the root mean square surface roughness (Sq) [UNI EN ISO 4287, UNI
EN ISO 4288].
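For reference, the two roughness parameters can be computed from a levelled height map as in the following minimal sketch (synthetic data; the plane/tilt removal required on real scans is not shown):

    import numpy as np

    def roughness(height_map_nm):
        """Average (Sa) and root mean square (Sq) areal roughness of a levelled height map."""
        z = height_map_nm - height_map_nm.mean()
        sa = np.abs(z).mean()
        sq = np.sqrt((z ** 2).mean())
        return sa, sq

    surface = np.random.normal(0.0, 9.0, (256, 256))   # synthetic surface with ~9 nm RMS
    print(roughness(surface))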

Electrical measurements
Sometimes materials are important not only for their nanoscale structure but also
for other physical properties. Graphene-based coatings have lately become very
appealing, since graphene can substantially modify the bulk properties of the
underlying material. One of the primary reasons for the interest in graphene is its
impressive electrical properties, but also its mechanical properties, such as high
strength and hardness as well as low friction. The example in Figure 2.2.6.A
shows an AFM height image (500×500 nm²) of periodic silicon ripples. The
structures show a period of 20-30 nm and a ripple height of 2-3 nm. The image to
the right, Figure 2.2.6.B, shows the SKM image of the same area. No surface
potential signal is obtained.

Figure 2.2.6. A) AFM image on nanostructured silicon surface, B) SKM image


on the same surface. 500x500 nm2.

Figure 2.2.7 shows the same sample with a graphene monolayer deposited by
CVD. The image on the left was acquired in semi-contact mode, the one on the
right was acquired in SKM mode. In this case, where the graphene layer was
present, a surface potential signal was generated.
The analysis was performed with a Solver Px Scanning Probe Microscope from NT-
MDT. AFM data were acquired with a Pt-coated silicon tip (~11.8 N/m, ~240
kHz) with a nominal radius of 35 nm. The first-pass height scan was performed in
semi-contact mode. In the second pass, surface potential data were acquired in
SKM mode, raising the tip 10 nm above the surface profile.

Figure 2.2.7. A) AFM image on Graphene on nanostructured silicon surface, B)


SKM image on the same surface. 500x500 nm2.

Conclusions
STM is a very powerful technique that allows a rapid and relatively inexpensive
investigation of sample morphology at the atomic and nano scales. However, this
technique can be applied only to conductive samples, and often requires a
vacuum environment and active isolation from external ambient vibration. On
the contrary, AFM is a versatile technique that is suitable for any kind of sample
with a morphology roughness in the range of a few microns. In fact, AFM makes
it possible to visualize, in a 3D image, features a few nanometres in size, including
atoms and molecules on a surface. In recent years many applications have
been developed to measure other surface properties together with morphology
by varying the tip coating and the feedback control. In addition to physical dimensions,
it is possible to analyse hardness, friction, and electrical or magnetic signals, and
also to manipulate (move) objects across the surface.

References
[Mironov, 2004] Mironov V. L. (2004), Fundamentals of scanning probe
microscopy, The Russian Academy of Sciences institute, Retrieved from http://ip-
ras.ru/UserFiles/publications/mironov/Fundamentals_SPM.pdf.
[Garcia, 2002] Garcia R. and Perez R. (2002), Dynamic atomic force
microscopy methods, Surface science reports 47, 197-301.
[Bowen, 2009] Bowen W. R. and Hilal N. (2009), Atomic Force Microscopy in Process
Engineering: An Introduction to AFM for Improved Processes and Products.
Hardcover ISBN: 9781856175173, eBook ISBN: 9780080949574.
[Stangoni, 2005] Stangoni M. V. (2005), Scanning Probe Techniques for Dopant
Profile Characterization, Diss. ETH No. 16024, retrieved from http://e-collec-
tion.library.ethz.ch/eserv/eth:28140/eth-28140-02.pdf.
[Girard, 2001] Girard P. (2001), Electrostatic force microscopy: principles and
some applications to semiconductors, Nanotechnology 12(4), 485-490.
[Wilhelm, 2011] Melitz W., Shen J., Kummel A. C., Lee S., (2011), Kelvin
probe force microscopy and its application, Surface Science Reports 66, 1–27.
[Eaton, 2010] Eaton P. and West P. (2010), Atomic Force Microscopy,
ISBN:9780199570454.
[Cenci, 2015] Cenci L., Andreetto E., Vestri A., Bovi M., Barozzi M., Iacob E.,
Busato M., Castagna A., Girelli D. and Bossi A. M., (2015). Surface plasmon
resonance based on molecularly imprinted nanoparticles for the picomolar
detection of the iron regulating hormone Hepcidin-25, J. Nanobiotechnol 13:51
DOI 10.1186/s12951-015-0115-3.
[Onorati, 2017] Onorati E., Iacob E., Bartali R., Barozzi M., Gennaro S., Bersani
M., (2017) Experimental study by Secondary Ion Mass Spectrometry focused on
the relationship between hardness and sputtering rate in hard coatings, Thin
Solid Films 625, 35–41.
[UNI EN ISO 4287] UNI EN ISO 4287, Geometrical Product Specifications
(GPS) -- Surface texture: Profile method -- Terms, definitions and surface
texture parameters.

[UNI EN ISO 4288] UNI EN ISO 4288, Geometrical Product Specifications
(GPS) -- Surface texture: Profile method -- Rules and procedures for the
assessment of surface texture.
[Borrero, 2010] Borrero-Lopez O., Hoffman M., Bendavid A., Martin P. J.
(2010), Substrate effects on the mechanical properties and contact damage of
diamond-like carbon thin films, Diamond and Related Materials, 19 1273–1280.
[Iacob, 2016] Iacob E., Dell'Anna R., Giubertoni D., Demenev E., Secchi M.,
Böttger R., Pepponi G., (2016) Nanofabrication of self-organized periodic
ripples by ion beam sputtering, Microel Eng, 155, 50–54.

2.3 HELIUM ION MICROSCOPY (HIM)

Massimo BERSANI
bersani@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
In the mid 2000s a new scanning microscopy was introduced that uses a beam of
helium ions focused and scanned across the sample. The main features and
related applications of Helium Ion Microscopy (HIM) are similar to those of
classical scanning electron microscopy (SEM); in this case the primary beam
consists of ions and the collected signal of emitted secondary electrons. However,
HIM presents many advantages with respect to SEM techniques:
➢ First of all, the ion-matter interaction is completely different in
comparison to the electron-matter interaction. Ions do not suffer diffraction effects,
and the interaction volume with matter is more limited [1].
➢ Ions produce more secondary electrons (SEs) per incident particle, so a
faster acquisition is possible.
➢ The helium ion source offers high brightness (4 × 10⁹ A/cm²·sr) and a
small energy spread (ΔE/E ~ 3 × 10⁻⁵), and hence allows the beam to be
focused to small probe sizes [2].
➢ HIM presents a sub-nanometric resolution comparable with TEM
performance, but the sample preparation is easier and the field of
applications wider.
HIM also presents many advantages in comparison with other ion microscope
instrumentation. In fact, microscopes based on the gallium liquid metal ion source
(LMIS) present several limitations concerning lateral resolution [3], sputtering and
sub-surface damage [1].
HIM technique is based on three different stages:
1. Helium ion production
2. Beam formation and control
3. Sample interaction
Concerning sample interaction, the production of secondary electrons allows, as
mentioned, high performance microscopy images with a resolution around 0.3 nm.
Rutherford backscattered He ions can also be detected to carry out composition
analysis, but this approach does not seem effective [4; 5]. X-ray fluorescence [6]
and SIMS effects [7] have also been observed; they represent a possibility to
obtain chemical information in a high-resolution 3D mode, but at the moment they
are in a development phase and only prototype instruments are available.

2.3.1 Principles
Helium Ion Microscopy uses a focused He ion beam as the primary probe, and the
monitored signal consists of the emitted secondary electrons, with energies around 3
eV. The primary acceleration energy is usually set between 5 and 35 keV. The
best probe size is around 0.3 nm, which represents the lateral resolution limit of the
technique. A Ne beam can also be used by changing the source inlet gas.
A schematic of the technique is reported in Figure 2.3.1. A low energy
electron beam is used to flood the areas of positive charge accumulated (by He+) on the
specimen surface and neutralize the charge.

Figure 2.3.1. Orion instrumentation schematic

The interaction between ions and matter is completely different from the electron-
matter interaction, as reported in Figure 2.3.2. With ions the emitting region is
more limited, so the lateral resolution is improved and directly dependent on the
probe size.

Figure 2.3.2. Electron (left) and ion (right) interaction with matter.

Key attributes of the technique are:


High Resolution Imaging
● Sub nm Probe size
● Brightness - 5x FSEM
● Large Depth of Focus
Total Sample Information
● Topographic Contrast
● Z Contrast
● Voltage Contrast
Advanced Charge Control
● Low Probe Current
● Surface Charge Immunity
● Full Beam Energy Operation
● Electron Flood - an Option

Ions have a lower velocity than electrons at the same energy. Therefore the ion
wavelength is smaller, as pointed out by the de Broglie equation:

λ = h/p = h/√(2mE)

The smaller wavelength also contributes to a better resolution, combined with a
large depth of field.
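A quick numerical comparison based on the non-relativistic de Broglie relation above (constants from standard tables; 30 keV chosen as a typical beam energy):

    import math

    H = 6.62607015e-34                        # Planck constant [J s]
    E_CHARGE = 1.602176634e-19                # elementary charge [C]
    M_ELECTRON = 9.1093837015e-31             # electron mass [kg]
    M_HELIUM = 4.002602 * 1.66053906660e-27   # helium mass [kg]

    def de_broglie_wavelength_m(mass_kg, energy_eV):
        """lambda = h / sqrt(2 m E), non-relativistic."""
        return H / math.sqrt(2.0 * mass_kg * energy_eV * E_CHARGE)

    for label, mass in (("electron", M_ELECTRON), ("He ion", M_HELIUM)):
        print(label, de_broglie_wavelength_m(mass, 30e3), "m")   # ~7e-12 m vs ~8e-14 m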
The technique can also be used to perform 3D nanofabrication of sub-10 nm
structures with He and Ne beams by increasing the ion beam current. It is also possible
to perform rapid prototyping with a complementary Ga beam.

2.3.2 Instrumentation
The core of the HIM instrument is the gas field ionization source (GFIS). The basis
of the helium ion source was developed by E. Müller at the beginning of the
1950s in Berlin [8]. The source consists of a single crystal metal which is
fabricated in a needle shape (emitter). A large positive voltage is applied to the
ion source, in order to have an electric field at the apex atoms of a few V/Å.
The electric field strength is not quite sufficient to induce the evaporation of the
apex atoms [2]. The emitter, with a suitable end-face geometry, produces just
three beamlets from the atomic trimer at the apex. A single beamlet is selected and
focused on the sample by mechanical slits and electric fields. In 2006 Zeiss sold
the first commercial HIM, named Orion, to NIST [8].
The current Zeiss version is named Orion NanoFab [9] and it can use two
different source gases, helium and neon. The instrument is also equipped with a
gallium column for high rate ion milling. The Orion instrument, besides providing
outstanding nano-scale images, has also been used to produce nanostructures. Several
nanofabrication applications have recently been published in several fields:
microelectronics, photonics, plasmonics, biology, sensors, graphene et al.
[10;11;12;13;14].
The Orion instrument can also analyze non-conductive samples by using a low energy
electron beam to flood the areas of positive charge accumulated (by He+) on the
specimen surface. It is not necessary to coat the specimens with a conductive
layer (as in SEMs), which can induce artifacts [15]. The operating vacuum is
around 10⁻⁶ bar.

2.3.3 Application Nanostructured Ge Layers
Ion implantation of high-mass species into germanium is known to induce a
characteristic “honeycomb” damage structure, impossible to anneal out with
conventional thermal treatments [16; 17; 18]. In the present example, Sn ion
implantation at room temperature has been performed on Ge layers. By varying
experimental parameters such as implant temperature, implanted ion species,
implanted ion dose and sample structure, it is possible to control the nanostructure
morphology. On this topic it is crucial to have a microscopy tool able to carry out
an outstanding characterization.
In Figure 2.3.3, a comparison between HIM and SEM images is reported. The HIM
analysis was performed with an Orion NanoFab at the Zeiss facility in Peabody;
the SEM analysis was carried out using a Jeol 7401F field emission
instrument at the FBK analytical laboratories.

Figure 2.3.3. Comparison between a HIM image (left) and a SEM image (right) at the
same magnification (50,000×).

The magnification is around 50,000× for both images. A superior contrast can be
observed in the HIM image. Moreover, residual material inside the
honeycombs is pointed out only by the HIM image. The larger depth of field of HIM
allows a more realistic rendering of the 3D structure. In Figure 2.3.4 a HIM-SEM
comparison at a magnification of 250,000× is reported.

Figure 2.3.4. Comparison between a HIM image (left) and a SEM image (right) at the
same magnification (250,000×).

In this case (Figure 2.3.4) the differences between the two techniques are more
evident. SEM also starts to show resolution problems. On the contrary, the HIM
image shows an excellent contrast.
In Figure 2.3.5 a sample cross section acquired with the HIM instrument is reported.

Figure 2.3.5. HIM cross section image (228,800×)
The resolution is comparable to TEM analysis, but the sample preparation
requires less than 10 minutes versus the several hours required by TEM. In this way a
large set of samples can be analyzed in the same measurement run and without
dependence on sample preparation.
From the examples shown, the following strengths of HIM in comparison with
other classical microscopy techniques can be summarized:
 Images show a larger depth of focus
 High sensitivity to low-Z matter
 Better contrast
 Higher depth resolution
 Easy sample preparation
Some drawbacks pointed out are:
 No elemental data
 Limited high-resolution mode compared with TEM
In Figure 2.3.6, a germanium layer is etched by ion milling to obtain a sample
cross section to be analyzed. The cross section on the right is obtained with the Ga ion
beam; as can be observed, the cut face is completely modified by the Ga
beam. On the contrary, on the left side the cross section etched with the Ne ion beam
does not show any modification and the analysis can be carried out without
artifacts.

Figure 2.3.6. HIM image. Ge cut by Gallium (left) and by Neon (right).

References
[1] J. Orloff, M. Utlaut, L.W. Swanson “Interaction of Ions with Solids.” In:
High Resolution Focused Ion Beams, Kluwer Academic/Plenum, (2002).
[2] John Notte, Bill Ward, Nick Economou, Ray Hill, Randy Percival, Lou
Farkas, Shawn McVey” An Introduction to the Helium Ion Microscope” AIP
Conference Proceedings, 931(1), 489 (2007).
[3] J.Orloff, L.W. Swanson, M. Utlaut, “Fundamental limits to imaging
resolution for focused ion beams”. J. Vac. Sci.Technol B 14(6), pp. 3759-3763
(1996).
[4] D. Joy, B Griffin, “Is Microanalysis Possible in the Helium Ion
Microscope?” Microscopy and Microanalysis, 17, 643-649 (2011).
[5] S Kostinski, N Yao, “Rutherford backscattering oscillation in scanning
helium-ion microscopy”, Journal of Applied Physics, 109, 064311 (2011).
[6] D Joy, H Meyer, M Bolorizadeh, Y Lin, D Newbury, “On the Production of
X-rays by Low Energy Ion Beams”, Scanning, 29, 1-4 (2007).
[7] T. Wirtz, N. Vanhove, L. Pillatsch, D. Dowsett, S. Sijbrandij, and J. Notte.
“Towards secondary ion mass spectrometry on the helium ion microscope: An
experimental and simulation based feasibility study with He1 and
Ne1bombardment”. App. Phy. Lett. 101, 041601 (2012).
[8] N Economou, J Notte, W Thompson, “The History and Development of the
Helium Ion Microscope”, Scanning, 34(2), 83-89 (2012).
[9] https://www.zeiss.com/microscopy/int/products/multiple-ion-beam/orion-
nanofab-for-materials.html
[10] S Boden, Z Moktadir, D Bagnall, H Mizuta, H Rutt, “Focused helium ion
beam milling and deposition”, Microelectronic Engineering, 88(8), 2452-2455
(2011).
[11] Annamalai, S Mathew, V Viswanathan, C Fang, D Pickard, M Palaniapan,
“Design, fabrication and Helium Ion Microscope patterning of suspended
nanomechanical graphene structures for NEMS applications”, Solid-State
Sensors, Actuators and Microsystems Conference (2011).
[12] L Scipioni, D Ferranti, V Smentkowski, R Potyrailo, “Fabrication and
initial characterization of ultra-high aspect ratio vias in Gold using the Helium
Ion Microscope”, Journal of Vacuum Science & Technology B, 28(6), C6P18
(2010).
[13] P Alkemade, E Veldhoven, “Deposition, Milling, and Etching with a
Focused Helium Ion Beam”, Nanofabrication Book Chapter, 275-300 (2012).
[14] Fox, Y Chen, C Faulkner, H Zhang, “Nano-structuring, surface and bulk
modification with a focused helium ion beam”, Beilstein Journal of Technology,
3, 579-585 (2012).
[15] G. Hlawacek, V. Veligura, R. van Gastel and B. Poelsema. “Helium Ion
Microscopy”. JVSTB 32, 020801 (2014).
[17] Maria Secchi, Evgeny Demenev, Damiano Giubertoni, Salvatore Gennaro,
Massimo Bersani, Tiziana Del Buono, Onofrio Antonino Cacioppo, Florian
Meirer, Suyog Gupta

SECTION 3
SPECTROSCOPY
AND
SPECTROMETRY

3.1 X-RAY DIFFRACTION (XRD)
Massimo BERSANI
bersani@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
X-ray diffraction (XRD) is a versatile, non-destructive technique that reveals
detailed information about the chemical composition and crystallographic
structure of natural and manufactured materials. The technique is far from being a
nano-characterization tool, since it cannot provide any suitable spatial resolution at
the nanometre level; nevertheless, nanoparticles can also be analysed by XRD.
When a monochromatic X-ray beam with wavelength λ is projected onto a
crystalline material at an angle θ, diffraction occurs only when the distance
travelled by the rays reflected from successive planes differs by a complete
number n of wavelengths.
The relation is described by the following equation (Bragg's law):

nλ = 2d sin θ

where:
 λ is the X-ray wavelength
 n is a positive integer (the diffraction order)
 d is the lattice plane spacing
 θ is the scattering angle
Bragg's law was formulated by Sir W. L. Bragg in 1913 and it is the
fundamental law of X-ray diffraction on crystalline materials.
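A quick numerical illustration of Bragg's law (the d-spacing and wavelength below are illustrative textbook values, not data from this chapter):

    import math

    def bragg_two_theta_deg(d_angstrom, wavelength_angstrom, n=1):
        """Diffraction angle 2*theta in degrees from n*lambda = 2*d*sin(theta)."""
        s = n * wavelength_angstrom / (2.0 * d_angstrom)
        if s > 1.0:
            return None        # reflection not accessible at this wavelength/order
        return 2.0 * math.degrees(math.asin(s))

    # e.g. Si(111), d ~ 3.136 A, probed with Cu K-alpha (1.5406 A) -> 2theta ~ 28.4 deg
    print(bragg_two_theta_deg(3.136, 1.5406))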
By varying the angle theta, the Bragg's Law conditions are satisfied by different
d-spacings in polycrystalline materials. Plotting the angular positions and
intensities of the resultant diffracted peaks of radiation produces a pattern, which
is characteristic of the sample. Where a mixture of different phases is present,
the resultant diffractogram is formed by addition of the individual patterns.
By XRD it is also possible to study amorphous materials, by analyzing the
whole scattering signal.

Based on the principle of X-ray diffraction, a wealth of structural, physical and
chemical information about the investigated material can be obtained.
In the past decades many books have been published on the detailed theory and
fundamentals; an example is the book by Schwartz and Cohen [1].

Figure 3.1.1 X-ray scattering between two atomic planes

X-ray diffraction uses a collimated beam of photons (usually monochromatic) to
investigate the crystalline characteristics of materials. The interference figure
produced is in fact related to the lattice characteristics of the crystalline phases
that make up the material.
The XRD instrumentation is basically formed by the following elements:
 A monochromatic X-ray source; for laboratory equipment the following
characteristic X-ray lines are used: Ag (0.55941 Å); Mo (0.7093 Å); Cu
(1.540598 Å); Ni (1.65791 Å); Co (1.78897 Å); Fe (1.93604 Å); Cr
(2.2897 Å)
 A rotating stage to change the incident angle of the X-ray beam with respect to
the sample
 A detector to measure the diffracted X-ray signal
 An optical system to focus the X-ray beam and to improve its
monochromaticity
The cost of complete commercial XRD equipment can vary from 150 k€ to
over 1 M€.

3.1.1 Applications
XRD is a versatile, non-destructive technique which provides detailed
information on the micro- and crystallographic structure and chemical
composition of all types of synthesized as well as natural materials.
The use of X-ray diffraction on materials ranges from the simple identification of
the phases to the study of the degree of amorphization, up to the thickness
measurement of deposited layers. Other applications concern the size of
grains in polycrystals and the measurement of residual stress; in this particular
case the interplanar distance of a given crystallographic peak is used as an
atomic-scale extensometer.
A typical example of an XRD spectrum is reported in Figure 3.1.2.

Figure 3.1.2. Zincite spectrum; the peaks are due to the constructive interference of
X-rays reflected by the crystalline planes of the sample.

From the peak positions it is possible to obtain the geometry of the crystalline cell
and thus identify the crystalline phases present in the sample.
Peak intensities allow the determination of the atoms and their positions in the cell, as
well as of the preferential orientation.
Peak shape and asymmetry contain the information about the average size of the
crystalline domains and about crystalline defects (dislocations, stacking faults, etc.).
Peak shifts upon sample rotation are connected to the sample residual stress.
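One standard way (not detailed in this chapter) to turn the peak broadening mentioned above into an average crystalline domain size is the Scherrer equation; a minimal sketch, assuming the instrumental broadening has already been subtracted:

    import math

    def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
        """Mean crystallite size D = K*lambda / (beta*cos(theta)), with beta in radians."""
        beta = math.radians(fwhm_deg)
        theta = math.radians(two_theta_deg / 2.0)
        return k * wavelength_nm / (beta * math.cos(theta))

    # a 0.5 deg wide peak at 2theta = 36 deg with Cu K-alpha -> crystallites of roughly 17 nm
    print(scherrer_size_nm(0.5, 36.0))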
Standard field of applications are:
 Environmental analysis: rocks, soils, clays, minerals, fine powders, free
silica, asbestos and fibers in general
 Cement, oil, glass, textile, electronics, nuclear industry
 Studies on catalysts
 Polymers, explosives, ceramic materials and new materials
 Agricultural, biological and chemical sciences
 Pharmaceutical and cosmetics
 Forensic Sciences
 Archeology, archeometry, art ...
Recently the use of XRD on nanomaterials has been developed and specific
applications have been published [2; 3]. XRD can be particularly useful to
analyze standalone nano-powders or nano-powders dispersed in different matrices
such as polymers, liquids and biomaterials [4]. The nanostructure of low-crystallinity
carbon materials has also been analyzed by using high-energy X-ray diffraction [5].
Crystalline forms, chemical state and the crystalline/amorphous ratio are in fact very
important also for nano-powder characterization. For example, nanomaterials such as
tetrapods (as reported in Figure 3.1.3) and similar structures are commonly analyzed
by XRD. Anyway, a large amount of material is required and no lateral
resolution at the nanometre level can be provided.
Figure 3.1.3 ZnO tetrapod nanocrystals. Image obtained by He Ion Beam
Microscopy.

Moreover, further X-ray scattering techniques have been developed to analyse
nano-layers; a complete overview can be found in the reported reference [6].

Summary
The XRD method can be considered a bulk analysis technique and can be used for
the characterization of nanomaterials, primarily in powder form. Average
information is obtained, which can be correlated with the results of other analyses.
The main advantages of XRD are that it is non-destructive and non-invasive, and
that, since it does not require vacuum for sample preparation, it imposes few
constraints.

References
[1] L.H. Schwartz and J.B. Cohen, Diffraction from Materials, Springer, Berlin,
1987.
[2] Dillip Kumar, Characterization of Nanomaterials by X-Ray Diffraction,
McGraw-Hill Professional, Bombay, 2010.
[3] Valeri Petkov, Nanostructure by high-energy X-ray diffraction, Materials Today,
Volume 11, Issue 11, November 2008, Pages 28-38.
[4] M. Gateshki, V. Petkov, G. Williams, S. K. Pradhan, and Y. Ren, Phys. Rev.
B 71, 224107, 2005.
[5] Maheshwar Sharon, Carbon Nano Forms and Applications, McGraw-Hill,
New York, 2010.
[6] G. Friedbacher, H. Bubert, Surface and Thin Film Analysis, Wiley,
Singapore, 2011.

~ 66 ~
3.2 X-RAY FLUORESCENCE ANALYSIS

Giancarlo PEPPONI
pepponi@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
3.2.1 X-ray interaction with matter
X-Rays are electromagnetic radiation with photon energies approximately in the
range 100 eV to 100 keV. Considering the Planck-Einstein relation (E = hf, where
h is the Planck constant, ~4.136 × 10⁻¹⁵ eV·s, and f the frequency), the frequencies are in
the 10¹⁶-10²⁰ Hz range; considering that for electromagnetic radiation frequency
and wavelength are inversely proportional, with the speed of light c as constant
of proportionality (f = c/λ), the wavelength is related to the energy by the relation
λ = hc/E, i.e. λ[nm] = 1239.8/E[eV]. Hence x-rays have wavelengths approximately
in the range 10-10⁻² nm.
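The energy-wavelength conversion can be checked numerically with a couple of lines (the two energies used are just the ends of the range quoted above):

    HC_EV_NM = 1239.84   # h*c in eV*nm

    def wavelength_nm(energy_eV):
        """lambda[nm] = 1239.84 / E[eV]"""
        return HC_EV_NM / energy_eV

    print(wavelength_nm(100.0))    # ~12.4 nm at 100 eV
    print(wavelength_nm(100.0e3))  # ~1.2e-2 nm at 100 keV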
In the x-ray range the interaction of electromagnetic radiation with matter involves three
phenomena: photoelectric absorption, elastic scattering and inelastic (Compton)
scattering. The probability of these phenomena is described by the atomic total
cross sections. All three contribute to a macroscopic attenuation of the radiation
when passing through matter. Such attenuation is described by the Beer-Lambert
law, I(x) = I₀ e^(−μx), where I₀ is the intensity of a parallel collimated beam and
I(x) is the intensity of the beam after travelling a path of length x in a certain material
with a total linear absorption coefficient μ. For a pure element material, the linear
absorption coefficient μ is related to the atomic total cross section by the relation
μ = σ·ρ/(u·A), where σ is the total cross section, ρ is the density of the material,
u (= 1.661 × 10⁻²⁴ g) is the atomic mass unit and A is the relative atomic mass of
the element. The photoelectric cross section is typically indicated with τ.
For a compound with density ρC the absorption coefficient μC is given by the
contribution of the single elements as follows:

μC/ρC = Σi wi (μi/ρi)

where wi is the weight fraction of element i and μi/ρi its mass attenuation coefficient.
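A minimal numerical sketch of the attenuation and mixture-rule relations above (the mass attenuation coefficient, density and thickness are illustrative placeholders, not tabulated values):

    import math

    def transmitted_fraction(mu_over_rho_cm2_g, density_g_cm3, thickness_cm):
        """Beer-Lambert: I/I0 = exp(-mu*x), with the linear coefficient mu = (mu/rho)*rho."""
        return math.exp(-mu_over_rho_cm2_g * density_g_cm3 * thickness_cm)

    def compound_mu_over_rho(weight_fractions, mu_over_rho_elements):
        """Mixture rule: (mu/rho)_compound = sum_i w_i * (mu/rho)_i."""
        return sum(w * m for w, m in zip(weight_fractions, mu_over_rho_elements))

    # e.g. a 10 um thick layer, rho = 2.3 g/cm3, mu/rho = 60 cm2/g -> ~87% transmission
    print(transmitted_fraction(60.0, 2.3, 10e-4))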
X-Ray elastic scattering (the scattered photon has the same energy as the
primary photon) is mainly due to the interaction with tightly bound electrons and
hence it dominates over inelastic scattering at low photon energies and for high
Z materials. The elastic scattering differential cross section is higher in the
forward direction, with a little component of backscattering. Inelastic X-ray
scattering is due to the Compton effect and involves the emission of an electron;
the scattered photon has an energy given by the difference between the energy of
the primary photon and the kinetic energy of the emitted electron, and it is given
by:

E' = E / [1 + (E/mec²)(1 − cos θ)]

where mec² ≈ 511 keV is the electron rest energy and θ the scattering angle.
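A quick numerical check of the Compton relation above (Mo Kα at about 17.44 keV is taken as an example primary energy):

    import math

    M_E_C2_KEV = 511.0   # electron rest energy [keV]

    def compton_scattered_energy_keV(e_keV, angle_deg):
        """E' = E / (1 + (E/m_e c^2) * (1 - cos(theta)))"""
        return e_keV / (1.0 + (e_keV / M_E_C2_KEV) * (1.0 - math.cos(math.radians(angle_deg))))

    print(compton_scattered_energy_keV(17.44, 90.0))   # ~16.9 keV at 90 degrees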

Figure 3.2.1. a) Graphical representation of the Beer-Lambert attenuation law. b)
Mass attenuation coefficients for gold in the X-Ray region.

The inelastic scattering differential cross section is higher at 180 degrees in the
back-scattering direction and diminishes to 0 in the forward scattering direction.
In a photoelectric absorption process, a photon is absorbed, and an electron is
emitted leaving a vacancy. In the X-Ray region, core electrons are emitted and
the excited atoms (unstable due to the vacancy) are called X-Ray levels. The
discontinuities in the photoelectric absorption coefficient (absorption edges) are
given by the contribution to the absorption of a specific excited state of the atom

given by a vacancy. When the photon energy reaches the binding energy of a
certain shell of electrons, the absorption probability (with emission of a
photoelectron) increases abruptly and then, as the photon energy increases, it
decays approximately as described by the Bragg-Pierce law: τ ∝ Z³/E^(8/3). Binding
energies are characteristic for every element and their values are tabulated and
available from x-ray libraries. X-Ray levels are designated with K, L1, L2, L3,
M1, M2, M3, M4, M5, …, and correspond to the electron configurations 1s⁻¹, 2s⁻¹,
2p₁/₂⁻¹, 2p₃/₂⁻¹, 3s⁻¹, 3p₁/₂⁻¹, 3p₃/₂⁻¹, 3d₃/₂⁻¹, 3d₅/₂⁻¹, …, respectively. The energy of
the X-ray level corresponds to the binding energy of electrons in that shell. X-
Ray levels have an energy spread following a Lorentzian distribution with a
width indicated by Γ.

Figure 3.2.2. a) Elastic and inelastic differential cross section for different
elements and 10 keV incident energy.

X-Ray levels only exist for a very short time (in the femtosecond range) and
then atoms relax to lower energy states; the excited atom has two paths for
deexcitation: transition of an electron from an upper shell, or the Auger effect. In the
first case, a characteristic photon is emitted leaving the atom in a less energetic
state with a vacancy in a lower energy shell; in the second case, an Auger
electron is emitted; the Auger effect is a three-level process with an electron in
the continuum and two vacancies in lower energy shells. The physical parameter
giving the probability of relaxation through the emission of an X-ray photon is
called the fluorescence yield, typically indicated with ω. Figure 3.2.3 reports a plot
of the fluorescence yields of the elements for the K, L1, L2, L3 shells. The
probability of an Auger process is hence 1 minus the fluorescence yield. When,
in an Auger event, the initial vacancy is filled by an electron from a higher subshell
of the same main shell (same principal quantum number), as in an L1-L3M5 event,
this is called a Coster-Kronig transition. This is important since it redistributes
vacancies within a shell, and this affects the relative intensity of X-ray lines.

Figure 3.2.3. Plot of the fluorescence yield of the elements versus the atomic
number.

The energy of the emitted x-ray photon is given by the difference of the two
energy levels involved in the electronic transition. Due to the energy spread of
x-ray levels, x-ray lines have a natural line-width [1] given by the sum of the
initial and final level widths and follow themselves a Lorentzian distribution:

I(E) ∝ (Γtot/2π) / [(E − E₀)² + (Γtot/2)²],  with Γtot = Γinitial + Γfinal

where E₀ is the nominal transition energy.
For one core X-ray level, there are several radiative transitions possible
involving different upper X-Ray levels, each one with a given probability, and
all probabilities adding up to 1 for a given core X-Ray level. This gives rise to so
called X-Ray families: if a K-L3 line is visible on the spectrum, a K-L2 line
must also be present and with a well-defined intensity ratio. Such intensity ratios
are described with a probability p, associated to any transition. All transitions
associated to a certain core level must clearly sum up to 1.

The lines in X-ray emission spectra are often indicated with a nomenclature
developed by M. Siegbahn in the 1920s, based on the relative intensity of X-ray
lines. The International Union of Pure and Applied Chemistry (IUPAC)
recommends instead a nomenclature reporting the initial and final X-ray levels,
e.g. K-L3 [2]. The correspondence between the Siegbahn notation and the
IUPAC notation is reported in Table 3.2.1.
Table 3.2.1. Correspondence between Siegbahn and IUPAC notation for X-Ray
Emission spectra.
Siegbahn  IUPAC   Siegbahn  IUPAC   Siegbahn  IUPAC   Siegbahn  IUPAC
Kα1       K-L3    Lα1       L3-M5   Lγ1       L2-N4   Mα1       M5-N7
Kα2       K-L2    Lα2       L3-M4   Lγ2       L1-N2   Mα2       M5-N6
Kβ1       K-M3    Lβ1       L2-M4   Lγ3       L1-N3   Mβ        M4-N6
Kβ2ᴵ      K-N3    Lβ2       L3-N5   Lγ4'      L1-O2   Mζ        M4,5-N2,3
Kβ2ᴵᴵ     K-N2    Lβ3       L1-M3   Lγ4       L1-O3   Mγ        M3-N5
Kβ3       K-M2    Lβ4       L1-M2   Lγ5       L2-N1

Figure 3.2.4. a) Bismuth L transitions represented with Lorentzians with the


natural linewidth in log scale.

As schematically drawn in Figure 3.2.5, let us consider a parallel monochromatic X-
ray beam of intensity I0 and photon energy E0 impinging on the surface of a
sample at an angle ϕ0, and the fluorescence emitted at an angle ϕ1; let us
consider the intensity of emission related to the j core level and the k upper level
for element i contained with a weight fraction wi in the sample material having a
density ρS; for the intensity of the emission from a layer of thickness dz at a depth z
into a solid angle Ω1 we have (written here with mass attenuation coefficients μ/ρ):

dIijk(z) = I0 (Ω1/4π) wi ρS (τi,j(E0)/ρ) ωij pjk exp{−ρS z [(μ/ρ)S(E0)/sin ϕ0 + (μ/ρ)S(Eijk)/sin ϕ1]} dz/sin ϕ0

where τi,j/ρ is the photoelectric mass absorption coefficient for level j of element i,
ωij its fluorescence yield, pjk the j→k transition probability and (μ/ρ)S the total
mass attenuation coefficient of the sample.
Figure 3.2.5. Schematic representation of a dz-thick layer at depth z for which the
fluorescence is calculated.

If we are analysing a thin layer, the dz in the above equation can be directly
replaced with the layer thickness and the intensity is directly proportional to the
thickness. It is worth noting that thickness and density are directly correlated in
XRF; hence, in the analysis of thin layers, only the product of density and
thickness can be determined, that is, the mass of an element per unit surface (g/cm²).
The above equation, the Sherman equation [3] for an infinitely thin layer, is the
basis for quantitative analysis and X-Ray fluorescence intensity modelling. By
integration over the thickness T the total emitted fluorescence intensity is
obtained.

3.2.2 X-Ray Fluorescence Analysis


X-Ray Fluorescence (XRF) analysis is a technique devoted to qualitative and
quantitative elemental analysis. It is based on x-ray emission spectrometry and
the excitation of the sample with x-rays.
The technique originated from Moseley’s work. He used Bragg’s diffraction to
measure the wavelength of characteristic radiation emitted from the elements
and derived the Moseley’s law showing a relation between the atomic number
and the frequency of the emitted photons (Figure 3.2.6.a), and thus allowing the
identification of the elements. As mentioned above, at the microscopic level, the
emission of characteristic X-rays is associated with electron transitions between
two x-ray levels, that is excited configurations of the atom with a vacancy
(Figure 3.2.6.b).
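A rough numerical illustration of Moseley's relation for Kα lines (a hydrogen-like estimate with a screening constant of 1; the values are approximate, and the tabulated energies are slightly higher):

    RYDBERG_EV = 13.606

    def kalpha_energy_keV(z):
        """Moseley-type estimate: E(K-alpha) ~ (3/4) * Ry * (Z - 1)^2."""
        return 0.75 * RYDBERG_EV * (z - 1) ** 2 / 1000.0

    for name, z in (("Cu", 29), ("Mo", 42)):
        print(name, round(kalpha_energy_keV(z), 2), "keV")   # ~8.0 and ~17.2 keV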

Figure 3.2.6. a) Graphical representation of Moseley's Law for major K and L lines.
b) Microscopic (atomic) representation of the two steps involved in x-ray
fluorescence emission: 1) a primary x-ray photon is absorbed by photoelectric
effect, leaving a vacancy in a core shell; 2) an x-ray photon is emitted due to an
electronic transition filling the vacancy. The energy of the emitted photon is
given by the difference of the energy of the two levels involved.

To have a transition, a core-level vacancy is needed; in XRF the vacancy is


created by photoelectric effect. Other elemental x-ray spectrometric techniques
based on x-ray emission have developed exploiting the generation of vacancies
by different particles. Electron impact is used by Electron Probe Micro-Analysis
(EPMA) and X-Ray Spectroscopy in a Scanning Electron Microscope (SEM) or
a Transmission Electron Microscope (TEM); core shell ionization by ions
bombardment is used in Particle Induced X-Ray Emission (PIXE). XRF has the
advantage of the availability of compact and low-cost X-Ray sources and no
need of ultra-high vacuum parts.
XRF is also not well suited for high resolution imaging because focussing X-
Rays is much more demanding than focussing visible light. Recent
developments in X-Ray optics and the use of synchrotron radiation sources
allow the achievement of focussed X-Ray beams in the tens of nanometre range.
As shown in Figure 3.2.6.a, light elements only have characteristic lines at low
energies, which are easily absorbed by any medium (including the atmosphere, the
x-ray tube exit window and the detector window); the behaviour of the photoelectric
coefficient indicated by the Bragg-Pierce law is very unfavourable for light
elements; the fluorescence yield decreases dramatically for the low-Z elements
(e.g. the C K fluorescence yield is less than 0.003); all these factors contribute to the
poor sensitivity of XRF for light elements.
Qualitative elemental analysis is straightforward. The energy and relative
probability of X-Ray transitions are tabulated and can be consulted to recognise
a transition in the spectrum. As explained in the above paragraph, all transitions
related to a level must be present with intensities related to the relative
transition probabilities. Figure 3.2.7 shows the spectrum of a multielement standard
sample deposited onto a silicon wafer. For each element which was identified, all
transitions are indicated with the same colour and the element symbol is
indicated near the most intense line.

Figure 3.2.7 EDXRF spectrum of a multielement standard deposited onto silicon

Quantitative analysis is carried out either by empirical calibration methods with


reference samples specific for the application or by fundamental parameters
methods where the concentration of the element is derived by comparison and
matching of experimentally determined intensities and intensities calculated with
Sherman equation for a sample model which is recursively refined.
X-Ray Fluorescence analysis is in general a bulk technique because the
information depth of the analysis goes well beyond the near surface region.
Surface sensitivity can however be obtained for flat smooth interfaces as
explained in the following section with a technique called Grazing Incidence X-
Ray Fluorescence Analysis (GI-XRF). A very thorough description of the
theoretical and experimental background of XRF can be found in the Handbook
of X-Ray Spectrometry [4] and the Handbook of Practical X-Ray Fluorescence
Analysis [5].

3.2.3 Total reflection XRF, Grazing Incidence-XRF, X-Ray


Reflectivity
In the X-ray range the refractive index, n, of materials is less than 1 and it is
directly related to the forward scattering factor f = f1 + i f2, which is complex due to
absorption:

n = 1 − (r0 λ²/2π) N (f1 + i f2)

In the above equation, N is the number of atoms per unit volume, r0 is the
classical electron radius, λ the wavelength of the primary radiation.
The index of refraction is typically written as n = 1 − δ − iβ. δ, called the
decrement, is typically of the order of 10⁻⁵, and β, the imaginary component, is
due to absorption and is generally about 2 orders of magnitude smaller.
Let us write Snell's law of refraction for an x-ray beam coming
from the atmosphere and impinging on a solid surface, measuring the
incident angle ϕ0 and the refraction angle ϕ1 from the surface, as depicted in
Figure 3.2.8:

cos ϕ0 = n cos ϕ1

To comply with an index of refraction less than 1, cos ϕ1 must be bigger than
cos ϕ0, which means ϕ1 < ϕ0. Let us call ϕC, the critical angle for total reflection,
the incident angle for which ϕ1 = 0 and cos ϕ1 = 1. Ignoring the absorption
term β (it is 2 orders of magnitude less than δ) we can write cos ϕC = n = 1 − δ, and
developing the cosine in a Taylor series and ignoring all terms beyond the
quadratic one we have

ϕC ≈ √(2δ)
Figure 3.2.8. Graphical representation of Snell’s Law for the X-Ray region,
where the index of refraction is less than 1

In practical terms, what is written above means that if a collimated parallel X-
Ray beam impinges on a flat smooth surface at an angle below ϕC, total
external reflection occurs.
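A minimal sketch of the critical-angle estimate ϕC ≈ √(2δ) for a flat silicon surface probed with Cu Kα radiation (taking f1 ≈ Z, a rough assumption valid far from absorption edges):

    import math

    R0_CM = 2.818e-13    # classical electron radius [cm]

    def critical_angle_deg(wavelength_nm, atoms_per_cm3, f1):
        """phi_c ~ sqrt(2*delta), with delta = r0 * lambda^2 * N * f1 / (2*pi)."""
        lam_cm = wavelength_nm * 1.0e-7
        delta = R0_CM * lam_cm ** 2 * atoms_per_cm3 * f1 / (2.0 * math.pi)
        return math.degrees(math.sqrt(2.0 * delta))

    # silicon, Cu K-alpha: N ~ 5e22 at/cm3, f1 ~ 14 -> ~0.22 degrees
    print(critical_angle_deg(0.154, 5.0e22, 14.0))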
The reflected and refracted intensities may be calculated as in the optical
wavelength range using Fresnel’s formalism. The calculation shows that even
below the critical angle for total reflection an exponentially decreasing electric
field (an evanescent wave) is present below the surface.
In 1971 Y. Yoneda and T. Horiuchi [6] suggested that the phenomenon of total
external reflection could be used for elemental analysis. H. Aiginger and P.
Wobrauschek [7] followed the suggestion, implemented it, and gave birth to
Total Reflection X-Ray Fluorescence Analysis technique (TXRF) [8, 9, 10].
TXRF, due to the total reflection phenomenon, is very sensitive to the near
surface region and applies to both the study of surface contaminations and to the
study of small amounts of samples deposited on flat smooth substrates used as
sample carriers. TXRF finds extensive use in the microelectronic industry for the
quality control of silicon wafer surfaces.
The methodology is regulated by two ISO standards: ISO 17331:2004 Surface
chemical analysis — Chemical methods for the collection of elements from the
surface of silicon-wafer working reference materials and their determination by
total-reflection X-ray fluorescence (TXRF) spectroscopy, ISO 14706:2014
Surface chemical analysis — Determination of surface elemental contamination
on silicon wafers by total-reflection X-ray fluorescence (TXRF) spectroscopy.
TXRF is also applied to the quantification of trace elements in liquids,
suspensions or particulate matter. Recently also for this kind of applications a
standard has been published: ISO/TS 18507:2015 (VAMAS) Surface chemical
analysis -- Use of Total Reflection X-ray Fluorescence spectroscopy in
biological and environmental analysis.

For a multi-layered sample the reflected intensity is affected by reflections of the
deeper layers and their interference. A recursive method for the calculation of
the reflected intensity by a multilayer sample has been provided by Parratt in
1954 [11], giving birth to the X-Ray Reflectivity (XRR) analytical technique. In
his derivation he also derives the intensity of the electric field at each interface.
As shown by de Boer [12], the primary X-Ray field intensity propagating in the
layers can be calculated and hence Sherman equation modified to calculate
fluorescence intensities in layered materials and in the grazing incidence angular
region. These are the fundamentals for GI-XRF. GI-XRF and XRR are closely
related; the two techniques however give different complementary information.
XRR is a scattering technique and it is sensitive to changes in the electron
density distribution in depth (the scattering centres are the electrons), and hence
to interfaces, the layer thickness and the roughness. GI-XRF gives information
on elemental depth distributions and elemental mass coverage (atoms/cm2). This
means that in GI-XRF thickness and density are strongly correlated: a thinner
layer with a greater density cannot be distinguished from a thicker layer with a
lower density.

3.2.4 Instrumentation
The wave-particle duality in the context of the x-rays originates two families of
XRF: Energy Dispersive XRF (ED-XRF) where detectors are employed that
measure the energy of an X-Ray photon and Wavelength Dispersive XRF (WD-
XRF) where photons with a determined wavelength are selected by Bragg
diffraction and are then only counted by the x-ray sensor. Energy dispersive x-
ray detectors typically exploit direct conversion of the photon energy into an
electric signal in a semiconductor. The magnitude of the electric signal is then
converted in X-Ray energy by a spectral calibration with known X-Ray lines.
Wavelength dispersive XRF is mainly used in industrial applications where few
predetermined elements need to be monitored or in cases where a high energy
resolution is needed, for example when dealing with overlapping x-ray lines or
emission spectroscopy studies aimed at the speciation of the element
investigated. Energy dispersive X-Ray detection is typically preferred when the
analysis of unknown samples is involved, or when cost and space are limiting
factors, e.g. for portable instruments.

TXRF and GI-XRF instrumentation typically exploits energy dispersive
acquisition, due to the large solid angle of detection that can be obtained with a
small form factor detector and the geometry of the set-ups. Moreover, the
development of high count rate, good resolution Peltier-cooled energy dispersive
detectors, mainly due to the silicon drift detector (SDD) technology, has boosted
their exploitation.
Dedicated TXRF instrumentation for the quality control of Silicon wafer
surfaces is available on the market, with automatic handling and alerts if high
contamination levels are found. Analytical equipment manufacturers also offer
dedicated benchtop instruments for elemental analysis of liquids by TXRF, with
automatic sample changer designed to work with a variety of sample carrier
substrates (quartz discs, glass slides, 2-inch wafers).
Non-commercial dedicated instrumentation has also been built for laboratory use
[13] and specifically for synchrotron radiation sources [14].
To avoid background due to scattered radiation, TXRF analysis is typically
performed with instruments equipped with a monochromator. Since high energy
resolution in the primary beam is not needed, multilayer mirrors are typically
used as monochromators.
No specifically designed commercial GIXRF instrumentation has been
developed. Figure 3.2.9 represents a coloured sketch of a combined XRD/XRF,
XRR/GIXRF instrument developed at FBK in collaboration with local
manufacturers.

Figure 3.2.9. Coloured sketch of a theta-theta combined XRD/EDXRF, XRR/GI-
XRF instrument developed at Fondazione Bruno Kessler.

3.2.5 Application Cases


Contamination of semiconductor surfaces
A typical application of TXRF is the analysis of silicon wafer surfaces
for the quality control of metal contaminants. Crystalline silicon has an atomic
density of about 5.0×10²² atoms/cm³ and hence a 1 nm thick layer contains
about 5.0×10¹⁵ atoms/cm². For direct TXRF, typical detection limits are in the
10⁹-10¹² at/cm² range, depending on the instrument (excitation energy, power
of the tube, detector size). The area of the analysis (lateral resolution) is typically
defined by the detector collimator, defining the detection solid angle. The
illuminated size in the direction of the exciting beam is typically very large due
to the low angle of incidence of the primary beam. Let us consider a typical
beam with a rectangular cross section with dimensions 8 mm × 100 μm (width ×
height). The 100 μm height projected onto the sample at 0.08 degrees results in
over 70 mm of illuminated sample length. A typical detector with an active area of
50 mm² (~8 mm diameter) limits the detected area. Considering the large size of
silicon wafers used in modern microelectronic fabrication, this is a reasonable
spot size for blank or non-patterned processed wafers [15].
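The two geometric figures quoted above can be reproduced with a short calculation (beam height, incidence angle and atomic density as in the example):

    import math

    def illuminated_length_mm(beam_height_um, incidence_deg):
        """Footprint of the beam along the surface at grazing incidence."""
        return beam_height_um * 1.0e-3 / math.sin(math.radians(incidence_deg))

    print(illuminated_length_mm(100.0, 0.08))   # ~72 mm, i.e. 'over 70 mm'
    print(5.0e22 * 1.0e-7)                      # atoms/cm2 in a 1 nm thick silicon slab, ~5e15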

As an example, Figure 3.2.10 reports the spectrum of a bare silicon wafer
intentionally contaminated with a fingerprint. A 1 μl droplet of a 10 ppm Ga
standard solution (10 ng Ga) was deposited onto the wafer in the middle of the
contaminated region and dried in vacuum. The spectrum was modelled with the
GIMPy software [16] using a layer model for the fingerprint and a residue model
for the Ga. The estimated level of contamination is shown in Table 3.2.2.

Figure 3.2.10. Measured and fitted spectrum of an intentional contamination of a


Silicon wafer by a fingerprint with the addition of a 1 μl droplet of a 10 ppm Ga
standard solution (10 ng Ga).

Table 3.2.2. Contamination estimated for the intentional fingerprint
contamination of a silicon wafer.

Element   Surface concentration [at/cm²]
Al        5.8 × 10¹³
S         1.7 × 10¹³
Cl        2.9 × 10¹³
K         2.0 × 10¹⁴
Ca        3.9 × 10¹³
Cr        2.3 × 10¹²
Mn        1.9 × 10¹¹
Fe        2.3 × 10¹²
Ni        2.3 × 10¹²
Cu        7.7 × 10¹¹
Zn        1.9 × 10¹²
Br        1.3 × 10¹¹
Rb        2.7 × 10¹¹
Sr        2.3 × 10¹⁰
Pb        1.9 × 10¹¹

Thin layer analysis


As explained in the above paragraphs, thin layer analysis can be performed with
XRF; for thin layers the intensity is directly proportional to the layer
thickness and density, but the dependence on the two is not easily
disentangled. Moreover, if the thin layer is deposited onto a thick substrate, the
signal from the substrate might dominate the analysis.
The sensitivity of XRF can be limited to the near surface region by adopting a
grazing incidence geometry. In this case the combination of GIXRF and XRR
enables a much more accurate independent determination of thickness, density
and elemental concentration, as shown by D. Ingerle et al. [17], who
presented a dedicated software for such studies, and by B. Caby et al., who applied
the method to an indium oxide and silver sandwich layer system [18].
In Figure 3.2.11 the measured X-Ray reflectivity curve and the calculated curves
deriving from a 1-layer and a 2-layer model are shown [16]. The sample
nominally consists of a 10 nm thick Al₂O₃ layer deposited onto a silicon
wafer. The sample was modelled including a silicon substrate, a silicon dioxide
layer (native oxide) and then the Al₂O₃ layer. The best fit with a single Al₂O₃
layer was obtained with a 9.95 nm thickness and 3.42 g/cm³ density.
The best fit with two Al₂O₃ layers was obtained with 0.84 nm thickness, 1.23
g/cm³ density for the topmost one and 9.4 nm thickness, 3.49 g/cm³ density for
the deeper one.
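As a quick consistency check of the fitted thickness (a textbook estimate, not part of the GIMPy fit described above), the period of the XRR interference fringes well above the critical angle relates to the layer thickness roughly as t ≈ λ/(2·Δθ):

    import math

    def kiessig_thickness_nm(fringe_period_deg, wavelength_nm=0.154):
        """t ~ lambda / (2 * delta_theta), valid well above the critical angle."""
        return wavelength_nm / (2.0 * math.radians(fringe_period_deg))

    # a fringe period of ~0.44 deg with Cu K-alpha corresponds to a ~10 nm layer
    print(kiessig_thickness_nm(0.44))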

Figure 3.2.11. X-Ray Reflectivity curve
Both XRR and GIXRF are well known to offer multiple models fitting the
experimental curve within the measurement uncertainties. By increasing the number
of layers and the complexity of the model, better fits are typically obtained
(overfitting), but they do not always provide a more accurate description of the real
sample. Knowledge of the sample and of the possible physical phenomena influencing
the sample structure is necessary to restrict the set of possible solutions.

Ultra-shallow arsenic implants for junction scaling


In the technological advances of CMOS technology, the shrinking of the lateral
dimensions of transistors has also required the scaling of the junction depth
to limit short channel effects [19]. The controlled doping of ultra-shallow layers
is challenging for several reasons. The request for shallower junctions is
typically accompanied by the request for an increase of the total fluence to
maintain acceptable resistance values. Low-energy, high-current ion implantation
is challenging for ion selection, transport and focussing. After implantation a
thermal treatment is necessary to activate the dopants (the dopants are active
only if they substitute for silicon in the crystal lattice). At high temperature the
diffusivity of the dopants increases, and annealing schemes must be developed
to keep diffusion under control. When the dopant concentration approaches the
solid solubility, the activation is hindered by clustering of the dopants and
formation of unwanted phases. To consider and keep under control all the above-
mentioned aspects, analytical techniques have also evolved and developed. Secondary
Ion Mass Spectrometry (SIMS) is the most widely used technique in this
application due to its sensitivity and ultimate depth resolution.
Arsenic and phosphorous are the most common n-type dopants in silicon. Due to
its higher mass (which implies less penetration) and lower diffusion issues,
arsenic is the dopant of choice for the shallow implants. Grazing Incidence XRF
has been largely exploited for the study of ion implanted silicon wafer surfaces
and in particular for arsenic doping, due to the high sensitivity for arsenic in
typical laboratory instruments.
In early works, simplified sample models were implemented, where the
reflection and refraction were calculated for a pure silicon surface and the
fluorescence from incorporated trace levels of arsenic were calculated [20]. With
increasing doping levels this approximation was no longer allowed and a layered
structure with optical properties taking into account the arsenic concentration
had to be implemented. The doped sample is then considered as a stack of
different layers and the treatment is the same as what mentioned in the previous
paragraph. In this case also the importance of combining GIXRF and XRR was
highlighted [21].
Figure 3.2.12 reports the results of the analysis of a silicon wafer doped by ion
implantation of arsenic. Experimental SiKα and AsKα intensities vs incident angle
as well as the reflectivity are shown. Figure 3.2.13 reports the depth profiles used to
calculate the fluorescence curves shown in Figure 3.2.12. Calculations and
figures have been performed with the GIMPy software [16].

Figure 3.2.12. Experimental and fitted SiKα (A) and AsKα (B) intensities and
reflectivity. The blue dots represent the experimental intensities. The green, red and
cyan curves represent the calculated intensities assuming respectively the SIMS-
determined depth profiles, a modified SIMS depth profile and a 'beta' distribution
profile. The profiles are presented in Figure 3.2.13.

Figure 3.2.13. Depth profiles used to calculate the GIXRF intensities shown in
Figure 3.2.12.

Conclusions
XRF is a very versatile technique for elemental analysis. Qualitative analysis is
straightforward and very reliable. Precise and reproducible quantitative analysis
requires accurate calibration of the instrument. Depending on the software, the
application and the methodology, quantification is obtained either by empirical
methods and specific reference standards similar to the unknown sample, or by ab
initio calculation of the fluorescence intensities and iterative comparison with the
measured spectra. XRF is typically a bulk technique; thin layers can be
analysed, but if they are deposited on a substrate its influence might be
detrimental to the measurement.
Surface sensitivity can be gained for flat smooth samples by decreasing the
angle of incidence and performing Total Reflection X-Ray Fluorescence analysis
and Grazing Incidence X-Ray Fluorescence analysis. In this case the
combination of GIXRF and XRR also offers much higher accuracy in the
determination of layer thickness and density.
Typically, XRF instrumentation does not offer high lateral resolution
measurement capabilities. However, recent technological advances in high-brilliance
X-Ray sources and optics allow the focussing of X-Ray beams down to tens of
nanometres at synchrotron radiation sources and to the micrometre range for
laboratory sources.

References
[1] R. Jenkins, R. Manne, R. Robin and C. Senemaud (1991), IUPAC—nomenclature system for x-ray spectroscopy. X-Ray Spectrom., 20: 149–155. doi:10.1002/xrs.1300200308
[2] S. Croft, R. D. McElroy Jr., A. Nicholson, and T. Guzzard, On the
Relationship between the Natural Line Width and Lifetime of X-Ray
Transitions, Proceedings of the INMM Conference 2015, download link:
https://www.osti.gov/scitech/servlets/purl/1214016
[3] J. Sherman, “The theoretical derivation of fluorescent X-ray intensities from
mixtures,” Spectrochimica Acta, no. 7, pp. 283–306, 1955.
[4] Handbook of X-ray Spectrometry, 2nd ed., Marcel Dekker, Inc. New York
and Basel, 2002, ISBN 0-8247-0600-5.
[5] Handbook of Practical X-Ray Fluorescence Analysis, Springer Verlag Berlin
Heidelberg, 2006, ISBN 978-3-540-36722-2
[6] Y. Yoneda and T. Horiuchi, Rev. Sci. Instr., 42, 1069 (1971)
[7] H. Aiginger and P. Wobrauschek, Nucl. Instrum. Methods, 114, 157 (1974)
[8] Total-Reflection X-Ray Fluorescence Analysis and Related Methods , John
Wiley & Sons, Inc., 2014, ISBN: 9781118985953
[9] C. Streli, P. Wobrauschek, F. Meirer and G. Pepponi, Synchrotron radiation
induced TXRF, J. Anal. At. Spectrom., 2008, 23, 792-798, DOI:
10.1039/b719508g
[10] F. Meirer, A. Singh, G. Pepponi, C. Streli, T. Homma and P. Pianetta,
Synchrotron Radiation induced Total Reflection X-Ray Fluorescence Analysis,
TrAC Trends in Analytical Chemistry, 29(6), 479-496, 2010
[11] L. G. Parratt, “Surface studies of solids by total reflection of x-rays,” Phys.
Rev., vol. 95, pp. 359–369, Jul 1954.
[12] D. de Boer, “Glancing-incidence x-ray fluorescence of layered materials,”
Phys. Rev. B, vol. 44, pp. 498–511, July 1991.
[13] C. Streli, P. Wobrauschek, G. Pepponi and N. Zoeger, A new total
reflection X-ray fluorescence vacuum chamber with sample changer analysis
using a silicon drift detector for chemical analysis, Spectrochimica Acta Part B:
Atomic Spectroscopy, Volume 59, Issue 8, 2004, Pages 1199-1203

[14] C. Streli et al., A new SR-TXRF vacuum chamber for ultra-trace analysis at
HASYLAB, Beamline L, X-Ray Spectrom. 2005; 34: 451–455, DOI:
10.1002/xrs.861
[15] F. Meirer, C. Streli, G. Pepponi, P. Wobrauschek, M. A. Zaitz, C. Horntrich
and G. Falkenberg, Feasibility study of SR-TXRF-XANES analysis for iron
contaminations on a silicon wafer surface. Surf. Interface Anal., 40, 1571–1576,
(2008) doi:10.1002/sia.2954
[16] F. Brigidi, and G. Pepponi, GIMPy: a software for the simulation of X-ray
fluorescence and reflectivity of layered materials. X-Ray Spectrom., 46, 116–
122 (2017) doi: 10.1002/xrs.2746.
[17] D. Ingerle, G. Pepponi, F. Meirer, P. Wobrauschek, C. Streli, JGIXA — A
software package for the calculation and fitting of grazing incidence X-ray
fluorescence and X-ray reflectivity data for the characterization of nanometer-
layers and ultra-shallow-implants, In Spectrochimica Acta Part B: Atomic
Spectroscopy, Volume 118, 2016, Pages 20-28, ISSN 0584-8547,
https://doi.org/10.1016/j.sab.2016.02.010.
[18] B. Caby, F. Brigidi, D. Ingerle, E. Nolot, G. Pepponi, C. Streli, L.
Lutterotti, A. André, G. Rodriguez, P. Gergaud, M. Morales, and D. Chateigner,
“Study of annealing-induced interdiffusion in In2O3/Ag/In2O3 structures by a
combined x-ray reflectivity and grazing incidence x-ray fluorescence analysis,”
Spectrochimica Acta Part B: Atomic Spectroscopy, vol. 113, pp. 132–137, 2015.
[19] L. D. Yau, Solid-State Electronics, vol. 17, pp. 1059, 1974 (Short Channel
Effects: The Need for Junction Scaling)
[20] G. Pepponi, C. Streli, P. Wobrauschek, N. Zoeger, K. Luening, P. Pianetta,
D. Giubertoni, M. Barozzi, and M. Bersani, Non destructive dose determination
and depth profiling of arsenic ultrashallow junctions with total reflection x-ray
fluorescence analysis compared to dynamic secondary ion mass spectrometry,
Spectrochimica Acta Part B: Atomic Spectroscopy, 59(8):1243–1249, 2004.
[21] D. Ingerle, F. Meirer, G. Pepponi, E. Demenev, D. Giubertoni, P.
Wobrauschek, C. Streli, Combined evaluation of grazing incidence X-ray
fluorescence and X-ray reflectivity data for improved profiling of ultra-shallow
depth distributions, In Spectrochimica Acta Part B: Atomic Spectroscopy,
Volume 99, 2014, Pages 121-128, ISSN 0584-8547,
https://doi.org/10.1016/j.sab.2014.06.019.

3.3 X-RAY PHOTOELECTRON SPECTROSCOPY (XPS)

Lia Emanuela VANZETTI


vanzetti@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
The emission of electrons from a metal surface under ultraviolet photon
irradiation, known as the photoelectric effect, was discovered by Hertz in 1887,
and explained by Einstein in 1905 (Nobel Prize for Physics in 1921)[1].
Subsequently the photon energy was extended into the X-ray regime, leading to the beginning of XPS [2]. After World War II, the technique was revived at
Lehigh University, developing the concept of X-ray Photoelectron Spectroscopy
(XPS) as an analytical tool [3]. The real breakthrough, though, arrived in the
1950s and 60s.
XPS was developed by Kai Siegbahn and his research group at the University of
Uppsala, Sweden. The technique was first known by the acronym ESCA
(Electron Spectroscopy for Chemical Analysis). The advent of commercial
manufacturing of surface analysis equipment in the early 1970s enabled the
placement of equipment in laboratories all over the world. In 1981, Siegbahn
was awarded the Nobel Prize for Physics for his work with XPS.
Surface analysis by XPS involves irradiating a solid in vacuo with
monoenergetic soft x-rays and analysing the emitted electrons by energy. The
spectrum is obtained as a plot of the number of detected electrons per energy
interval versus their kinetic energy. Each element has a unique spectrum. The
spectrum from a mixture of elements is approximately the sum of the peaks of
the individual constituents. Because the mean free path of electrons in solids is
very small, the detected electrons originate from only the top few atomic layers,
making XPS a unique surface-sensitive technique for chemical analysis. It is
possible to change the surface sensitivity tilting the sample with respect to the
entrance of the electron analyser.
Quantitative data can be obtained from peak heights or peak areas, and
identification of chemical states often can be made from exact measurement of
peak positions and separations, as well as from certain spectral features.

Nowadays many XPS systems have the capability to obtain data from small
areas and to acquire maps which show both elemental and chemical state
information.

3.3.1 Principle of the Technique and Instrumentation


Surface analysis by XPS is performed by irradiating a sample with monoenergetic soft X-rays and analysing the energy of the detected electrons. Mg Kα (1253.6 eV), Al Kα, or monochromatic Al Kα (1486.6 eV) X-rays are the photon sources most commonly used in research laboratories. The photon penetration depth in a solid is of the order of 1-10 µm. The photons interact with atoms in the surface region, causing electrons to be emitted by the photoelectric effect. The emitted electrons have measured kinetic energies given by:

KE = hν – BE – φS     (1)

where hν is the photon energy, BE is the binding energy of the atomic orbital from which the electron originates, and φS is the spectrometer work function.
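As a minimal numerical illustration of Equation (1), the lines below convert a measured kinetic energy into a binding energy for an assumed Al Kα photon energy and an assumed spectrometer work function; both numbers are examples, not calibration values for any specific instrument.

# Binding energy from measured kinetic energy: BE = h*nu - KE - phi_S (Equation 1 rearranged).
H_NU_AL_KALPHA = 1486.6   # Al K-alpha photon energy (eV)
PHI_SPECTROMETER = 4.5    # example spectrometer work function (eV), instrument dependent

def binding_energy(kinetic_energy_eV, h_nu=H_NU_AL_KALPHA, phi_s=PHI_SPECTROMETER):
    """Return the binding energy (eV) of the emitting level."""
    return h_nu - kinetic_energy_eV - phi_s

# A photoelectron detected at ~1382.7 eV kinetic energy corresponds to a binding
# energy of about 99.4 eV, i.e. close to the Si 2p region.
print(binding_energy(1382.7))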
The binding energy may be regarded as the energy difference between the initial
and final states after the photoelectron has left the atom. Because there is a
variety of possible final states of the ions from each type of atom, there is a
corresponding variety of kinetic energies of the emitted electrons. Moreover,
there is a different probability or cross-section for each final state. The Fermi level corresponds to zero binding energy (by definition). In energy level diagrams, the line lengths indicate the relative probabilities of the various ionization processes. The p, d and f levels split upon ionization, leading to vacancies in the p1/2, p3/2, d3/2, d5/2, f5/2 and f7/2 orbitals. The spin-orbit splitting ratio is 1:2 for p levels, 2:3 for d levels and 3:4 for f levels.
Because each element has a unique set of binding energies, XPS can be used to
identify and determine the concentration of the elements on the surface.
Variations in the elemental binding energies (the chemical shifts) arise from
differences in the chemical potential and polarizability of compounds. These
chemical shifts can be used to identify the chemical state of the materials being
analysed.
Because the mean free path of electrons in solids is very small, the detected
electrons originate from only the top few atomic layers. In Figure 3.3.1 the dependence of the escape depth λ on the kinetic energy is shown for different materials and compounds.
Figure 3.3.1. Escape depth (nm) as a function of the electron kinetic energy (eV) for different materials and compounds [5].
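Since roughly 95% of the detected signal originates from within about three times the escape depth, the information depth, and how it shrinks when the sample is tilted, can be estimated with the short sketch below. The escape depth value used is only an example; in practice it must be read from curves such as those in Figure 3.3.1.

# Approximate XPS information depth: d ~ 3 * lambda * sin(theta),
# where theta is the electron take-off angle measured from the surface.
import math

LAMBDA_EXAMPLE = 2.5  # nm, example escape depth taken from a curve like Figure 3.3.1

def information_depth_nm(escape_depth_nm, takeoff_angle_deg):
    return 3.0 * escape_depth_nm * math.sin(math.radians(takeoff_angle_deg))

for angle in (90, 45, 15):   # tilting the sample reduces the sampled depth
    print(f"take-off angle {angle:2d} deg -> ~{information_depth_nm(LAMBDA_EXAMPLE, angle):.1f} nm probed")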

To illustrate the main physical processes that occur during photoemission, a typical XPS spectrum is shown in Figure 3.3.2. In the bottom part of the Figure, the diagrams of the occurring processes are shown and related to the top part of the Figure.

Figure 3.3.2. Typical photoemission spectrum (top) with the occurring processes
(bottom).
Several very narrow and intense peaks dominate the spectrum. They are generated by the direct emission of core level electrons. Smaller features include electrons emitted from the valence band and secondary electrons that constitute the signal background.
In addition to the photoelectrons emitted in the photoelectric process, Auger electrons may be emitted because of the relaxation of the excited ions remaining after photoemission. This Auger electron emission occurs roughly 10⁻¹⁴ seconds after the photoelectric event. In the Auger process (Figure 3.3.2, bottom (c)), an outer electron falls into the inner orbital vacancy, and a second electron is simultaneously emitted, carrying off the excess energy. The Auger electron possesses a kinetic energy equal to the difference between the energy of the initial ion and the doubly charged final ion, and is independent of the mode of the original ionization. The atom excited by photoionization relaxes by emission of either a photon (X-ray fluorescence) or an Auger electron; in the soft X-ray regime the second relaxation channel is favoured. The sum of the kinetic energies of the emitted electrons (photons) cannot exceed the energy of the ionizing photons.
Probabilities of electron interaction with matter far exceed those of the photons,
so while the path length of the photons is of the order of micrometres, that of
electrons is of the order of tens of angstroms. Thus, while ionization occurs to a
depth of a few micrometres, only those electrons that originate within tens of
angstroms below the solid surface can leave the surface without energy loss.
These electrons, which leave the surface without energy loss, produce the peaks
in the spectra and are the most useful. The electrons that undergo inelastic loss
processes before emerging form the background.
The electrons leaving the sample are detected by an electron spectrometer,
according to their kinetic energy. The analyser is usually operated with an
energy window, referred to as the pass energy, accepting only those electrons
having an energy within the range of this window. To maintain a constant
energy resolution, the pass energy is fixed. Incoming electrons are adjusted to
the pass energy before entering the energy analyser. Scanning for different
energies is accomplished by applying a variable electrostatic field before the
analyser. This retardation voltage may be varied from zero up to and beyond the
photon energy. Electrons are detected as discrete events, and the number of
electrons for a given detection time and energy is stored and displayed.

The XPS technique operates under ultra-high vacuum (UHV), to minimize possible collisions of the emitted electrons with gases in the chamber. The basic components of the apparatus are the X-ray source, the electron analyser and the acquisition system. In addition, an introduction chamber to insert samples into the analysis chamber is needed.
A schematic of an XPS analysis chamber is shown in Figure 3.3.3. In this example a monochromator is included. The ideal choice for the Al Kα X-ray source is a quartz monochromator.

Figure 3.3.3. A schematic diagram of an XPS system

The instrument needs to be calibrated regularly. The best way to check the calibration is to record suitable lines from known, conducting specimens. Typically, the Au 4f, Ag 3d and Cu 2p lines are used to calibrate the binding energy scale. Each instrument has its own routine to optimize the energy resolution.
When measuring an unknown sample, a broad scan survey spectrum should be obtained to identify the elements present on the surface. Once the elemental composition has been determined, narrower scans of selected peaks can be used for a more detailed picture of the chemical composition.

In general, interpretation of the XPS spectrum is most readily accomplished first
by identifying the lines that are almost always present (specifically those of C
and O), then identifying major lines and associated weaker lines, and lastly by
identifying the remaining weaker lines.

The identification of chemical states primarily depends on the accurate determination of line energies. The energy scale of the instrument must be precisely calibrated; a line with a narrow sweep range must be recorded with good statistics, and an accurate correction must be made for static charge if the sample is an insulator.
Static charging occurs mainly with insulating samples, and a charge neutralizer should be used. Both the charge and the charge correction might shift the peak position. Four common methods for determining the correct position are:
a) C 1s of adventitious carbon;
b) internal standard;
c) very thin insulating layer;
d) covering the sample with a thin layer of a known substance.
Charge correction is an art. For many XPS investigations, it is important to determine the relative concentrations of the various constituents. Methods have been developed for quantifying the XPS measurement utilizing peak area sensitivity factors.
A general expression for determining the atom fraction of any constituent in a sample, Cx, can be written as:

Cx = (Ix/Sx) / (Σi Ii/Si)     (2)

where Ix is the measured peak area of element x and Sx is its atomic sensitivity factor. Usually atomic sensitivity factors are specific for each instrument.
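A minimal sketch of Equation (2) in Python is given below; the peak areas and sensitivity factors used are placeholders for a SiO2-like surface and, as noted above, real sensitivity factors are instrument specific.

# Atomic fractions from XPS peak areas and relative sensitivity factors (Equation 2).
def atomic_fractions(peak_areas, sensitivity_factors):
    """peak_areas and sensitivity_factors are dicts keyed by element/line."""
    normalized = {el: peak_areas[el] / sensitivity_factors[el] for el in peak_areas}
    total = sum(normalized.values())
    return {el: value / total for el, value in normalized.items()}

# Example with invented areas and illustrative sensitivity factors for Si 2p and O 1s.
areas = {"Si 2p": 1500.0, "O 1s": 8200.0}
rsf = {"Si 2p": 0.283, "O 1s": 0.711}
print(atomic_fractions(areas, rsf))   # expected to be close to the 1:2 Si:O ratio of SiO2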


An example of the application of Equation (2) to the analysis of a sample of
known composition, silicon dioxide, is shown in Figure 3.3.4. The experimental
atomic concentration is extremely close to the theoretical value.

Figure 3.3.4. An example of the application of Equation (2) to the analysis of a
sample of known composition, silicon dioxide

The use of atomic sensitivity factors in the manner described above will
normally provide quantitative results (within 10-20% uncertainty).

3.3.2 Application Cases


Spectrum Analysis and Element Identification
The present method allows the identification of the elemental composition of the
surface of a material, using XPS. Because the technique collects information
from the first 10 nm below the surface, the results will be related to that range of
thickness.
As we are well aware, each electron emitted from an initially unknown sample has a kinetic energy characteristic both of the element from which it was emitted and of its chemical environment.
By collecting a wide energy scan (typically 0-1200 eV in binding energy), it is possible to visualize peaks coming from all the elements present in the sample.
In Figure 3.3.5 a survey spectrum collected with a laboratory XPS instrument equipped with a monochromatic Al Kα source is shown. Most manufacturers of instruments provide software to identify peaks in a wide spectrum, but data books are available as well. Often the identification of all the elements comes from a combination of software, data books and operator skill.
In this case, zinc, copper, tin, oxygen, carbon and sulphur have been detected and the sample identified as kesterite.

Figure 3.3.5. Survey spectrum

SiO2 Thickness Measurement


The measurement of the thickness of a silicon oxide layer grown on the surface
of silicon wafers has been conducted in the past by many different methods.
These generally apply to oxide layers thicker than 20 nm. It is often important to
measure thicknesses in the range below 10 nm, and this can be done using X-ray
photoelectron spectroscopy [6]. Problems arise in measuring film thicknesses in
this thickness range since, for a layer to bond well to the substrate, it must form
strong inter-atomic bonds at the interface so that a monolayer or more of layer
and substrate interfacial material exists there. This material would not
necessarily be a thermodynamically stable bulk material. Additionally, if the
layer is reactive, its outer surface might have reacted with the environment and
so be changed between fabrication and measurement. For the particular case of
silicon dioxide on silicon, at the interface there is approximately a monolayer of
sub-oxides and, at the surface, adsorbed materials containing carbon, oxygen and
probably hydrogen atoms. These effects lead to offsets for the thicknesses deduced from many methods that, whilst reliably measuring changes in thickness between one specimen and another, have difficulty in defining an absolute thickness.
All these problems notwithstanding, we have devised a method to measure the
thickness of a thin layer of silicon oxide on silicon in the range 1-10 nm. The
spectrum of the Si 2p core level in a SiO2/Si sample has different components
relative to the substrate and to the oxides, as is clearly visible in Figure 3.3.6.

Figure 3.3.6. XPS spectrum of the Si 2p core line (intensity in cps vs binding energy in eV). The different components (1-6) are shown.
The signals corresponding to peaks 1 and 2 are the two components, 2p3/2 and 2p1/2, of bulk silicon. Peak 6 corresponds to the signal coming from stoichiometric SiO2, while peaks 3 to 5 correspond to the non-stoichiometric oxides Si2O, SiO and Si2O3.
If the contribution of peaks 3-5 can be considered negligible, we can use the following simplified equation to calculate the oxide thickness [6]:

dSiO2 = LSiO2 · sinθ · ln[1 + (ISiO2/ISi)/R]     (3)


where:
LSiO2   Attenuation length of electrons in silicon dioxide. From the literature, LSiO2 = 3.448 nm [7].
θ       Emission angle between the analyser axis and the sample surface.
ISiO2   Area of the peak relative to the oxide.
ISi     Area of the peaks (2p3/2 and 2p1/2) relative to the elemental silicon (substrate).
R       Constant, independent of the silicon dioxide thickness, experimentally determined through the formula R = I∞/I0.
I∞      Area of the Si 2p core level in a thick silicon dioxide sample.
I0      Area of the Si 2p core level in a clean silicon sample.
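A worked numerical example of Equation (3) is sketched below. The attenuation length is the literature value quoted above, while the peak areas and the value of R are invented numbers used only to show how the thickness would be obtained from fitted Si 2p components.

# Oxide thickness from the Si 2p oxide/substrate intensity ratio (Equation 3).
import math

L_SIO2 = 3.448   # nm, attenuation length of Si 2p electrons in SiO2 [7]
R = 0.9          # example I_infinity / I_0 ratio, to be measured on thick oxide and clean Si

def oxide_thickness_nm(i_oxide, i_substrate, takeoff_angle_deg, L=L_SIO2, r=R):
    theta = math.radians(takeoff_angle_deg)
    return L * math.sin(theta) * math.log(1.0 + (i_oxide / i_substrate) / r)

# Hypothetical fitted areas for the oxide and substrate components of Si 2p:
print(f"{oxide_thickness_nm(i_oxide=1200.0, i_substrate=2500.0, takeoff_angle_deg=90):.2f} nm")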

In Figure 3.3.7 the Si 2p core level of a SiO2/Si sample of unknown thickness is shown. The core level was fitted with three peaks.

Figure 3.3.7. Si 2p core level of a SiO2/Si sample of unknown thickness (intensity in c/s vs binding energy in eV), showing the experimental data and the fitted Si 2p3/2, Si 2p1/2 and Si 2p oxide components.

Conclusions
Although XPS is not a new technique, it is still in continuous development. XPS has unique features compared to other spectroscopies. It is surface sensitive and can provide information on surface layers or thin film structures. It has a variety of applications, ranging from polymers to catalysts to semiconductors, from corrosion to adhesion. From wide scan and core level spectroscopy, it is possible to perform qualitative and quantitative analysis. The chemical shifts provide information on the chemistry of the surface.

References
[1] A. Einstein, Annalen der Physik 17 (1905) 132
[2] P. D. Innes, Proc. Roy. Soc., Ser. A 79 (1907) 442
[3] R. G. Steinhardt and E. J. Serfass, Anal. Chem. 23 (1951) 1585
[4] K. Siegbahn, C. Nordling, A. Fahlman, R. Nordberg, K. Hamrin, J. Hedman, G. Johansson, T. Bergmark, S.-E. Karlsson, I. Lindgren and B. Lindberg, ESCA: Atomic, Molecular and Solid State Structure by Means of Electron Spectroscopy. Almqvist and Wiksells, Uppsala, Sweden (1967)
[5] L. Braicovich, M. G. Cattania, M. Tescari, Il Vuoto XI (1981) 1
[6] F. J. Himpsel, F. R. McFeely, A. Taleb-Ibrahimi, J. A. Yarmoff, G. Hollinger, Phys. Rev. B 38 (1988) 6084
[7] M. P. Seah and S. J. Spencer, Surf. Interface Anal. 33 (2002) 640

3.4 RAMAN SPECTROSCOPY
Rocco CARCIONE
carcione@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
The Raman technique is a vibrational spectroscopy that reveals the vibrational modes of materials. The Raman phenomenon is based on the light scattering process: the sample is irradiated by an intense laser beam in the UV-visible region (with frequency ν0) and the scattered light is observed.
If monochromatic radiation, or radiation of a very narrow frequency band, is used, the scattered light consists of two types. One is elastic Rayleigh scattering, which is strong and has the same frequency as the incident beam (ν0). The other is inelastic Raman scattering. It is weaker than the incident beam and has frequencies of (ν0 ± νm), where νm is the vibrational frequency. In Raman spectroscopy νm is measured as a shift from the incident frequency (ν0) [1]. For solid materials, the Raman effect arises from the interaction between photons and phonons. The photons are discrete "packets" of energy that represent light as a "particle" phenomenon, while the phonons are the vibrational modes of crystalline lattices. In a solid material, atoms/molecules/ions (depending on the solid type) are strongly coupled to one another, which implies that if one of them starts to oscillate at a certain frequency, the others will also oscillate at the same frequency. This "domino effect" propagates through the whole material, and this collective oscillation is called a phonon. The collective oscillation depends on the type of interaction between the elements that constitute the material, and thus the collected signal gives information about the type of atomic/molecular/ionic interaction. The Raman effect can be explained with both a classical wave and a quantum particle interpretation.

3.4.1. Classical Wave Interpretation


For simplicity, the sample will be considered as a diatomic molecule. From a classical point of view, the incident light is an electromagnetic radiation (consisting of an electric and a magnetic field) and the molecule can be considered as a dipole. So the situation to be analyzed is an electric dipole placed in an electric field E.

Figure 3.4.1. Molecule before (a) and after (b) the placement in an electric field

Supposing a constant and uniform electric field, the molecule will polarize, as depicted in Figure 3.4.1b, and an induced dipole moment µind is generated in the molecule:

µind = α · E     (1)

where E is the electric field and α is the molecule's polarizability. If the molecule has a permanent dipole moment, µind will be added to it.
Assuming that the molecule interacts with an oscillating electromagnetic radiation, it is necessary to consider an oscillating electric field. So the induced dipole moment will be given by:

µind = α · E0 · cos(ωt)     (2)

where ω = 2πν is the frequency of the oscillating electromagnetic radiation. In this way the molecule emits radiation at the same frequency as the incident light (elastic scattering). The intensity of the elastically diffused light differs from that of the incident radiation, but the frequency is the same. The molecule is classically seen as an oscillating dipole that backscatters the light.
It is important to consider that the backscattered radiation can also have frequencies different from the incident beam (ν0 ± νm), and this is ascribable to the molecule's vibrations. A molecule vibrates even at 0 K and, because of the vibration, the relative positions of the atoms change. Therefore, the polarizability of a polarized molecule depends on the molecule's vibrations. Taking x as the displacement from the equilibrium distance between the two atoms (x0), the change of the polarizability can be developed in a Taylor series in x, and α can be written as:

α = α0 + (∂α/∂x)0 · x + ⋯     (3)

where α0 is the polarizability at the equilibrium distance x0.

Assuming that the molecule vibrates with frequency ω0, the displacement can be written as:

x = xm · cos(ω0t)

where xm is the vibrational amplitude, and µind will oscillate with two components: one due to the electric field of the incident radiation and the other due to the molecule's vibrations:

µind = α0 · E0 · cos(ωt) + (1/2) · (∂α/∂x)0 · xm · E0 · [cos((ω − ω0)t) + cos((ω + ω0)t)]     (4)

The first component corresponds to the elastic scattering (Rayleigh scattering) and the second to the inelastic scattering. Thus, the frequencies with which the dipole scatters the radiation are:
ω : Rayleigh frequency
ω – ω0 : Raman Stokes frequency
ω + ω0 : Raman anti-Stokes frequency.
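To make the three frequencies concrete, the short sketch below converts an assumed vibrational mode (expressed as a Raman shift in cm⁻¹) into the Stokes and anti-Stokes wavelengths expected for a given laser line; the laser wavelength and the shift are example values.

# Rayleigh, Stokes and anti-Stokes wavelengths for a given laser line and Raman shift.
LASER_NM = 532.0          # example excitation wavelength (nm)
RAMAN_SHIFT_CM1 = 1332.0  # example vibrational mode (cm^-1), e.g. the diamond line

def scattered_wavelength_nm(laser_nm, shift_cm1, stokes=True):
    nu_laser_cm1 = 1.0e7 / laser_nm                 # laser wavenumber (cm^-1)
    nu_scattered = nu_laser_cm1 - shift_cm1 if stokes else nu_laser_cm1 + shift_cm1
    return 1.0e7 / nu_scattered

print(f"Rayleigh:    {LASER_NM:.1f} nm")
print(f"Stokes:      {scattered_wavelength_nm(LASER_NM, RAMAN_SHIFT_CM1, True):.1f} nm")
print(f"anti-Stokes: {scattered_wavelength_nm(LASER_NM, RAMAN_SHIFT_CM1, False):.1f} nm")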
Since every normal vibration mode has its own frequency, with Raman
spectroscopy all normal vibrational modes of a molecule can be revealed. As a
result, a Raman spectrum can be traced back to the molecule. On the other hand,
in the Raman spectrum of a solid material the phonon modes can be
distinguished. Therefore, a material can be identified from the spectral features
and specifically the connection between the atoms can be determined, thus
allowing for a structural investigation.
For the Raman effect there are two selection rules:
1. A normal vibration mode is Raman active if the vibration implies a polarizability variation, i.e.:

(∂α/∂x)0 ≠ 0

2. In the ideal harmonic oscillator approximation, only vibrational transitions where the quantum number v varies by ±1 are allowed: Δv = ±1.

3.4.2 Quantum Particle Interpretation


From a quantum point of view, the molecule interacts with electromagnetic
radiation represented by photons, often described as energy packets (E=hν).
Photons are both particles and waves, and a wave must always have a frequency
associated with it. So the energy of a photon is a discrete quantity because it is
determined by its own frequency. The interaction of light with matter gives two
phenomena: absorption and scattering.
The basic process of the light-matter interaction can be described by means of
quantum theory: the electron of an atom, a molecule, or an atomic lattice can
absorb a photon and use its energy to jump into an energetically higher
electronic state (absorption). Then, the electron falls down into the ground state,
sending out light in any direction (scattering). When the light meets matter, it
transfers energy to the atoms that deform the electronic clouds of the system.
The system initially is in a vibrational state εi and it is brought to a higher level
of energy, but this is a “virtual level” (εv) that does not correspond to a real
energy level. In order to promote the leap from the fundamental electronic state
to an excited state, precise energy is needed. If the incident light does not have
that exact energy, the energetic jump is not allowed.
Since the energy difference between vibrational states is several orders of magnitude smaller than the energy difference between electronic states, each electronic state is composed of vibrational energy levels [2], as shown in the Jablonski diagram of Figure 3.4.2b.

Figure 3.4.2. (a) Schematic representation of Rayleigh and Raman scattering from a dipole; (b) Jablonski diagram.

Excited states are therefore unstable without a constant energy input to maintain the higher energy level. So the system quickly relaxes back down to a vibrational energy state (εf), emitting energy equal to the difference between the two levels (hν'). Specifically, if the system relaxes back down to the starting vibrational state (εf = εi), the scattered light has the same energy as the incident light (hν' = hν, elastic scattering), while if the system relaxes back to another vibrational state (εf ≠ εi), the scattered light has a different energy from the incident light (hν' = hν + ΔE, inelastic scattering).

So depending on the starting and final vibrational state, the light-matter interaction could generate:

Elastic scattering → Rayleigh scattering → εf = εi → hν' = hν

Inelastic scattering → Raman scattering → εf ≠ εi → hν' = hν + ΔE

It is necessary to consider that the final vibrational state may have higher or lower energy than the starting state. In the first case (εf > εi), the system gains energy (ΔE < 0, and so ν' < ν) and the emitted radiation is called "Stokes radiation", while in the second case (εf < εi), the system loses energy (ΔE > 0, and so ν' > ν) and the emitted radiation is called "anti-Stokes radiation". The Stokes and anti-Stokes lines are symmetric but, since the lowest vibrational levels are more populated, at room temperature the Stokes lines are more intense, according to the Boltzmann distribution:

Ianti-Stokes / IStokes ∝ [(ν0 + νm)/(ν0 − νm)]⁴ · exp(−hνm/kT)     (5)

where ν0 is the frequency of the incident radiation, νm the vibrational frequency and T the temperature expressed in Kelvin [1]. By further investigating the quantum interpretation of the Raman effect, it can be shown that the power of the scattered light, Ps, is equal to the product of the intensity of the incident photons, I0, and a value known as the Raman cross-section, σR:

Ps = σR · I0     (6)
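As a numerical illustration of Equation (5), the sketch below estimates the anti-Stokes/Stokes intensity ratio for an assumed 520 cm⁻¹ mode excited at 532 nm at two temperatures; the mode, laser line and temperatures are example values.

# Anti-Stokes / Stokes intensity ratio from the Boltzmann population of vibrational levels.
import math

H = 6.626e-34      # Planck constant (J s)
C = 2.998e10       # speed of light (cm/s)
KB = 1.381e-23     # Boltzmann constant (J/K)

def anti_stokes_over_stokes(laser_wavenumber_cm1, shift_cm1, temperature_K):
    ratio4 = ((laser_wavenumber_cm1 + shift_cm1) / (laser_wavenumber_cm1 - shift_cm1)) ** 4
    boltzmann = math.exp(-H * C * shift_cm1 / (KB * temperature_K))
    return ratio4 * boltzmann

NU0 = 1.0e7 / 532.0   # 532 nm laser expressed in cm^-1
for T in (300.0, 600.0):
    print(f"T = {T:.0f} K, 520 cm^-1 mode: ratio = {anti_stokes_over_stokes(NU0, 520.0, T):.3f}")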
3.4.3 Instrumentation
The light source of a Raman spectrometer is in general a laser with a specific
wavelength. The laser’s wavelength can range from the Ultraviolet to the visible
and near-Infrared range depending on the application. The emitted light is
directed to a monochromator and a CCD or photomultiplier is used as detector.
The Raman instrumentation can be coupled with an optical microscope and in
this way the laser spot is focused on a very small area of the sample (<1µm in
diameter). The signal is thus collected with an optical fiber and directed to the monochromator. The best condition for signal acquisition is backscattering, i.e. the emitted radiation is collected in the same direction as the incident light.

3.4.4 Applications
Raman spectroscopy was first applied to the characterization of materials for electronics. In fact, Raman analysis of semiconductors allows the determination of crystallographic orientations, the concentration of impurities, the damage induced by doping or ion bombardment, and the strain (or lattice constant) in heteroepitaxial films and heterostructures [3]. Raman spectroscopy is in general also a powerful tool to study the defects induced by synthesis processes and post-synthesis treatments and the interfacial stress produced by substrate-film mismatch.
Although Raman spectroscopy was born as a research technique, nowadays it has become an analytical technique with applications in various fields, such as industry, biomedicine, cultural heritage, gemmology and minerals.
In the biomedical field, Raman spectroscopy has become a powerful diagnostic
tool since Raman spectra allow assessment of the overall molecular constitution
of biological samples, based on specific signals from proteins, nucleic acids,
lipids, carbohydrates, and inorganic crystals. Measurements are non-invasive
and do not require sample processing, making Raman spectroscopy a reliable
and robust method with numerous applications in biomedicine. Moreover,
Raman spectroscopy allows the highly sensitive discrimination of bacteria, gives
information on continuous metabolic processes and Raman spectra are specific
for each cell type thus providing additional information on cell viability,
differentiation status, and tumorigenicity. In tissues, Raman spectroscopy can
detect major extracellular matrix components and their secondary structures [4].

Since it is possible to perform measurements without sampling the masterpiece, Raman spectroscopy has been established as a reliable tool for the noninvasive analysis of cultural-heritage objects. For the same reasons the Raman technique is also used in the field of gemmology for the identification of precious gems. The applications of Raman spectroscopy to cultural heritage include: pigment identification, characterization of new restoration products, study of the conservation state of substrates, and materials analyses (precious stones, mosaic tesserae, etc.).

3.4.5 Example of Raman Spectra Analysis


Raman spectral profiles show the intensity of the scattered light (in arbitrary units) versus the Raman shift (in cm⁻¹). The Raman shift is the difference between the frequency of the incident and the scattered light (Δν). Because the Raman spectra present the Stokes lines, the scattered frequency is lower than the incident one, but the Raman shift is conventionally reported as positive values. Since the Raman shift corresponds to the difference in energy between the vibrational levels of the system, Raman spectra are in theory independent of the excitation source energy.
However, for practical purposes, it should be considered that the choice of laser excitation has an influence on the Raman spectra. Each source has a different energy and, depending on the source used, the system could channel the energy into unwanted luminescence phenomena.
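The conversion between the measured wavelength and the Raman shift described above can be sketched in a few lines; the wavelengths used are example values.

# Raman shift (cm^-1) from excitation and scattered wavelengths (nm).
def raman_shift_cm1(excitation_nm, scattered_nm):
    return 1.0e7 / excitation_nm - 1.0e7 / scattered_nm

# Example: a band detected at 572.6 nm with 532 nm excitation lies near 1333 cm^-1.
print(f"{raman_shift_cm1(532.0, 572.6):.0f} cm^-1")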
The analysis of Raman spectra concerns:
● Peak assignment: each material has specific Raman signals that can be attributed to its phonon modes. However, when a material is synthesized, it may contain impurities or multiple phases, and the spectrum will also show further Raman signals ascribable to them. In this way, peak identification allows a qualitative analysis of the materials.
● Peak position: in pure and crystalline materials each Raman signal is located at a precise position. Applied stress and strain lead to a shift of the peak position with respect to that of the regular crystal. This shift allows the determination of the applied stress and the strain induced by defects.
● Peak intensity (or area): the intensity of the signal depends on the structure and the amount of the material. Therefore it allows quantitative analysis.

● Full width at half maximum (FWHM): crystalline materials exhibit very narrow peaks. Loss of crystallinity leads to a broadening of the peak. If the same system is analyzed in micro- or nanocrystalline form, it is observed that the peak broadens and its position also changes.
The analysis of these parameters allows a qualitative-quantitative analysis, since the peak position and FWHM are connected to the structural quality, while the intensity of the signal is related to the amount of material.

3.4.6 Case study


Raman spectroscopy is a very suitable technique to identify carbon materials, which exist in various allotropic forms. In the figure below, Raman spectra of monocrystalline diamond, highly oriented pyrolytic graphite (HOPG), a CVD-diamond film and nanocrystalline graphite are shown. The spectra of the last two materials exhibit more than one peak, because of defects and/or the presence of multiple phases. The CVD-diamond film and disordered graphite spectra are reported after a fitting operation that deconvolutes the spectrum in order to distinguish the different bands. The fitting operation allows the position, Full Width at Half Maximum (FWHM) and intensity of the related signals to be defined.

Figure 3.4.3 Raman spectra of: (a) single crystal diamond; (b) HOPG.
Deconvoluted Raman spectra of: (c) CVD-diamond film; (d) nanocrystalline
graphite.

Figures 3.4.3 (a) and (b) show the difference between the diamond and HOPG Raman signals. Although they are both carbon-based materials, the organization of the atoms within the lattice is different, since the carbon hybridization is sp3 in diamond and sp2 in graphite. This implies that the phonon modes are different and therefore the position of the Raman signals is different.
CVD-diamond is synthetic diamond produced by a chemical vapor deposition process. Figure 3.4.3 (c) shows a typical Raman spectrum of a polycrystalline CVD diamond film. The diamond line is broader and slightly shifted to higher frequencies with respect to that of monocrystalline diamond. This is due to the stresses within the diamond lattice. Furthermore, the spectrum contains signals ascribable to other carbon features, absent in the spectrum of the monocrystalline diamond.
The integrated signal intensities can then be used to determine the percentage of diamond with respect to the non-diamond carbon content present in the film and to obtain an indication of the film quality, which can be evaluated by the quality factor parameter (Qfactor), according to the following equation given by Sails et al. [5]:

Qfactor = 100 · IDIA / (IDIA + CS · IG)     (7)

where IDIA represents the integrated intensity of the first-order diamond line and IG represents the integrated intensity of the graphite G band. CS is the relative Raman cross-section of diamond with respect to graphite, equal to 1/233; this means that the contribution of non-diamond carbon to the Raman spectrum of a diamond film appears about 233 times greater than that of the diamond.
Like that of diamond, the Raman spectrum of graphite changes in the transition from HOPG (a highly crystalline material) to the nanocrystalline form. The Raman spectrum of disordered graphite (Figure 3.4.3d) is characterized by three main features: the D, G and D' bands. The D and D' bands are due to the defects and impurities present within the lattice. The G band arises from the stretching of the C-C bond in graphitic materials. The ratio between the intensities of the disorder-induced D band and the first-order graphite G band (ID/IG) is related to the degree of structural disorder with respect to the graphite structure. The analysis of the band parameters is therefore useful to determine the carbon "crystallite" diameter La, because there is a proportional relationship between ID/IG (at fixed excitation laser energy) and the inverse of La, as determined from various disordered graphitic materials. The best-known formula is that of Tuinstra and Koenig [6]:

ID/IG = 4.4/La     (8)
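A small sketch combining Equations (7) and (8) is given below; the integrated intensities are invented numbers used only to show how the quality factor and the crystallite size La would be computed from fitted band areas.

# Diamond quality factor (Eq. 7) and graphite crystallite size via Tuinstra-Koenig (Eq. 8).
CS = 1.0 / 233.0   # relative Raman cross-section of diamond with respect to graphite

def q_factor(i_diamond, i_graphite):
    """Percentage quality factor of a CVD diamond film from integrated band areas."""
    return 100.0 * i_diamond / (i_diamond + CS * i_graphite)

def crystallite_size_nm(i_d, i_g):
    """Graphite crystallite diameter La (nm) from the D/G intensity ratio."""
    return 4.4 / (i_d / i_g)

# Invented integrated intensities, for illustration only:
print(f"Q factor ~ {q_factor(i_diamond=5000.0, i_graphite=20000.0):.1f} %")
print(f"La ~ {crystallite_size_nm(i_d=1200.0, i_g=1000.0):.1f} nm")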


Conclusions
Raman spectroscopy is a very sensitive technique to analyze and identify compounds, because each scattering species gives its own characteristic vibrational Raman spectrum, which can be used for its qualitative identification. Its undoubted advantages are non-destructiveness, contactless measurement, rapidity (the measurements require only a few minutes), sensitivity (a few nanograms of sample are needed for analysis), high spatial resolution and no need for sample preparation. All these features make Raman spectroscopy an attractive, convenient and effective technique. Thus Raman analysis gives structural information about:
● Nature, localization and type of interaction, allowing the determination of the material lattice (for example cubic or monoclinic SiO2, etc.);
● Orientation and type of bonds between atoms. In this respect Raman analysis is the most useful technique to study allotropes (for example C allotropes, such as diamond, graphite, nanotubes, etc.);
● Stress and periodicity. Depending on the revealed signal, stresses within the material and its crystallinity can be studied.
A problem that may occur in a Raman analysis is that luminescence phenomena have a greater cross-section than the Raman effect. Fluorescence is a deactivation process of the lowest excited state and corresponds to the relaxation of the molecule from the singlet excited state to the singlet ground state with emission of light. The energy of the emitted light depends on the energy gap between the ground state and the singlet excited state. Because of the vibrational relaxation, the fluorescence energy is always less than the absorption energy. Thus the emitted light is observed at longer wavelengths than the excitation. Since both fluorescence and Stokes line signals are observed at lower energy than that of the incident radiation, the fluorescence may obscure the Stokes line signals altogether. The problems connected to the fluorescence processes could be avoided by analyzing the anti-Stokes lines. To this aim the sample should be warmed to increase the population of the excited vibrational levels. However, this is complicated from an experimental point of view, since Raman spectrometers have filters that cut off from the Rayleigh line onwards, or may not have a suitable monochromator.
For this reason, modified Raman techniques have been developed that allow the detection of signals that are otherwise difficult to reveal. By coupling the sample with nanometric metal particles, the effect of surface plasmons can be exploited to enhance the Raman signals. Plasmons are the collective oscillations of the conduction electrons of these metal nanoparticles and depend on their size. These oscillations generate an additional electric field that couples with the radiation emitted by the sample. Therefore, an enhancement of the signal intensity is observed. This technique is called SERS (Surface Enhanced Raman Spectroscopy) and it is a promising alternative that helps to overcome various problems met in the case of fluorescence detection.

References
[1] J. R. Ferraro, K. Nakamoto and C. W. Brown, “Introductory Raman Spectroscopy”, Academic Press, San Diego, CA (1994).
[2] M. C. Gupta, “Atomic and Molecular Spectroscopy”, New Age International (2007).
[3] S. Nakashima and M. Hangyo, “Characterization of semiconductor materials by Raman microprobe”, IEEE J. Quantum Electron., 25(5), 965-975 (1989).

[4] E. Brauchle and K. Schenke-Layland, “Raman spectroscopy in biomedicine – non-invasive in vitro analysis of cells and extracellular matrix components in tissues”, Biotechnol. J., 8(3), 288-297 (2013).
[5] S. R. Sails, D. J. Gardiner, M. Bowden, J. Savage and D. Rodway, “Monitoring the quality of diamond films using Raman spectra excited at 514.5 nm and 633 nm”, Diamond Relat. Mater., 5, 589-591 (1996).

[6] F. Tuinstra and J. L. Koenig, “Raman spectrum of graphite”, J. Chem. Phys., 53(3), 1126-1130 (1970).

3.5 SECONDARY ION MASS SPECTROMETRY (SIMS)

Massimo BERSANI
bersani@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
Secondary Ion Mass Spectrometry is based on the bombardment of a sample surface by a primary ion beam and on the observation of the charged particles (secondary ions) consequently emitted from the surface. When a primary ion impinges on the sample surface, two effects are observed: implantation of the primary ion, with a penetration depth (R) of 1-100 Å for a 10 keV primary energy, and the emission of several kinds of particles (atoms, ions (around 1% of the total), electrons and photons); this latter effect is called sputtering. The ejected secondary ion species are analyzed by mass-spectrometric separation.
The topic of SIMS has been introduced and discussed in several books and papers [1-4], and since 1975 the state of the art of SIMS methods and instrumentation has been presented at a dedicated conference [5] and in the related proceedings (recently published as Surf. Interface Anal. special issues).
Figure 3.5.1 outlines the SIMS mechanisms. The impact of a primary ion on a surface induces an energy and momentum transfer, causing chemical modifications, lattice changes and material loss by sputtering. Sputtering usually happens through a collision cascade; prompt sputtering due to a direct collision is a rare event.
The primary beam has a micron or sub-micron diameter, and a sputtered region of around 500x500 microns is obtained by rastering the beam on the surface.
The operating conditions require a sample in ultra-high vacuum (10⁻⁷–10⁻¹¹ torr) and an ion beam with an energy between 0.2 and 30 keV.

Figure 3.5.1. Diagram of the SIMS phenomenon indicating the collision of
primary particles with a solid surface; their implantation and emission of
secondary particles

A basic schematic of the SIMS instrumentation is reported in Figure 3.5.2. The fundamental elements consist of:
● A primary beam source. Usually the produced ions are O2+, O−, Ar+, Cs+, Ga+, Au3+ or Bi2+; more recently, complex molecular ions such as C60+ and Ar872+ have been introduced for static surface applications.
● A sample chamber, in which the analytical effects of sputtering and secondary ionization take place. The target sample is introduced into this chamber via an entry chamber.
● Collection optics, which allow the selection of positive or negative secondary ions, maximize their transmission and homogenize their energy.
● A mass filter system, to separate the different masses. There are different mass analyzers: quadrupole, magnetic sector, double-focusing magnetic sector and time of flight.
● An ion detection system, which records the selected ion intensity. Electron multipliers, Faraday cups, CCD cameras and image plates are usually used in combination in order to cover the whole signal dynamic range (over 9 orders of magnitude) and to obtain different kinds of information.

Figure 3.5.2. Schematic diagram of the SIMS instrumentation main components

The SIMS analytical technique presents unique characteristics. The advantages and disadvantages of SIMS are summarized in Table 3.5.1 below.

Table 3.5.1. Advantages and disadvantages of the SIMS technique


Advantages:
- Sensitivity [1 ppm - 1 ppb]
- All elements are detectable
- Isotopic detection
- Good depth resolution [1-20 nm]
- Lateral resolution [0.3-20 μm]
- Quantification
- Insulators are analyzable
- Minimal sample preparation

Disadvantages:
- Ion yields vary by up to 6 orders of magnitude
- Destructive technique
- Depth resolution depends on sample morphology
- Specific standards are required
- Strong matrix effects
- Samples must be compatible with ultra-high vacuum

SIMS analysis requires a specialist operator, and the measurement time can vary from 5 minutes to several hours. Nowadays there are around 700 facilities in the world, and less than 50% of them are available for external samples.
The SIMS effect was observed for the first time by J. J. Thomson in 1910 [6]: “I had occasion in the course of the work to investigate the secondary Canalstrahlen produced when primary Canalstrahlen strike against a metal plate. I found that the secondary rays which were emitted in all directions were for the most part uncharged, but that a small fraction carried a positive charge.”
The first forerunner analytical applications date to 1949 [7], and in the 1960s the first commercial instruments became available.

3.5.1 Basic Principles


As reported in the introduction, the basic physical principles of SIMS are:

● Sputtering
● Ionization

As mentioned, the emission of secondary particles under beam bombardment is called sputtering. Typical sputtering times are in the range of 10⁻¹⁵ to 10⁻¹⁰ s after primary ion impingement. The energy transferred in a collision ranges between:

Emin = 0
and
Emax = 4·M1·M2/(M1 + M2)² · E0

where
M1 is the primary ion mass,
M2 is the target atom mass,
E0 is the primary beam energy.
The main measurable parameter linked to the sputtering process is the sputtering yield. The sputtering yield is the ratio of the number of sputtered atoms to the number of impinging primary ions. It can vary from 10⁻³ to 10¹ and depends on the primary beam parameters (energy, ion mass, incident angle) and on the sample characteristics (crystallinity, topography, atomic number) [2].
The sputtering rate can be derived from the sputtering yield as reported in the following equation:

SR = (Jp · Yn) / (e · ρ)

where
SR: sputtering rate
Jp: primary beam current density
Yn: sputtering yield
ρ: sample atomic density
e: electron charge
The sputtering rate can vary from 1 monolayer/hour to 5 nm/sec.
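A numerical sketch of the sputtering-rate relation above is given below; the beam current, raster area, yield and atomic density are typical, invented values used only for illustration.

# Sputtering rate from primary beam current density, sputtering yield and atomic density.
E_CHARGE = 1.602e-19   # C

def sputter_rate_nm_per_s(beam_density_A_cm2, sputter_yield, atomic_density_cm3):
    atoms_removed_per_cm2_s = (beam_density_A_cm2 / E_CHARGE) * sputter_yield
    rate_cm_per_s = atoms_removed_per_cm2_s / atomic_density_cm3
    return rate_cm_per_s * 1.0e7   # cm/s -> nm/s

# Example: 100 nA rastered over 200 x 200 um on silicon (5e22 at/cm^3), yield ~2.
current_density = 100e-9 / (0.02 * 0.02)   # A/cm^2
print(f"{sputter_rate_nm_per_s(current_density, 2.0, 5.0e22):.3f} nm/s")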
The ionization efficiency in the SIMS technique is called the ionic efficiency, and is defined as the fraction of sputtered atoms that are ionized. The ionic efficiency varies by many orders of magnitude for the various elements. The most obvious influences on the ionic efficiency are the ionization potential for positive ions and the electron affinity for negative ions.
The ionization process is quite complex and depends on many factors. There are many theoretical models which, however, only fit particular cases [8; 9]. Nevertheless, ionization processes can be subdivided into two main categories:
● Intrinsic emission, a consequence of the kinetic energy exchange during the sputtering process. In this case the ionization is due to an Auger effect.
● Chemical emission, due to the presence of reactive species and dependent on the interaction of the external electron shells. This effect can be enhanced by introducing reactive species during the ion bombardment (e.g. O and Cs).
Each element has a different secondary ion yield, with differences of several orders of magnitude.
The different ionization efficiencies lead to different analysis conditions for different elements, as indicated in the periodic table (Figure 3.5.3).
Therefore, O2+ is typically used to detect electropositive species, while for electronegative species Cs+ is used.

Figure 3.5.3. Secondary beam polarity monitored to obtain best sensitivity

3.5.2 SIMS Analytical Modes


There are two main analytical modes in SIMS analyses, and they are distinguished by the primary ion dose and hence by the sputtering regime.
The dynamic mode is characterized by a high primary ion dose (of the order of 10¹⁵ ions/cm²). Under this condition the structure and chemical composition of the target are transformed into a new equilibrium state. In the dynamic mode an evident erosion of the sample is registered: in a few minutes it is possible to obtain a sputtering crater several microns deep. Figure 3.5.4 shows the image of a dynamic SIMS crater.

Figure 3.5.4. Dynamic SIMS crater (300x300 microns) on DLC sample


Depth profiling or micro-bulk analyses are usually carried out in the dynamic operational mode. The principal field of application is semiconductors and electronics (around 45%), followed by geological and geospatial applications (15%) [10].
The main dynamic SIMS instruments are equipped with a double-focusing magnetic sector for mass analysis. The trajectory of the ions entering the magnetic sector depends on their mass. This detection scheme allows species from mass 1 to 500 to be monitored, and a limited number of species (up to 5-7) can be followed in a single analysis. The mass resolution is around 10,000.
The static SIMS mode is characterized by a low primary ion dose (below 10¹² ions/cm²). Its main characteristics are:
● Reduced chemical damage
● Ultra-surface analysis
● Elemental or molecular analysis
● Analysis is complete before a significant fraction of the molecules is destroyed

In static SIMS, due to the low primary ion dose, the surface damage is limited and only a fraction of the uppermost atomic layer is removed (the silicon surface density is 10¹⁵ at/cm²).
Time-of-Flight SIMS (ToF-SIMS) is the dominant experimental variant of static SIMS, mainly developed through the work of Benninghoven and his group in Münster [11].
ToF-SIMS is based on a time-of-flight analyzer, a drift region through which the emitted ions travel to reach the detector. Because the secondary ions are accelerated to the same kinetic energy, they travel in the drift region with a velocity related to their mass:

E = q·Ua = (1/2)·m·v²

t = Ld/v = Ld·√(m/(2·q·Ua))

where
m is the ion mass,
q is the ion charge,
Ld is the drift tube length,
Ua is the accelerating potential.

Therefore it is possible to measure the ion mass from its flight time. To measure the ion masses, the ions have to start at the same time; this is obtained by using a pulsed beam.
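The flight-time relation above can be inverted to estimate the mass from the measured time, as in the following sketch; the drift length and accelerating voltage are example numbers, not parameters of any specific instrument.

# Time-of-flight <-> mass conversion for singly charged secondary ions.
import math

E_CHARGE = 1.602e-19   # C
AMU = 1.661e-27        # kg

def flight_time_us(mass_amu, drift_length_m=2.0, accel_voltage_V=3000.0, charge_e=1):
    v = math.sqrt(2.0 * charge_e * E_CHARGE * accel_voltage_V / (mass_amu * AMU))
    return drift_length_m / v * 1.0e6   # seconds -> microseconds

def mass_amu(flight_time_us_, drift_length_m=2.0, accel_voltage_V=3000.0, charge_e=1):
    t = flight_time_us_ * 1.0e-6
    return 2.0 * charge_e * E_CHARGE * accel_voltage_V * (t / drift_length_m) ** 2 / AMU

t_si = flight_time_us(28.0)   # e.g. a mass-28 ion such as Si+
print(f"flight time ~ {t_si:.2f} us, back-converted mass ~ {mass_amu(t_si):.1f} amu")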
Figure 3.5.5 reports the ToF-SIMS synoptic diagram.

Figure 3.5.5. ToF-SIMS instrumentation synoptic

ToF-SIMS instrumentation allows all negative or positive ions to be monitored in a quasi-parallel mode, and the mass range extends from 1 to 10,000. The mass resolution is around 7,000. The use of a pulsed liquid-metal ion source allows a lateral resolution of 0.2 micron to be achieved. The main applications are 2D and 3D imaging and mass spectra.

3.5.3 Depth profiling


The analysis of the peak intensities as a function of the erosion time allows the determination of the concentration of the monitored elements within the sample. If the sputtering rate is constant, a simple measurement of the crater depth allows the conversion of time into depth.
In this type of analysis the fundamental parameters that characterize the measurement are the depth resolution, the dynamic range and the sensitivity.
The depth resolution depends on physical factors of the sputtering process and on parameters related to the measurement process, such as the crater shape. Among the physical effects limiting the depth resolution is ion beam mixing, which is caused by three types of processes: recoil mixing, cascade mixing and radiation enhanced diffusion.
Recoil mixing is due to the direct impact of primary ions on the sample atoms. The cascade mixing effect is the result of the motion and collisions of sample atoms, which have received momentum from the incident ions, with their neighbouring atoms.
Radiation enhanced diffusion is driven by the induced concentration gradients and by temperature. The shape of the analysis crater also has a direct influence on the profile depth resolution.
To obtain a suitable profile, the total area from which secondary ions are emitted cannot be used for the analysis, otherwise the crater walls would also contribute. In fact, the crater walls do not descend perpendicularly, but with a slope depending on the shape and size of the beam. For this reason the analysis area is usually fixed at 1/3 of the total sputtering crater by using mechanical and electronic gates.
The dynamic range is the ratio between the peak concentration and the minimum detectable concentration. In addition to the factors already seen for the depth resolution, the dynamic range also depends on the vacuum conditions in the analysis chamber and on memory effects.
Other analytical artifacts linked to SIMS depth profiling are reported in Figure 3.5.6; with suitable analytical conditions and measurement settings it is possible to overcome or minimize these effects.

Figure 3.5.6 Analytical artifacts linked to SIMS depth profile

The signal intensity depends on many physical and instrumental parameters:
● Primary beam density (intensity/raster area)
● Sputtering yield
● Element concentration (relative)
● Abundance of the monitored isotope of the element
● Ion yield of the element
● Instrument transmission + detector efficiency

The raw result of a SIMS profile is a graph of ion counts vs time. To obtain quantitative data, counts have to be transformed into concentration and time into depth (Figure 3.5.7).
The secondary ion yield is the fundamental quantity for quantitative SIMS analysis, so its dependence on the surface conditions is the main problem in the use of this analytical technique. The phenomenon known as the "matrix effect" is the drastic change of ionization efficiency from one matrix to another. To carry out quantitative analyses it is necessary to measure standards, i.e. samples similar to the sample to be analyzed and with a known element concentration (typically ion implants). From the standard it is then possible to determine the Relative Sensitivity Factor (RSF) for a specific element and obtain quantitative data.
The sputtering time can be transformed into depth by measuring the crater or by determining the sputtering rate on known references.
Figure 3.5.7 summarizes the quantification process.
To monitor positive secondary ions [+], an O2+ primary ion beam is used. To monitor negative secondary ions [-], a Cs+ primary beam is used. An alternative approach is to monitor MCs+ species (where M is the species of interest and Cs the implanted primary ion). This methodology is used in particular for multi-matrix samples in order to minimize matrix effects. The secondary ion yield is less dependent on the matrix, but in general the sensitivity is 1-2 orders of magnitude lower.
The sensitivity in SIMS depth profiling is between 10¹³ and 10¹⁶ at/cm³. The achievable measurement precision is around 0.5%, and the accuracy with suitable standards is below 5% [12].
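The RSF conversion from raw counts to concentration, together with the time-to-depth conversion from a crater measurement, can be sketched as follows; all numbers (counts, RSF, crater depth) are invented examples, and the expression C = RSF · I_analyte/I_matrix is the common definition used with point-by-point normalization to the matrix signal.

# SIMS quantification sketch: counts -> concentration via an RSF, time -> depth via crater depth.
def quantify_profile(times_s, analyte_counts, matrix_counts, rsf_at_cm3, crater_depth_nm):
    sputter_rate = crater_depth_nm / times_s[-1]              # assumes a constant sputtering rate
    depths_nm = [t * sputter_rate for t in times_s]
    concentrations = [rsf_at_cm3 * ia / im                    # C = RSF * I_analyte / I_matrix
                      for ia, im in zip(analyte_counts, matrix_counts)]
    return depths_nm, concentrations

# Invented example: boron counts against the Si matrix signal.
times = [0.0, 60.0, 120.0, 180.0]
b_counts = [5.0e3, 2.0e4, 8.0e3, 1.0e3]
si_counts = [1.0e6] * 4
depths, concs = quantify_profile(times, b_counts, si_counts, rsf_at_cm3=1.0e23, crater_depth_nm=90.0)
for d, c in zip(depths, concs):
    print(f"{d:5.1f} nm : {c:.2e} at/cm^3")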

Figure 3.5.7. Quantification schematic of SIMS profile

3.5.4 Applications
Boron Ultra Shallow Junctions Depth Profiling
The aggressive down-scaling of microelectronic device sizes requires a progressive reduction of the dopant distribution junction depth, together with high concentrations and abruptness. Ultra-low energy beam-line or plasma immersion ion implantation and innovative annealing approaches like flash or laser annealing allow ultra-shallow distributions confined in the top 20 nm of silicon to be achieved. However, those processes require an adequate characterization in order to quantitatively measure high dopant doses and concentrations, to identify the junction depth and to evaluate the abruptness of the dopant distributions.
Secondary ion mass spectrometry (SIMS) is the technique able to provide this
information but ultimate depth resolution and quantification protocols are
mandatory. Sputtering with O2+ primary ions and collecting positive secondary
ions allows excellent detection limits. Two sputtering conditions can be chosen
(Figure 3.5.8):
1. ‘fully oxidizing’ condition, where a controlled oxygen leak floods the erosion area: the best detection limits and depth resolutions are achieved
~ 122 ~
but boron segregation at the surface hinders accurate quantification at the native oxide/silicon interface.
2. ‘not fully oxidizing’ condition, keeping ultra-high vacuum in the sputtering chamber: detection limits are worse than in the previous case, but good accuracy in the top nanometres of the profile, especially at the native oxide/silicon interface, can be obtained.

Ultra-low impact energy and oblique incidence should ensure low penetration
depth of SIMS primary ions and reduced ion mixing depth with consequent
excellent depth resolution. However, early formation of topography on the crater
bottom can impact the stability of sputtering, affecting depth and concentration
accuracy, especially during erosion in ultra high vacuum. The possibility of
applying a rotation around the normal axis of the sample prevents or reduces the
formation of this topography allowing the best depth accuracy and resolution.

Figure 3.5.8 Boron depth profiles in silicon for a 1×10¹⁵ at/cm² dose with implant energy varying from 2 to 0.2 keV. Left: profiles acquired in ultra-high vacuum / ‘not fully oxidizing’ conditions; right: profiles acquired in O2 flooding / ‘fully oxidizing’ conditions. Both conditions used sample rotation during sputtering.

The combined effects of ultra-low energy sputtering, oblique incidence and sample rotation ensure an optimum depth resolution even in the ‘not fully oxidizing’ O2+ sputtering condition, preventing the early formation of ripples or other topography usually observed in the crater bottoms of silicon [13; 14].
Quantification protocols, developed by cross-checking quantitative results with complementary techniques, can be applied to obtain an accurate picture of dopant distributions in silicon or other semiconductors.
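For instance, a quantified profile can be cross-checked against the nominal implanted dose by integrating the concentration over depth; a minimal sketch, assuming hypothetical depth and concentration arrays from a quantified SIMS profile:

```python
import numpy as np

# Hypothetical quantified SIMS profile: depth [nm] and boron concentration [at/cm^3]
depth_nm = np.linspace(0, 50, 501)
conc     = 1e21 * np.exp(-depth_nm / 5.0)

# Retained dose = integral of concentration over depth (nm converted to cm)
depth_cm = depth_nm * 1e-7
dose = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(depth_cm))  # trapezoidal rule [at/cm^2]

print(f"Retained dose: {dose:.2e} at/cm^2")  # ~5e14 at/cm^2 for this toy profile
```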

~ 123 ~
The analyses described here were performed with a Cameca SC-Ultra 300 mass spectrometer, able to provide O2+ impact energies down to 0.25 keV at an oblique incidence of ~60° with respect to the normal. Sample rotation and oxygen flooding give flexibility in the sputtering conditions, so that the best methodology can be found for each specific analytical problem. Quantification based on standards traceable to NIST reference materials, with corrections in the native oxide and at the SiO2/Si interface, is applied (Figure 3.5.9) [15].

Figure 3.5.9 Boron SIMS profile obtained on a B delta-doped Si sample grown by reduced-pressure chemical vapor deposition, with the first 5 deltas spaced 5.8 nm apart. Red triangles indicate the nominal delta positions.

~ 124 ~
Conclusions
SIMS can be used for nanomaterial characterization, although it is essentially limited to 1D (in-depth) information. Indeed, only a defined group of nanostructures can be analyzed, such as uniform nanocoatings, surface modifications, quantum wells and delta doping. However, due to its unique characteristics, SIMS has to be considered an effective technique for nanocharacterization. Outstanding sensitivity (1 part per million to 1 part per billion), suitable depth resolution (below 1 nm), and the capability to detect all elements (including H and He) and their isotopic abundances make SIMS an analytical technique that is also needed for nanomaterial characterization, at least to support complementary information.
At the moment the lateral resolution limit of SIMS is around 0.1 micron (by using Ga or bismuth liquid metal ion sources) [16; 17]. The use of new kinds of equipment and ion sources may allow further developments towards a powerful analytical tool able to join SIMS capabilities to real 3D nanoanalysis.

~ 125 ~
References
[1] A Benninghoven et al. Secondary Ion Mass Spectrometry. John Wiley and
Sons NY 1987, ISBN 0471-01056-1
[2] R. Wilson, A. Stevie, C Magee. Secondary Ion Mass Spectrometry, practical
handbook. John Wiley and Sons NY 1989 ISBN 0-471-51945-6
[3] J.C. Vickerman et al. Secondary Ion Mass Spectrometry: Principles and Applications. Oxford Science Publications, Oxford, 1989. ISBN 0-19-855625-X.
[4] H. Oechsner, Thin Film and Depth Profile Analysis. Springer-Verlag, Berlin, 1984. ISBN 3-540-13320-8.
[5] http://www.simssociety.org/employment.htm
[6] J. J. Thompson, “Rays of Positive Electricity”, Phil. Mag., 20, 252 (1910).
[7] R. F. K Herzog and F. P. Viehbock, “Ion Source for Mass Spectrography”,
Phys. Rev., 76, 855L (1949).
[8] H.J. Jonsson et al. Surface Sci. 180 (1987) 353.
[9] J.H. Weare. Potential energy Surfaces and Dynamic Calculations, Plenum,
N.Y. 1981.
[10] D. Brune, R. Hellborg, H.J. Whitlow, O. Hunderi. Surface Characterization. Wiley-VCH, Weinheim, 1997. ISBN 3-527-28843-0.
[11] J.C. Vickerman and D. Briggs. ToF-SIMS. IM Publications, 2001, ISBN 1-9010019-03-9.
[12] M. Barozzi, D. Giubertoni, M. Anderle, M. Bersani, Appl. Surf.Sci. 231-
232 (2004), p. 768-771.
[13] M. Bersani, D. Giubertoni, E. Iacob, M. Barozzi, S. Pederzoli, L. Vanzetti,
M. Anderle, “Uphill diffusion of ultralow-energy boron implants in
preamorphized silicon and silicon-on-insulator”. Appl. Surf. Sci. 252, 2006,
7315.
[14] D. Giubertoni, E. Iacob, P. Hoenicke, B. Beckhoff, G. Pepponi, S. Gennaro,
M. Bersani, “Quantitative depth profiling of boron and arsenic ultra low energy
implants by pulsed rf-GD-ToFMS”. J. Vac. Sci. Technol. B 28(1), 2010, C1C84.

~ 126 ~
[15] P. Hoenicke, B. Beckhoff, M. Kolbe, D. Giubertoni, J. A. van den Berg, G.
Pepponi, “Depth profile characterization of ultra shallow junction implants”.
Anal. Bioanal. Chem. 396, 2010, 2825.
[16] B. Hagenhoff, "High Resolution Surface Analysis by ToF-SIMS". Microchimica Acta, April 2000, Vol. 132, Issue 2, pp. 259–271.
[17] M. Kubicek, G. Holzlechner, A. K. Opitz, S. Larisegger, H. Hutter, J. Fleig, "A novel ToF-SIMS operation mode for sub 100 nm lateral resolution: Application and performance". Applied Surface Science, Volume 289, 15 January 2014, Pages 407–416.

~ 127 ~
~ 128 ~
SECTION 4
APPLICATIONS

~ 129 ~
~ 130 ~
4.1 INTRODUCTION to SURFACE PLASMONS
AND THEIR APPLICATIONS
Giovanni PATERNOSTER
paternoster@fbk.eu
FONDAZIONE BRUNO KESSLER
INTRODUCTION
Plasmonics is a field of photonics that explores the interaction between the electromagnetic field and the free electrons in a metal. The term plasmonics began to be used by the scientific community in the second half of the 20th century. Although it might sound relatively new, the effects of the interaction of light with charges at metal surfaces are not: the coupling of light to charges at metal surfaces was in fact exploited hundreds of years earlier, although the science underlying these phenomena was not understood at the time.
Within the last 30 years, plasmonics has become one of the most populated fields of research in optics and photonics. Since the 1980s, many developments have contributed to the modern revival of plasmonics, in particular the work on surface-enhanced Raman scattering. Additionally, advances in tools for characterizing structures (scanning electron and atomic force microscopy), fabrication (electron-beam and ion-beam lithography) and nanoscale imaging of light (near-field scanning microscopy) were key catalysts for the explosion of research in the past decade or so. But what is plasmonics? S. A. Maier, in the introduction of his book "Plasmonics: Fundamentals and Applications" [1], says:
"You just have Maxwell’s equations, some material properties and some boundary conditions, all classical stuff - what’s new about that? Well, would you have predicted that just by imposing appropriate structure on a metal one could make a synthetic material that would turn Snell’s law on its head? Or that you could squeeze light into places less than one hundredth of a wavelength in size? No new fundamental particles, no new cosmology - but surprises, adventure, the quest to understand - yes, we have all of those, and more.”
From the microscopic point of view, plasmonic effects are related to the interaction between the electromagnetic field and the free electrons of a metal, which can be excited by the electric component of light into collective oscillations and should, strictly speaking, be considered in the context of quantum mechanics. Despite the quantum nature of plasmonics, the physics and the laws governing all the plasmonic
~ 131 ~
effects belong to classical physics, and the most important properties of plasmonics can be explained exhaustively by means of classical electromagnetic theory alone.
In recent years, plasmonics has been exploited and applied in many different fields of science and technology: from biosensors to optical sensors and solar cells, from holography to lasers. This work does not claim to provide a full and exhaustive treatment of the modern theory of plasmonics and its applications; it simply aims at presenting a brief and clear overview of plasmonics fundamentals and some examples of its applications. In particular, we will focus on surface plasmon polaritons and we will present some examples of applications in the field of optical sensors. For a more exhaustive and comprehensive dissertation on plasmonics, a large number of textbooks have been published in recent years. Among others, we encourage the reader to refer to the already mentioned "Plasmonics: Fundamentals and Applications" [1] or other excellent textbooks [2,3].

4.1.1 Surface Plasmon Polaritons


Surface plasmon polaritons (SPPs) are electromagnetic excitations propagating
at the interface between a dielectric and a conductor, evanescently confined in
the perpendicular direction. These electromagnetic surface waves arise via the
coupling of the electromagnetic fields to oscillations of the conductor’s electron
plasma. As mentioned in the introduction, the fundamental properties of SPPs
can be obtained just starting from Maxwell equations, dielectric functions of
metals and dielectrics, and some boundary conditions. In this chapter, we will
retrace the key steps of this theory, obtaining and discussing the fundamental

properties of SPPs.

~ 132 ~
Figure 4.1.1. Definition of the geometry. The scheme represents a single interface between air and a conductor. This is the simplest geometry supporting SPP propagation in the x-y plane.
The simplest geometry supporting propagation of SPPs is that of a single interface (Fig. 4.1.1) between a dielectric half-space (air in this case, at z > 0) and a conductor (for z < 0). In our analysis we assume a harmonic time dependence of the electric field, E(r, t) = E(r) e^(−iωt), and we limit the geometry to one-dimensional spatial coordinates. The starting point of our analysis is the Helmholtz equation, which in our 1-dimensional example reduces to:

∂²E(z)/∂z² + (k₀² ε − β²) E = 0      (1)

where E is the electric field, ε the dielectric function, k₀ = ω/c the wave vector of the propagating wave in vacuum, and β = k_x the propagation constant of the traveling wave, corresponding to the component of the wave vector in the direction of propagation. An extended discussion of the properties and applications of this equation can be found in [4].
properties and applications of this equation can be found in [4].
In this 1-dimensional structure, the SPP propagates at the surface along the x-direction and has to be evanescent along the perpendicular z-direction. Therefore, the specific solutions of Eq. (1) describing such a wave have the form:

E(x, y, z) = E₀ e^(iβx) e^(−k_z,1 z),   z > 0      (2.1)

E(x, y, z) = E₀ e^(iβx) e^(+k_z,2 z),   z < 0      (2.2)

The exponential term e^(iβx) represents the propagation along the x-direction, described by the wave vector β, while e^(∓k_z,i z) describes the evanescent component along the z-axis, where k_z,i (i = 1, 2) are the components of the wave vector perpendicular to the interface in the two media, for the upper and lower half-spaces, respectively. The reciprocal value, 1/|k_z,i|, defines the

~ 133 ~
evanescent decay length of the field in the direction perpendicular to the interface.
By considering Eq. (1) and the Maxwell equations, it is possible to obtain a set of independent equations giving the explicit expression of all the components of both the electric and magnetic fields. Such a system of equations allows two sets of self-consistent solutions with different polarization properties of the propagating waves. Considering the planar interface perpendicular to the z-axis, and an electromagnetic mode propagating along the x-direction, such a mode can be classified as (a) transverse electric (TE) or (b) transverse magnetic (TM), according to whether it possesses only a single electric or magnetic field component along the y-direction, respectively. In TM (or p-polarized) modes, only the field components E_x, E_z and H_y are nonzero, while in TE (or s-polarized) modes, only H_x, H_z and E_y are nonzero.

Considering TM modes only, the electric and magnetic fields can be calculated in both half-spaces of Figure 4.1.1. By imposing the continuity of the fields at the interface, two important results are obtained:

k_z,2 / k_z,1 = − ε₂(ω) / ε₁(ω)      (3)

β = k₀ √[ ε₁(ω) ε₂(ω) / ( ε₁(ω) + ε₂(ω) ) ]      (4)

Note that from Eq. (3), and considering the sign convention used in Eq. (2), confinement to the surface demands Re[ε₂(ω)] < 0 if ε₁(ω) > 0: the surface waves exist only at interfaces between materials with opposite signs of the real part of their dielectric permittivities, i.e. between a conductor and an insulator.
Eq. (4) is the central result describing SPPs, representing the relationship between frequency and spatial wave number, and it is named the dispersion relation. The spatial wave number β(ω) represents the inverse of the distance over which the fields undergo one oscillation, while ε₁(ω) and ε₂(ω) are the frequency-dependent permittivities of the dielectric (air in this case, ε₁(ω) = 1) and of
~ 134 ~
the metal, respectively. Near optical wavelengths, the dielectric function of
many conductors (such as silver or gold) has the form

ε₂(ω) = 1 − ω_p² / ω²      (5)

where ω_p is the plasma frequency of the metal.

Figure 4.1.2. Ideal dispersion relation of an SPP at the metal/air interface (a) and
sketch of an SPP field lines propagating at the metal/air interface (b).

Fig. 4.1.2 shows the dispersion relation of an SPP (Eq. 4) at the metal/air interface. It can be noted that the plasmon wavelength is always smaller than the free-space wavelength. For low frequencies, the optical fields of surface plasmons vary spatially on a scale similar to the free-space wavelength and are delocalized; that is, the fields extend over many wavelengths into the dielectric space. The bound nature of SPPs is described by the curvature of the dispersion curve at higher wave vectors, which lies to the right of the light line of the dielectric. As the wave vector increases, the frequency approaches the value:

ω_sp = ω_p / √(1 + ε₁)      (6)

which is called the "surface plasmon frequency", as can be shown by inserting the dielectric function (Eq. 5) into Eq. (4). In this regime, the surface plasmon is characterized by fields that are tightly bound to the metal surface, decaying rapidly into the free-space region. The wave vector β goes to infinity as the

~ 135 ~
frequency approaches ω_sp, and the group velocity v_g → 0. The mode thus acquires electrostatic character, and it is known as the surface plasmon.
It should be noted that a solution similar to Eq. (2) is not allowed for TE modes. Thus, no surface modes exist for TE polarization: SPPs exist only for TM polarization.
The above discussion assumed an ideal conductor with dielectric function described by Eq. (5), i.e. with Im[ε₂(ω)] = 0. Real metals, however, suffer from both free-electron and interband damping; thus ε₂(ω) is complex, and with it also the SPP propagation constant β. SPPs propagating at the surface of a dissipative conductor now approach a maximum, finite wave vector at the surface plasmon frequency of the system. This limitation puts a lower bound both on the wavelength of the surface plasmon and on the amount of mode confinement perpendicular to the interface.
In real structures, the traveling SPPs are damped, with an energy attenuation length (or propagation length) L = [2 Im(β)]⁻¹, typically between 10 μm and 100 μm in the visible regime. Furthermore, the vertical confinement (quantified by 1/|k_z|) is on the order of 100 nm at those frequencies. In this regime, the surface plasmon is tightly bound to the metal surface, and the field is still confined in a region much smaller than the light wavelength.
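As a rough numerical illustration of these length scales, the following minimal sketch evaluates the dispersion relation of Eq. (4) for a metal described by a simple Drude model; the parameter values are indicative of gold and are assumed here for illustration only:

```python
import numpy as np

# Drude permittivity of the metal (indicative parameters for gold; assumed values)
hw_p, hw_gamma = 9.0, 0.07            # plasma and damping energies [eV]
def eps_metal(E):                     # E = photon energy [eV]
    return 1 - hw_p**2 / (E**2 + 1j * E * hw_gamma)

wl   = 800e-9                         # free-space wavelength [m]
E    = 1239.84e-9 / wl                # photon energy [eV]
eps1, eps2 = 1.0, eps_metal(E)        # air / metal permittivities
k0   = 2 * np.pi / wl                 # free-space wave vector [1/m]

beta = k0 * np.sqrt(eps1 * eps2 / (eps1 + eps2))   # dispersion relation, Eq. (4)

L_spp    = 1 / (2 * beta.imag)                     # propagation length
kz_air   = np.sqrt(beta**2 - eps1 * k0**2)         # decay constants, from Eq. (2)
kz_metal = np.sqrt(beta**2 - eps2 * k0**2)

print(f"SPP wavelength        : {2*np.pi/beta.real*1e9:6.0f} nm")
print(f"Propagation length    : {L_spp*1e6:6.1f} um")
print(f"Decay length in air   : {1/abs(kz_air)*1e9:6.0f} nm")
print(f"Decay length in metal : {1/abs(kz_metal)*1e9:6.0f} nm")
```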
It should be noted that pure SPP modes are not coupled to radiative modes (plane waves, for example). Even though SPPs are tightly bound to the surface, it therefore makes little sense to talk about field enhancement for this simple flat structure. However, different nano-patterned structures support surface plasmons that are simultaneously coupled to radiative modes. When the optical fields present on such structures are induced by an incident wave, one can describe the field enhancement as the ratio of the local electric field to that of the incident electric field. This field-enhancement effect has profound implications for many optical phenomena and applications.

~ 136 ~
4.1.2 Surface Plasmons Excitation
Surface plasmon polaritons on flat metal/dielectric interfaces cannot be excited directly by light beams, since β > k sin(θ), where θ is the incident angle and k the wave vector of the impinging photon; the in-plane momentum therefore cannot be matched, even at grazing incidence. However, different techniques can be used to reach phase matching between the incoming beam and the SPP on a metal/dielectric interface. Two of the most common methods are:
i) prism coupling and
ii) nano-gratings or corrugated surfaces, represented in Figure 4.1.3.a and Figure 4.1.3.b respectively.

In the first case, a prism with dielectric constant ε_prism is coupled to a thin metal film, with air on the other side. In this configuration, the metal layer is sandwiched between the prism and air (which has the lower refractive index). The impinging photons, reflected at the interface between the prism and the metal, have in-plane momentum k_x = k₀ √ε_prism sin(θ). If the metal is thin enough, this in-plane momentum can be sufficient to excite SPPs at the back interface between the metal and the lower-index dielectric medium (air in this case).
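Using the notation introduced above, the resonance (phase-matching) condition of the prism coupler can be written compactly as

$$ \sqrt{\varepsilon_{\mathrm{prism}}}\,\sin\theta_{\mathrm{SPR}} = \mathrm{Re}\!\left(\frac{\beta}{k_0}\right), $$

so that, since √ε_prism > 1, an incidence angle θ_SPR above the critical angle for total internal reflection can be found at which the in-plane momentum in the prism matches the propagation constant of the SPP at the metal/air interface.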

The mismatch in wave vector between the in-plane momentum k sin(θ) of the impinging photons and β can also be overcome by patterning the metal surface with a shallow grating of grooves or holes with lattice constant a. The lattice constant should be comparable to the wavelength of the incident beam; thus nano-gratings with a < 1 μm should be used in the optical range. For the simple one-dimensional grating depicted in Figure 4.1.3.b, phase matching takes place when

β = k sin(θ) ± ν g      (7)

is fulfilled, where θ is the incident angle, g = 2π/a is the reciprocal vector of the grating, and ν = 1, 2, 3, .... In this case Eq. (7) can be satisfied thanks to

~ 137 ~
the diffraction of the incoming light by the grating, which increases the in-plane wave vector by νg, depending on the diffraction order.
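As a simple limiting case of Eq. (7): at normal incidence (θ = 0) and for the first diffraction order (ν = 1), phase matching reduces to

$$ \beta = g = \frac{2\pi}{a} \;\Longrightarrow\; a = \frac{2\pi}{\beta} \approx \lambda_{\mathrm{SPP}}, $$

i.e. the grating period must be close to the SPP wavelength, which is slightly shorter than the free-space wavelength of the incident light.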

Typically, SPP excitation manifests itself as a minimum in the reflected beam


intensity. Figure 4.1.3.c shows the light reflection measured by illuminating a
metal grating with an external beam as a function of the incident angle. When
the phase matching between the SPP and the incident photons is reached, the
reflected light is strongly reduced.

Figure 4.1.3 Schematic diagrams of surface plasmon polariton couplers: (a) a metal-coated prism coupler can excite SPPs by making use of total internal reflection at the prism-metal interface; (b) a metallic grating structure can also excite SPPs by diffracting light into several orders.

4.1.3 Surface Plasmons for Chemical and Bio Sensing


When an SPP is excited on a metal/dielectric interface by an impinging optical
beam, the free incident wave is turned into an SPP mode. This implies two main
consequences:
i) an incident beam propagating perpendicularly to the surface can be folded into a horizontal surface wave, which propagates along the surface;
ii) the optical field density close to the surface becomes much more intense, leading to local field enhancements of up to hundreds or thousands of times. Such a field-enhancement effect can influence
~ 138 ~
several optical processes such as fluorescence, Raman scattering
and infrared absorption, resulting in plasmon-enhanced fluorescence
(PEF) [5], surface enhanced Raman scattering (SERS) [6], and
surface-enhanced infrared absorption spectroscopy (SEIAS) [7].
The peculiar characteristics of plasmonic structures can be exploited for developing high-sensitivity chemical and bio-sensors [8-10]. Among the above-mentioned sensing technologies, plasmonics-based refractometric sensors can be considered one of the simplest plasmonic sensing systems [9]. A typical scheme of such a sensor, based on prism coupling, is depicted in Figure 4.1.4. One surface of the prism is coated with a thin metal film, typically 20-50 nm of gold or silver. In this configuration, the metal is highly reflective except at a specific angle at which the SPP is excited, referred to as the SPR angle, which satisfies the conservation of total momentum. The metal surface is typically treated with some elements (ligands) aimed at detecting a specific molecule (analyte). When the molecules bind to the ligands at the metal surface, the SPP resonance is perturbed, as β is strongly dependent on the dielectric function of the surrounding medium, and any modification at the surface will cause a shift in the spectral properties. Therefore, the SPP band red-shifts, because the refractive index of the bound molecules is higher than that of the aqueous solution, and the structure functions as a sensor [11]. The performance of such a sensor can be evaluated by means of the refractive index sensitivity parameter S = Δλ/Δn, expressed in units of nm per RIU. This parameter quantifies the shift of the plasmon resonance wavelength (Δλ) for a unit variation in the refractive index of the surrounding medium.
Similar plasmonics-based refractometric sensors can be fabricated by replacing the flat metal film and the prism with a large-area periodic nano-array pattern such as gratings, nanosphere arrays, nano-disc arrays and nano-hole arrays. In particular, plasmonic nano-hole arrays are particularly suitable for integration into microfluidic systems [12] for real-time measurements of antibody-ligand kinetics. For example, recent studies have demonstrated a high-performance microfluidic nano-hole array with a refractive index sensitivity of 1520 nm/RIU.
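As a worked example of this figure of merit: with S = 1520 nm/RIU, an illustrative refractive-index change of Δn = 10⁻⁴ RIU at the sensor surface produces a resonance shift of

$$ \Delta\lambda = S\,\Delta n = 1520~\mathrm{nm/RIU} \times 10^{-4}~\mathrm{RIU} \approx 0.15~\mathrm{nm}, $$

a shift that can be resolved by a good laboratory spectrometer.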

~ 139 ~
4.1.4 Plasmonic Photodetectors
Another thriving application of plasmonics is in the field of photodetectors for visible and infrared (IR) light [13]. A plasmonic photodetector is a device capable of detecting light by involving surface plasmons in the photodetection process. Such a detector typically combines a metallic structure that supports surface plasmons with a standard photodetection structure. The silicon p-n junction can be considered one of the most commonly used photodetection structures. It detects light via the electron-hole pair (EHP) mechanism, which involves three main steps:
(i) optical absorption in the semiconductor, creating electron-hole pairs,
(ii) separation of the EHPs and transport across the reverse-biased junction,
(iii) collection of the photogenerated carriers at the device contacts.
Such a structure has numerous advantages in terms of performance, cost and reliability due to the well-established silicon manufacturing technology. However, silicon has some intrinsic limits, as it is a weakly absorbing material at optical wavelengths close to its band gap (1.1 eV). Photons in the near-IR range (900-1100 nm) pass through thin silicon detectors without being absorbed and detected.
Metal nanogratings coupled to semiconductor photodetectors have the potential to enhance the optical absorption through SPP excitation facilitated by light-metal interactions. SPPs feature evanescent and highly confined EM fields at a metal/dielectric surface. When the SPP is sufficiently close to the active region of the detector, an effective light confinement is obtained in a subwavelength region of the semiconductor, leading to an increase in light absorption. Such an effect has been extensively exploited for the development of IR-sensitive photodetectors [13] as well as for photovoltaic applications [14].

~ 140 ~
Figure 4.1.4 Scheme of a plasmonic sensing system based on the prism coupling configuration. The reflection of the incident light by the metal film shows a dark line due to the SPP absorption. This plasmonic sensing system can measure angle-resolved responses upon the binding of analytes. Reproduced from [10].
Figure 4.1.5. Scheme of a plasmonic photodetector based on a silicon detector and a nano-grating supporting SPPs: 3-dimensional sketch (a) and cross section (b). The magnitude of the magnetic field at the detector surface when the system is illuminated with a 950 nm plane wave is reported in (c), while the fraction of
~ 141 ~
the optical power absorbed in the detector as a function of the incoming light wavelength is plotted in (d).
A possible plasmonic photodetector structure is represented in Figure 4.1.5, where a 1-dimensional (1D) metal grating is placed on top of a thin silicon photodetector passivated with a thin dielectric film. Such a structure is conceived to be illuminated from the top surface. This specific detector is optimized to enhance the absorption of NIR light at a wavelength of 950 nm. The detector is only 3 μm thick, which by itself (without any plasmonic structure) absorbs only 4% of the photons at that wavelength. When the grating lattice constant is suitably tuned, the incoming light beam passes through the grating and excites SPPs at the bottom metal/dielectric surface. The incoming photons, travelling as a free wave perpendicularly to the detector and originally passing through the weakly absorbing silicon, are now converted into an SPP, which travels along the detector surface; since this path is much longer than the detector thickness, the absorption probability increases.
The field enhancement at the detector surface can reach values from 10 to 100 fold, leading to a proportional increase of the photo-generated current. This effect is clearly visible in Figure 4.1.5.c, where the magnetic field magnitude is reported. In this figure the EM field enhancement close to the detector surface and the periodic behavior of the SPPs at the metal/semiconductor surface are clearly visible.

Figure 4.1.5.d reports the total fraction of the power absorbed in the 3 μm thick silicon as a function of the wavelength. It is worth noting that the absorption is peaked at 950 nm, where 28% of the photons are absorbed in just 3 μm of silicon, an 8-fold enhancement with respect to the reference structure (with the same antireflective coating but without any metal grating).
Other structures exploiting SPPs in the photodetection process have been studied in recent years. The involvement of SPPs has led to detectors with improved performance and greater functionality. SPP detector architectures are highly varied, due to the diversity of metallic structures that support SPPs and the diversity of detection schemes and materials that are available. An exhaustive review of plasmonic photodetectors is given in [13] by P. Berini.

~ 142 ~
References
[1] Maier, Stefan Alexander, “Plasmonics: fundamentals and applications”,
Springer Science & Business Media, 2007.
[2] “Modern Plasmonics”, Edited by N. V. Richardson and Stephen Holloway,
Handbook of Surface Science, North-Holland, 2014.
[3] Sarid, Dror, and William Challener, “Modern introduction to surface
plasmons: theory, Mathematica modeling, and applications”. Cambridge
University Press, 2010.
[4] Yariv, Amnon, "Optical Electronics in Modern Communications", Oxford University Press, Oxford, UK, fifth edition, 1997.
[5] Russell, K. J., Liu, T-L., Cui, S. & Hu, E. L. Nature Photon. 6, 459–462
(2012).
[6] Fan, M., Andrade, G. F. S. & Brolo, A. G. Anal. Chim. Acta 693, 7–25
(2011)
[7] Osawa, Masatoshi, et al. "Surface enhanced infrared absorption
spectroscopy." Analytical sciences 7. Supple (1991): 503-506.
[8] Anker, Jeffrey N., et al. "Biosensing with plasmonic nanosensors." Nature
materials 7.6 (2008): 442-453.
[9] Brolo, Alexandre G. "Plasmonics for future biosensors." Nature Photonics
6.11 (2012): 709-713.
[10] Li, Ming, Scott K. Cushing, and Nianqiang Wu. "Plasmon-enhanced optical
sensors: a review." Analyst 140.2 (2015): 386-406.
[11] Mayer, Kathryn M., and Jason H. Hafner. "Localized surface plasmon
resonance sensors." Chemical reviews 111.6 (2011): 3828-3857.
[12] Pang, Lin, et al. "Spectral sensitivity of two-dimensional nanohole array
surface plasmon polariton resonance sensor." Applied Physics Letters 91.12
(2007): 123112.
[13] Berini, Pierre. "Surface plasmon photodetectors and their applications."
Laser & Photonics Reviews 8.2 (2014): 197-220.
[14] Atwater, Harry A., and Albert Polman. "Plasmonics for improved
photovoltaic devices." Nature materials 9.3 (2010): 205-213.

~ 143 ~
4.2 ELECTRONICS APPLICATIONS
Meltem BALABAN
mltm.blbn@gmail.com
PAMUKKALE UNIVERSITY

INTRODUCTION
Preparation of this chapter required extensive investigation and consolidation of resources, from current and future-oriented scientific, industrial and roadmap points of view.
“Electronics applications of nanotechnology” should be considered as a
synonym of “nanoelectronics applications”. Hence, this chapter is organized
accordingly, starting from introduction of fundamental concepts of
nanoelectronics. Nanoelectronics can actually be considered as the
“prerequisite” of novel nanoelectronic devices, meaning, devices that are
produced using nanomaterials and/or nanofabrication techniques at the nanoscale. It is widely accepted that one of the main resulting compact "nanotechnology-facilitating nanoelectronic devices" would be the "nanocomputer" (a variation of which is the quantum computer). Therefore, after a brief
introduction to nanoelectronics, the nanocomputer concept is introduced within
this chapter. Then, other current nanoelectronic studies/applications are
introduced, respectively. In this chapter, nanoelectronics development areas are
given in a separate section. Since nanoelectronics is a novel subject area under
research on its own, most of the studies that are mentioned in different sections
constitute the development areas of nanoelectronics, too. The chapter is finalised
with a conclusion, featuring a nanoelectronics roadmap report.
Figure 4.2.1 clearly shows electronics as one of the main application areas of nanoparticles. Nanosensors, nanotransistors, nanocomputers, nanoscale (1-100 nm in length) integrated circuits (ICs) and nanoscale/nanotechnological data storage are related attention-grabbing sub-components of nanoelectronics appearing in Figure 4.2.1.

~ 144 ~
Figure 4.2.1 Applications of Nanoparticles (Image Resource: "Commercial scale
production of inorganic nanoparticles"[2])

4.2.1 Nanoelectronics
Nanoelectronics is based on the application of nanotechnology in electronics and
electronic components. It can be said that current main building blocks of
electronics are transistors, sensors and memories. Nanoelectronics generally
recalls the field of electronic components, but special attention should be given
to transistors.
A transistor is a device that regulates current or voltage flow and produces
electronic signals. Transistors consist of layers of a semiconductor material,
capable of carrying a current. Modern ICs (integrated circuits) use a technology called complementary metal-oxide-semiconductor (CMOS), which uses pairs of transistors, one conducting with electrons and the other with electron holes. The semiconductor

~ 145 ~
material used in CMOS is silicon. But silicon suffers from low hole mobility and from poor performance at higher temperatures and at transmitting light.
MEMS stands for Micro Electro Mechanical Systems. Currently, MEMS consist of man-made mechanical elements, sensors, actuators and electronics that are produced using microfabrication technology and are integrated on a silicon substrate. The term MEMS is frequently used for miniaturized devices that are based on silicon technology or on traditional chemical or mechanical precision engineering.
The graphene transistor can overcome the limits of silicon and give way to
flexible electronics. It allows electrons to move at an extraordinarily high speed.
The ICs based on graphene transistors (first developed by IBM[10]) are built on
a wafer of silicon carbide, and consist of field-effect transistors (FETs) made of
graphene.
NEMS stands for Nano Electro Mechanical Systems. NEMS extend
miniaturization further toward the ultimate limit of individual atoms and
molecules. NEMS are man-made devices with functional units on a length scale
between 1 and 100 nm. Some NEMS are based on the movement of nanometer-
scale components. It can be implied that NEMS is used for nanodevices based
on “graphene-and-beyond technologies” and traditional and non-traditional
engineering.
ASIC stands for Application Specific Integrated Circuits. They are non-standard
integrated circuits that have been designed for a specific use or application.
Generally, an ASIC may contain a very large part of the electronics needed on a single integrated circuit. ASICs enable significant amounts of circuitry to be incorporated onto a single chip, saving the board area that would be needed if the circuits were assembled from separate chips.
Nanoelectronics extends miniaturization further toward the ultimate limit of
individual atoms and molecules. On such a small scale, billions of devices could
be integrated into a single nanoelectronic system. Nanoelectronics can be considered a disruptive technology because present candidates for nanoelectronic functional elements are significantly different from traditional transistors.
Lithography is discussed in Section 2.8 of Nanotechnology 1 book.
Photolithography is the most widely used technique in microelectronic

~ 146 ~
fabrication. It is used particularly for mass production of ICs. It is the process of
transferring geometric shapes on a mask to the surface of a silicon wafer.
Nanolithography is used in nanoelectronic fabrication. Nanolithography would
be used, for example, for the nanofabrication of ICs (nanocircuits, at nanoscale)
and NEMS, and for many other multidisciplinary applications resulting from
nanoresearch. X-ray lithography and nanoimprint lithography are two promising, currently used nanolithography techniques for the fabrication of memory circuits and other nanoscale circuitry.
Nanoelectronics (circuits built with components on the scale of 10 nm) may become the successor of lithography-based ICs. Nanoscale transistors have sizes smaller than 100 nanometers. They are so extremely small that dedicated studies are needed to understand their quantum mechanical properties and inter-atomic design. As a result, these transistors are in the nanometre range and are designed through nanotechnology. Their design is also very different from that of traditional transistors, and they are usually mentioned together with one-dimensional nanotubes/nanowires and hybrid/advanced molecular electronics.
Nanotechnology puts emphasis on miniaturization. Nanoelectronics, especially
the nanotransistor, is one of the best application areas regardingly. Remarkable
technological progress is achieved in reduction in the size of transistors and
increase in the number of transistors per chip. Chip designers can create more
complex integrated circuits, using more transistors per chip. This has a remarkable transformative effect on society and can produce evolutions in the short term and revolutions in the medium/long term over current purely mechanical structures and technologies. One of the distinctive industrial examples of this is the automotive industry. We all witness the fact that the completely mechanical object of past times, the automobile, nowadays owes a very large part of its value to electronics (the engine computer, the airbags, the anti-skid brakes, etc.).
Current application areas of nanoelectronics can be summarised as improving
display screens on electronic devices (by reducing power consumption while
decreasing the weight and thickness), increasing the density of memory chips,
and reducing the size of transistors used in integrated circuits so that extremely
small nanocomputers can be produced.
Some Nanoelectronics Applications Under Development are:

~ 147 ~
 Flexible electronic circuits consisting of cadmium selenide nanocrystals
deposited on plastic sheets used in flexible electronics,
 "Nanoemissive" display panels where carbon nanotubes are used to
direct electrons to illuminate pixels, resulting in lightweight panels with millimeter-level thickness,
 Using electrodes made from nanowires that would enable flat panel
displays to be more flexible and thinner than current flat panel displays,
 Displays using quantum dots, where quantum dots replace the
fluorescent dots used in current displays. They are simpler to make than
current displays and they use less power.

Nanotransistors
Computer Engineering, Computer Sciences and Electronics Engineering
undergraduate students are supposed to know about Mr. Gordon Moore (one of
the co-founders of Intel) and Moore’s Law. His law states, in a simplified way,
that computer processor speeds will double every two years. In practice, over the
passing years, processor speeds started to level off while the number of transistors in the CPU (Central Processing Unit) grew into the millions and beyond. It became more accurate to apply Moore's law to transistors than to speed. Therefore, we can say that this law specifically states that the number of transistors on a CPU doubles approximately every two years. It may be said that, once nanotransistors as small as atomic particles are produced, speed and transistor count will no longer be sufficient parameters for determining CPU efficiency. It has already become very difficult to dissipate the heat generated by a high-speed CPU: the more transistors are packed into a CPU, the greater the power density that must be dissipated.
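A back-of-the-envelope sketch of the doubling rule (using the often-quoted figure of about 2300 transistors for the 1971 Intel 4004 as a reference point; the projection is purely illustrative):

```python
def transistor_count(year, ref_year=1971, ref_count=2300, doubling_period=2.0):
    """Transistor count projected by a simple 'doubling every two years' rule."""
    return ref_count * 2 ** ((year - ref_year) / doubling_period)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistor_count(year):.2e}")  # grows ~3 orders of magnitude every 20 years
```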
Nanotransistors are nano-scale transistors. Figure 4.2.2 shows the international
roadmap for the physical gate length of transistors over the years, showing the size trend of nanoscale transistors.

~ 148 ~
Figure 4.2.2 Physical gate length of transistors over the years. (ITRS 2.0 Official Publication, Illustration: Erik Vrielink, Source: IEEE Spectrum)

Electronic Nanosensors
An electronic sensor is a component that senses physical inputs and, after signal processing, produces an output on a display or in electronic form. An electronic nanosensor is produced at the nanoscale. Electronic sensors are based on transistor structures (FET nanosensors, etc.). The two main application sectors of electronic nanosensors covered in this chapter are healthcare and automotive.
Sensors used in the automotive sector can be categorized as inertial and motion sensors (level, torque, proximity, pressure, motion, and position sensors), driver assistance sensors (laser, radar, image, ultrasonic sensors), and environmental monitoring sensors (temperature, rain, humidity, gas, particulate matter sensors).
Sensors used in medicine, on the other hand, can be categorized as implantable
sensors (used in bionic eye; cochlear or auditory brainstem implants for ear;
electrical bone growth stimulators for orthopedics; pacemakers, artificial hearts or heart valves, and ventricular assist devices for cardiac care; neurostimulators for
neural/brain), wearable sensors, and other medical sensors. Wearable sensors

~ 149 ~
can be inserted in hearing aids or glasses, carried on the wrist (bands or watches)
or on the body (hats, socks or shoes) and as neck wear. Their use is expected to
increase in the healthcare sector as the trend towards IoT increases and evolution
in circuit miniaturization, low power microcontrollers, front-end amplification,
and wireless data transmission continues.
Most of the sensor types mentioned above can be used in many other sectors
such as consumer electronics, defense, and energy sectors.
Main sensor types in autonomous-self driving-cars are image devices (cameras,
infrared, and light-detection-and-ranging sensors), short and long range radar
sensors, laser and ultrasonic sensors.

4.2.2 Nanocomputers
Nanocomputers are computers, which are very much smaller than their
predecessors, the minicomputer and the microcomputer, and they are constructed
of nanoscale components, using nanotechnology. An entire nanocomputer itself
may be microscopic. A nanocomputer has its parts at nanoscale, each one of
them being a few nanometers in size.
Computer Engineering and Computer Sciences undergraduate students from all
over the world are also supposed to be technically-taught about the Von
Neumann concept of computer design and the so called Von Neumann
architecture. Von Neumann architecture of a computer is composed of three
main components, namely, a central processing unit (CPU-a microprocessor
currently), memory, and input/output (I/O) interfaces. There are several ways to
interconnect these components. The components are connected to each other
through a collection of signal lines known as bus lines. In a nanocomputer, the
components would be produced at nano-scale, using nanotechnologies and the
bus lines could be replaced by nanowires, or quantum buses, if quantum
computers are in question.
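To make this three-component picture concrete, here is a deliberately tiny toy model of a Von Neumann machine (the instruction set and program are invented for this sketch and do not correspond to any real processor):

```python
# Toy Von Neumann machine: a single memory holds both program and data,
# the CPU repeatedly fetches, decodes and executes instructions, and a
# print statement stands in for the I/O interface.
memory = [
    ("LOAD", 9),     # 0: copy the value at address 9 into the accumulator
    ("ADD", 10),     # 1: add the value at address 10
    ("STORE", 11),   # 2: write the accumulator back to address 11
    ("PRINT", 11),   # 3: send the contents of address 11 to the output device
    ("HALT", None),  # 4: stop
    None, None, None, None,   # 5-8: unused
    4, 38, 0,                 # 9-11: data region
]

acc, pc = 0, 0                       # accumulator and program counter
while True:
    op, addr = memory[pc]            # fetch and decode (program and data share the same memory/bus)
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "PRINT":
        print("OUT:", memory[addr])  # I/O interface; prints "OUT: 42"
    elif op == "HALT":
        break
```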
Nanocomputers are supposed to be operating in the following way: The
conventional computers with current processors will be replaced with
nanocomputers using nanoprocessors. These nanoprocessors will provide
higher performance and speed than the ones in conventional computers.
Researchers are experimenting to design better nanoprocessors by using
nanolithographic methods. Experiments are also performed for replacing the

~ 150 ~
CMOS components in conventional processors with nanowires. In nanocomputers, the FETs might be replaced by carbon nanotubes.
There are no well-known commercially available nanocomputers in existence
currently. They can only be built using specialized molecular manufacturing
techniques (the molecular fabricator, the molecular assembler are examples of
novel ongoing studies on these techniques). Molecular manufacturing techniques
are not discussed within the scope of this book.
Nanocomputers may be constructed using electronic, mechanical,
chemical/biochemical or quantum technologies, electronic nanocomputers being
the fastest. Electronic nanocomputers operate similar to current microcomputers.
The main difference between the two types is the physical scale.
Chemical/biochemical nanocomputers would store and process information in
terms of chemical structures and interactions. Biochemical nanocomputers
already exist in nature, such as trees or antibodies, but we cannot program them.
In order to develop a chemical nanocomputer, engineers must find ways to make
individual atoms or molecules perform controllable calculations and data
storage tasks. At this stage, it can be said that the development of a chemical
nanocomputer is similar to the process of genetic engineering. Genetic engineers
must find out how to get DNA (deoxyribonucleic acid-carrier of genetic
information) to alter an individual organism. Computer Sciences Engineers have
to make individual atoms or molecules perform controllable calculations and
data storage tasks, in order to develop a chemical nanocomputer. Mechanical
nanocomputers would use nanogears (gears at nano scale), rather than electronic
components, to encode information. Quantum computers would store data in the
form of atomic quantum states (or in quantum spin, a form of angular
momentum). Theoretically, the energy state of an electron within an atom can represent a binary one. Electron energy states are difficult to predict and control; hence, quantum technology currently suffers from instability problems.
It is unlikely that nanocomputers will be made out of semiconductor transistors
since they do not function well below approximately 50 nm. Current chips
produced by nanolithography could be considered "nanotechnology," because of
their transistors below 100 nm size scale. The process of nanolithography,
however, may not be capable of producing true nanocomputers with almost
ultimate precision. Nanotechnology will enable the creation of nanocomputers
that contain as many transistors per unit volume as the limits of the atomic
structure of matter permits. They will be far more efficient, producing much less
waste heat and allow for "stacking" of transistor elements into the third
~ 151 ~
dimension. Nanocomputers will be built such that every atom they are composed
of is utilized as a computational element.
Nanocomputers will play a very important role in high performance
management of “big data” (huge volume of structured or unstructured data) and
artificial intelligence, in addition to their superior supercomputational roles. The
massive increase in data being collected in almost every industry including
energy, transportation, manufacturing, medical, computing, telecommunications,
education, finance, public administration, and social networks implies that
nanocomputers will become almost vital for almost every technical,
administrative, and social content management in the future.

Nanotechnology Used In Computer Memory


Hard drives used as long-term memory in computers consume more power and have a higher chance of failure than solid-state memory, which has no moving parts.
For this reason, solid-state computer memory has become popular on smaller
computers, such as tablets. Solid-state computer memory occupies less space,
uses less battery power, and is less likely to be damaged if the device is dropped.
Nanotechnology is being used to improve the density of solid-state computer
memory.
Other selected computer memory applications under development, where
nanotechnology is used are listed below:

 Solid-state drives store information in a type of transistor-based memory called flash. Currently, nanolithography techniques are used to fabricate flash memory chips with feature sizes as small as 20 nm.
 Memristors (whose theory was proposed by Leon Chua in 1971) are basically a fourth class of fundamental circuit element, joining the resistor, the capacitor, and the inductor, and they exhibit their unique properties primarily at the nanoscale (the defining relation is sketched after this list). A memristor can be used as a single-component memory cell in an IC. By reducing the diameter of the nanowires used, researchers believe memristor memory chips can achieve a higher memory density than flash memory chips. In the field of RAM (Random Access Memory), which can be defined as the short-term memory of computers, ReRAM (Resistive Random Access Memory) and MRAM (Magnetoresistive Random Access Memory) are two successors of memristors currently undergoing research and development.
 An alternative method being developed to increase the density of
memory devices is to store information on magnetic nanoparticles.
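The defining relation behind the memristor mentioned above can be stated compactly (a textbook-style statement of Chua's definition, given here only for orientation):

$$ v(t) = M\big(q(t)\big)\, i(t), \qquad M(q) = \frac{\mathrm{d}\varphi}{\mathrm{d}q}, $$

where φ is the flux linkage, q the charge, and M(q) the memristance: a resistance whose value depends on the history of the charge that has flowed through the device, which is what makes a single memristor usable as a non-volatile memory cell.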
~ 152 ~
4.2.3 Nanoelectronics in Communication Systems
Currently, there is a world-wide effort to design and develop advanced wireless
communication systems to meet the ever increasing demand for faster and
reliable exchange of large amounts of data. These wireless systems (5G and beyond) may need thousands of transmitters and receivers to be concentrated at the base stations as well as in mobile user devices. On the other
hand, the use of massive number of antenna combinations requires new and
innovative ways to process the incoming and outgoing data. Towards this goal,
Prof. Hasan Şehitoglu has developed matrix-valued signal processing algorithms
[1]. For example, his matrix-valued fast-Fourier transform technique is ideally
suited for multi-input multi-output (MIMO) OFDM communication systems. In
practice, these algorithms and their physical implementations can only be
realized by employing compatible nano scale infrastructures and technologies
(such as molecular nanotransmitters and nanoreceivers using quantum dynamics,
carbon nanotube antennas, other types of nanoantennas, as well as handling
nanosensors’ signal processing and big data, nanorobots and artificial
intelligence technologies).
Typical “primary usage areas coming to mind” of nanoelectronics in
communication systems would be satellites, mobile phones, autonomous
vehicles (such as self-driving cars and drones), and, actually, all the rest of
mobile devices/things that will become actors of “Internet of Things” in the
future. (The Internet of Things (IoT), or more comprehensively, The Internet of
Everything (IoE), can be described as a system of computing devices, machines,
objects, and living beings that have the ability to transfer data over a network by
themselves, without requiring direct interference of humans or computers.)

4.2.4 Nanoelectronics in Medicine


Nanoelectronics, in the medical sector, is undergoing an exploitation phase that builds on the traditional strengths of the semiconductor industry: miniaturization and integration. Conventional electronics already has many applications in
biomedicine, such as medical monitoring of vital signals, biophysical studies of
excitable tissues, implantable electrodes for brain stimulation, pacemakers, limb
stimulation. The use of nanomaterials and nanoscale applications will bring a
further push towards implanted electronics in the human body (such as
wirelessly controlled orthopaedic nanoimplants) and bio/nanoelectronic devices

~ 153 ~
and nanosensors replacing and/or aiding human body functionalities and organs
(such as wirelessly powered hearing aids, and artificial smelling devices).
Some research advances in this area are:

 development of a nanobioelectronic system that triggers enzyme


activity,
 electrically triggered drug release from smart nanomembranes,
 artificial retina for color vision,
 nanogenerators to power self-sustained biosystems and implants,
nanocomputer chips inside living cells.
A popular subject area of medical nanoelectronics is brain research. Examples of studies in this research area are the use of carbon nanotube ropes to electrically stimulate neural stem cells and to repair the brain, as well as other advances in fabricating nanomaterial-neural interfaces for signal generation.

4.2.5 Research & Development Areas in Nanoelectronics


Current and emerging research&development areas that could affect the
roadmap for nanoelectronics can be consolidated under general categories as
follows:
 Beyond CMOS technologies (novel nanotransistors, nanoscale FETs, etc.)
 Quantum computing
 Molecular electronics
 Nanoscale computer memory technologies
 Integration technologies for nanoelectronic devices
 Modelling and simulation tools for nanoelectronic devices
 Characterization tools for nanoelectronic devices
 More Moore and More than Moore technologies (novel nanocomputer
architectures, MEMS-NEMS and related active and passive
components’ transformation/integration)
 Connectivity of nanoelectronic devices (Connectivity challenges)
 Electronic Nanosensors Technologies

~ 154 ~
Conclusions
The following sentence is taken from the Mid-term Roadmap published within
the NEREID project (ICT-CSA-685559), which is supported within the
Research and Innovation Programme Horizon 2020 by the EU [8]:
“Understanding the dependencies between short/medium term (e.g. More Moore
and More than Moore) and long/very long term (e.g. Beyond CMOS) activities is
also very important to speed-up technology transfer between academia and
industry using disruptive technologies leading to possible new large future
markets.”
The sentence above describes very well the dependence of future technologies’
positive contribution to society on research&development success stories in
novel subject areas such as nanoelectronics. The “More Moore”, “More than
Moore" and "Beyond CMOS" concepts are mentioned in earlier parts of this chapter.
Another highlight from the NEREID project Mid-term Roadmap can be deduced
from the following paragraph:
“The development and production of the high-end digital circuits are
concentrated in a few factories outside Europe. The European microelectronics
industry still relies on production of circuits but the role of MEMS and ASIC
applications is growing. This development opens up a possibility to shift the
focus to novel intelligent sensing and distributed computation applications,
which need a new generation of skilled scientists and engineers for hardware,
software, materials and process development. This will then most likely impact
in a positive manner economy, employment and academic curricula.”
The above paragraph can be adapted to current technological trends: it implies that, during the shift from microelectronics to nanoelectronics, focus should also be directed to novel intelligent sensing and distributed computation applications, and to training the related hardware, software, materials and process development scientists and engineers.
In conclusion, nanoelectronics, which is one of the most important foundations for future nanotechnological developments, is a vast research area, and it also requires a vast amount of investment and coordination in order to develop nanoelectronic applications that will contribute to the positive progress of humankind.

~ 155 ~
References
[1] Sehitoglu, Hasan, 'Matrix-valued methods and apparatus for signal
processing', US Patent No:7296045, November 13, 2007.
[2] Commercial scale production of inorganic nanoparticles, April 2009,
International Journal of Nanotechnology 6(5):567-578,DOI
10.1504/IJNT.2009.024647, T. Tsuzuki
[3] The Future of Integrated Circuits: A Survey of Nano-electronics
Michael Haselman and Scott Hauck, Department of Electrical
Engineering, University of Washington, Seattle, WA
haselman@ee.washington.edu,hauck@ ee.washington.edu
[4] International Technology Roadmap for Semiconductors (ITRS) 2.0, 2015 Edition, Executive Report
[5] Karkare, M. (2010). Nanotechnology : Fundamentals and applications
(2nd repr. ed.). New Delhi: I. K. International Publishing House Pvt.
[6] MEMS and NEMS: Systems, Devices, and Structures, S. E. Lyshevski, ISBN 9781420040517, 2002, CRC Press
[7] Nano Lithography, Stefan Landis, ISBN 978-1-118-62170-7, Mar 2013, Wiley-ISTE
[8] The NEREID Nanoelectronics Roadmap for Europe,
https://www.nereid-h2020.eu/content/nereid-mid-term-roadmap-
download
[9] https://irds.ieee.org/, IEEE International Roadmap for Devices and
Systems
[10] https://spectrum.ieee.org, First Graphene Integrated Circuit, By Neil
Savage, 9 Jun 2011
[11] http://www.understandingnano.com
[12] https://phys.org
[13] http://www.thenanoage.com/nanocomputers.htm

~ 156 ~
4.3 APPLICATIONS of NANOBIOTECHNOLOGY and
NANOBIOMEDICINE

Arzu YAKAR
ayakar@aku.edu.tr
AFYON KOCATEPE UNIVERSITY
INTRODUCTION
Over the past 25 years, advancements in technology have enabled nanoscale matter to be accurately detected and controlled. Nanotechnology is considered an interdisciplinary science because of its impact on many scientific fields, from physics to chemistry, biology, etc. As the size of a material changes between 1 nm and 1000 nm, the properties and behavior of different sizes of the same material differ, and the material exhibits exceptional properties that are unique to its size. These exceptional properties arise from the high surface-to-volume ratios of the nanoparticles forming the material, the dominance of surface energy interactions over bulk and chemical energies, the interaction of light, whose wavelength is on a comparable scale, with the nanoparticles, and the interactions between the particles. A better understanding and explanation of nanoscale phenomena enables the development of much new equipment, new materials and nanoscale devices. These new opportunities opened up by nanotechnology have led to a better understanding of living systems in the biological and life sciences and to the discovery of the unique and powerful properties of these systems. Thus, nanotechnology has allowed new advancements that can be important in the field of medicine to be introduced rapidly. The applications of nanotechnology in nanomedicine are categorized into three main topics: diagnostic applications, drug delivery systems, and implant and prosthesis applications (Jahangirian et al. 2017).

~ 157 ~
4.3.1 Use of Nanomaterials in Diagnostic Applications
Nanodiagnostic technologies such as nanoscale imaging, nanoparticle biolabels, biochips/microarrays, nanoparticle-based nucleic acid diagnostics, nanoproteomics-based diagnostics, biobarcode assays, DNA nanomachines, nanoparticle-based immunoassays and nanobiosensors have become increasingly popular in medical diagnosis applications (Bellah et al. 2012, Jackson et al. 2017, Baetke et al. 2015, Rajasundari and Ilamurugu 2011). Since cellular components have sizes at the nanoscale, the technological equipment that tracks or detects these molecules must also be nano-sized. Protein nanobiochips and nanofluidic arrays can be given as examples of biochip and microarray devices (Figure 4.3.1) that provide these opportunities. These chips can be designed to interact with cellular components with high specificity. The most important and most promising of these nanofluidic arrays are the devices that perform analysis and isolation of specific molecules such as DNA. These capabilities have enabled the development of new detection applications for cancer.
Such device was possible with the construction of silicon nanowires that are
placed onto a substrate or chip, prepared by using standard photolitographic or
denudation techniques, following the formation of a chemical oxidation step that
transforms nanowires into hollow nanotubes. While these nanotubes, used for
biomolecule isolation purposes, have 50 nm diameter, it became possible to have
diameters as small as 10 nm (Bellah et al. 2012, Jackson et al. 2017, Baetke et al.
2015, Rajasundari and Ilamurugu 2011). The device for the identification of the
DNA molecules, which is designed based on the principle of changing electrical
current when the molecule enters the nanotube, is made from silicon nanotubes
that contain 2 parallel microfluid channels. The use of nanofluids in medicine is
a promising development for many clinical trials from personalized medicine to
pathogen detection and pharmaceutical development (Bellah et al. 2012, Jackson
et al. 2017, Baetke et al. 2015, Rajasundari and Ilamurugu 2011).

Figure 4.3.1. Microarray system (The image is published on
http://slideplayer.com/slide/4973622/ and retrieved from Google Images.)

Following electrophoresis and mass spectrometry, which have been used for many
years in protein identification, devices developed for protein microarray
analysis, capable of identifying thousands of protein molecules within a very
short time, have achieved great success in medicine (Figure 4.3.2). The method
is based on immobilizing various proteins, such as antibodies and enzymes, on a
glass slide as an array (Bellah et al. 2012, Jackson et al. 2017, Baetke et al.
2015, Rajasundari and Ilamurugu 2011). The sample to be analyzed is applied to
the glass slide, binds to the corresponding antibody on the chip, and is then
analyzed. As rapid developments in nanotechnology enable microarrays to identify
ever smaller molecules, personalized treatments are expected to increase as well.

Figure 4.3.2. Microarray analysis system (The image is published on
https://en.wikipedia.org/wiki/DNA_microarray and retrieved from Google
Images.)

Biosensors are devices that operate according to biochemical mechanisms
(Bellah et al. 2012, Jackson et al. 2017, Baetke et al. 2015, Rajasundari and
Ilamurugu 2011). Biosensors consist of two parts: a biological part responsible
for sampling and a physical part that converts the input signals and outputs the
sampling results. The nanomaterials termed nanosensors are sensitive chemical
and biological sensors. Their ability to detect differences in the volume,
concentration, location, electrical and magnetic forces, pressure and
temperature of certain cells or regions in the body makes them very important in
the field of medicine. Quantum dots can be given as examples of nanosensors
(Figure 4.3.3). The fluorescence of cadmium selenide quantum dots injected into
the body enables the physician to easily see diseased cells (such as cancer
cells). However, the toxicity of quantum dots such as cadmium selenide restricts
the use of such nanosensors; therefore, scientists continue their efforts to
develop biocompatible quantum dots that retain strong fluorescence. Quantum dots
are especially promising for the detection of specific DNA damage.
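As a supplementary note (not drawn from the cited references), the size
dependence of quantum dot emission is often estimated with the effective-mass
(Brus) approximation, in which the optical gap of a dot of radius R is
approximately

\[
E(R) \approx E_{g} + \frac{\hbar^{2}\pi^{2}}{2R^{2}}
\left(\frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}}\right)
- \frac{1.8\,e^{2}}{4\pi\varepsilon\varepsilon_{0}R},
\]

where E_g is the bulk band gap and m_e*, m_h* are the effective electron and
hole masses. The 1/R² confinement term explains why smaller dots emit at
shorter (bluer) wavelengths, which is the basis of their use as size-tunable
fluorescent labels.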

Figure 4.3.3. Quantum dots (The images are published on https://www.indiamart.com/proddetail/inp-zns-quantum-dots-17679035648.html, https://www.theguardian.com/science/small-world/2013/aug/13/mother-nature-quantum-dots and retrieved from Google Images.)

Just like quantum dots, nanoparticles are also widely used for medical detection
and imaging (Bellah et al. 2012, Jackson et al. 2017, Baetke et al. 2015,
Rajasundari and Ilamurugu 2011). The most striking examples are gold and
magnetic nanoparticles. Magnetic nanoparticles are used in magnetic resonance
imaging (MRI) as well as carriers for targeted drug delivery (Figure 4.3.4).
Hyperthermia (Figure 4.3.4) is a fast-developing method in cancer treatment. It
takes advantage of the higher sensitivity of tumor tissue to heat. Magnetic
hyperthermia can minimize side effects by heating only the desired part of the
organism, including tumors located deep inside the body. It involves delivering
magnetic particles to the target region and heating them remotely with an
alternating magnetic field. Controlling the uneven heating of the tumor region
with current magnetic particles still presents difficulties (Bellah et al. 2012,
Jackson et al. 2017, Baetke et al. 2015, Rajasundari and Ilamurugu 2011); this
can lead to local overheating and necrosis. The production of magnetic materials
with a Curie temperature of 42-43°C would enable this technology to be used
safely in tumor treatment alongside radiation therapy, and such materials are
being developed by many scientists.
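The role of the Curie temperature as a built-in thermostat can be illustrated
with a minimal lumped-parameter simulation. The sketch below is purely
illustrative, and all parameter values are hypothetical rather than taken from
the cited studies: heating power is switched off once the particle temperature
exceeds the Curie point, so the tissue temperature settles near 42-43°C instead
of running away.

    # Minimal sketch of self-regulating magnetic hyperthermia (illustrative values only).
    T_body = 37.0    # baseline tissue temperature, deg C (assumed)
    T_curie = 43.0   # Curie temperature of the hypothetical magnetic material, deg C
    P_heat = 0.5     # heating power while the material is ferromagnetic, W (assumed)
    k_loss = 0.05    # heat loss coefficient to surrounding tissue, W/deg C (assumed)
    C_heat = 4.0     # heat capacity of the heated region, J/deg C (assumed)

    T = T_body
    dt = 1.0         # time step, s
    for step in range(3600):
        # Above the Curie temperature the particles lose their magnetization,
        # so they stop absorbing power from the alternating field.
        power = P_heat if T < T_curie else 0.0
        dT = (power - k_loss * (T - T_body)) / C_heat
        T += dT * dt

    print(f"Temperature after 1 h: {T:.1f} deg C")

Running this toy model shows the temperature rising toward, and then hovering
just below, the Curie point, which is exactly the self-limiting behavior the
text describes.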
Metallic nanoparticles such as gold and silver are used for signal amplification
in many biodiagnostic devices (Figure 4.3.5).

Figure 4.3.4. The usage of magnetic nanoparticles for diagnostic purposes (The
images are published on
http://braininbrief.tumblr.com/post/7302838178/burning-tumor-in-a-magnetic-
field, https://phys.org/news/2009-08-nanoparticles-blood-brain-barrier-enable-
brain.html and retrieved from Google Images.)

Figure 4.3.5. The use of gold nanoparticles in the biomedical field (The image is
published on http://braininbrief.tumblr.com/post/7302838178/burning-tumor-in-
a-magnetic-field and retrieved from Google Images.)

Gold nanoparticles are used in various optical and electrical assays; for
example, their electrical properties (Figure 4.3.6) have been exploited in the
development of a piezoelectric biosensor for the real-time detection of
food-borne pathogens.

Figure 4.3.6. Use of gold nanoparticles for biosensor purposes (The image is
published on https://www.cd-bioparticles.com/t/Properties-and-Applications-of-
Gold-Nanoparticles_59.html and retrieved from Google Images.)

For centuries, elemental silver and silver salts (Figure 4.3.7) have been known
for their curative and protective abilities as antimicrobial agents in
healthcare. The antimicrobial activity of silver salts and their complexes
(ionic silver) is generally based on the binding of the metal ions to various
biomacromolecules. Cationic silver targets and binds negatively charged
components of proteins and nucleic acids, causing structural changes and
deformations in bacterial cell walls, membranes and nucleic acids. Silver ions
generally interact with a series of electron-donating functional groups such as
thiols, phosphates, hydroxyls, imidazoles, indoles and amines. Those that bind
to cell-surface components can interrupt bacterial respiration and adenosine
triphosphate (ATP) synthesis. In addition, silver ions have been shown to block
the respiratory chain of microorganisms at cytochrome oxidase and nicotinamide
adenine dinucleotide (NADH)-succinate dehydrogenase. Combining silver
nanoparticles with water-soluble biopolymers will lead to the production of new
antimicrobials. Accordingly, various natural polymers such as acacia gum,
starch, gelatin, sodium alginate and carboxymethyl cellulose are used to prepare
biocompatible polymeric silver nanocomposites. Chitosan, a natural polymer, is
the second most abundant structural polysaccharide in nature after cellulose.
Chitosan interacts readily with bacteria and binds to DNA and to most
glycosaminoglycans and proteins, thus increasing the antimicrobial effect of
silver nanoparticles (Bellah et al. 2012, Jackson et al. 2017, Baetke et al.
2015, Rajasundari and Ilamurugu 2011).

Figure 4.3.7. The use of silver nanoparticles in biomedical applications (The
image is published on
https://www.cell.com/trends/biotechnology/fulltext/S0167-7799(16)00040-8 and
retrieved from Google Images.)

Nanobiotechnological studies in food mainly concern antioxidants,
antimicrobials, biosensors and packaging. Bio-based materials such as renewable
and biodegradable bionanocomposite films have significant potential in food
packaging applications. The medical, pharmaceutical and cosmetics industries
also take advantage of food-derived nanoparticles to improve the properties of
their products (Bellah et al. 2012, Jackson et al. 2017, Baetke et al. 2015,
Rajasundari and Ilamurugu 2011).

4.3.2 Use of Nanomaterials in Drug Delivery Applications


The use of micro- and nanoparticles in biomedicine and drug delivery has many
advantages over conventional systems. Enhanced drug delivery, lower doses in
drug carrier systems, increased drug efficacy through protection against the
biological medium, and minimized drug-related side effects are among the most
important advantages. Moreover, because developing a new medicine is far more
expensive than improving drug carrier systems, the use of drug delivery systems
has increased even further. In addition to the emulsions, suspensions and
liposomes that have been used for many years, there is growing interest in
nanosystems smaller than 100 nm (Suri et al. 2007, Kawadkar et al. 2011, Emeje
et al. 2012, Jahangirian et al. 2017).
Liposomes (Figure 4.3.8) are small spherical vesicles in which one or more
aqueous compartments are completely enclosed by molecules possessing hydrophilic
and hydrophobic functions. Liposomes can consist of one or several bilayers, and
their composition varies depending on size, surface charge and preparation
method. They are commonly used as model cells or as carriers for various
bioactive agents such as drugs, vaccines and cosmetic products (Suri et al.
2007, Kawadkar et al. 2011, Emeje et al. 2012, Jahangirian et al. 2017).

Figure 4.3.8. Structure of liposomes (The image is published on https://en.wikipedia.org/wiki/Liposome and retrieved from Google Images.)

Liposomal drugs have significant pharmacokinetic advantages over free drugs in
solution. Liposomes are also effective in reducing systemic toxicity and in
protecting the encapsulated drug from early degradation after administration.
They can be coated with polymers such as polyethylene glycol (PEG); in this case
they are called PEGylated or stealth liposomes, and in that form they exhibit a
long half-life in the blood circulation. In addition, liposomes can be linked to
antibodies and ligands to improve target specificity. Liposomes also function as
carriers for genes or DNA segments (Suri et al. 2007, Kawadkar et al. 2011,
Emeje et al. 2012, Jahangirian et al. 2017).
Nanoparticle drug delivery systems (Figure 4.3.9) are nanometric carriers,
generally smaller than 1000 nm, used to deliver a drug or biomolecule to the
required site. They can take many shapes, such as spheres, capsules and
micelles. Thanks to their ultra-small volumes, nanoparticle drug delivery
systems can pass through the smallest capillary vessels and remain in the blood
circulation for relatively long times. In this way they can reach target organs
and penetrate into cells; they can be designed according to the properties of
the target organ or diseased cells, and because they allow lower drug doses they
help eliminate side effects. Biomolecules such as polypeptides, proteins,
nucleic acids and genes can also be loaded into the carrier system in addition
to drugs: while the term drug delivery is used for drug-loaded nanoparticles,
gene delivery is used for carriers loaded with a gene. In recent years,
nanoparticle drug delivery systems have shown great potential in biological,
medical and pharmaceutical applications. Drug delivery studies with
nanoparticles focus on finding carrier-drug combinations with a suitable drug
release rate, determining the surface modifications needed for the nanoparticles
to reach the target organ, optimizing nanoparticle preparation to provide
suitable drug release, and determining the in vitro and in vivo behavior of the
synthesized nanoparticles (Suri et al. 2007, Kawadkar et al. 2011, Emeje et al.
2012, Jahangirian et al. 2017).

Figure 4.3.9. Nanoparticle drug delivery systems (The image is published on
https://www.youtube.com/watch?v=TDvhVSXxnjw and retrieved from Google
Images.)
The nanoparticles used in drug delivery are solid colloidal particles, with
sizes between 1 nm and 1000 nm, to which various macromolecules are attached or
covalently bonded and onto which therapeutic drugs can be adsorbed. Probably the
most common materials are aliphatic polyesters such as poly(lactic acid) (PLA),
the more hydrophilic poly(glycolic acid) (PGA) and their copolymers
poly(lactide-co-glycolide) (PLGA). The degradation and drug release rates of
these polymers vary from days (PGA) to months (PLA) (Suri et al. 2007, Kawadkar
et al. 2011, Emeje et al. 2012, Jahangirian et al. 2017). In addition, the use
of dendrimers, which provide smaller particle sizes, as drug carriers has become
one of the most studied fields of biomedical science over the last decade.
Starch, chitosan and gelatin are among the most common natural polymers studied
as drug carriers. The efficacy of nanoparticles in drug applications can vary
with many factors, such as physical and biological stability, tolerability of
the components, simplicity and scalability of the production process, and the
freeze-drying and sterilization steps (Suri et al. 2007, Kawadkar et al. 2011,
Emeje et al. 2012, Jahangirian et al. 2017).
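As an illustrative aside (not part of the cited work), the difference between a
fast-degrading carrier such as PGA and a slow-degrading one such as PLA can be
sketched with a simple first-order release model, where the rate constant k
sets the time scale; the rate constants below are hypothetical and chosen only
to contrast "days" with "months".

    import math

    def fraction_released(t_days, k_per_day):
        """First-order release model: fraction of drug released after t days."""
        return 1.0 - math.exp(-k_per_day * t_days)

    # Hypothetical rate constants illustrating 'days vs months' release time scales.
    k_fast = 0.3    # per day -> about 95% released in roughly 10 days
    k_slow = 0.02   # per day -> about 95% released in roughly 150 days

    for day in (1, 7, 30, 90):
        print(f"day {day:3d}: fast carrier {fraction_released(day, k_fast):.0%}, "
              f"slow carrier {fraction_released(day, k_slow):.0%}")

Real release profiles depend on polymer degradation, diffusion and particle
geometry, so such a single-exponential model is only a first approximation.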
The future of drug delivery systems depends on increasing the specificity of
delivery systems for target cells, more sensitive adjustment of bioavailability of
active agents in the target tissue and equipping the carriers loaded with active
agents with properties that will increase their penetration ability into the cell.

4.3.3 Use of Nanomaterials in Implant and Prosthesis
Applications
Nanotechnology enables the creation of biocompatible and biodegradable materials
and systems for reconstructing or replacing damaged tissues with implants and
prostheses. The regeneration of damaged tissues can be encouraged with new,
programmed cells and scaffolds that trigger cell recruitment and growth.
However, there are many injuries and conditions in which tissue loss cannot be
remedied by cell growth or living tissue transplantation; the medical treatment
of such cases requires advanced artificial materials. In many treatments,
damaged teeth, bones or connective tissues are replaced with artificial forms
that are relatively similar in functionality and characteristics. The loss of
neural or hormonal functions must also be compensated in treatments that require
equipment such as a cardiac pacemaker or an insulin device. With today's
technology, vital organs such as the kidneys and heart can be temporarily
replaced with artificial devices. Recent nanotechnological advancements have
also enabled the research and rapid development of artificial limbs (feet, legs,
arms, hands, etc.). In addition, the performance of prostheses aimed at sensory
functions (especially hearing and vision) has been improving rapidly in recent
years (Thakral et al. 2014, Torrecillas et al. 2009, Thomas et al. 2014, Tibbals
2011).
Many implant and prosthesis materials used in medicine (Figure 4.3.10) are
expected to take on the natural properties of the organs they replace, together
with characteristics such as biocompatibility, biodegradability, superior
mechanical resistance, elasticity and porosity. Thanks to nanotechnology,
biomedical implants, devices and artificial organs with the desired properties
can be produced using biomimetic designs. The development of nanomaterials has
been instrumental in producing such equipment. New materials are being
developed, and existing natural materials such as biopolymers are subjected to
surface modification processes, especially for tissue repair and replacement
(Thakral et al. 2014, Torrecillas et al. 2009, Thomas et al. 2014, Tibbals 2011).

Figure 4.3.10 Examples of implants and prostheses (The images are published
on https://www.pinterest.com.au/pin/775322892070359551/,http://www.o-
tec.info/wordpress/prothesen/ and retrieved from Google Images.)

Nanotechnology provides control over properties such as wettability, porosity,
roughness and chemical affinity in order to optimize the interaction with
proteins for cell adhesion. Aluminum and titanium oxides, hydroxyapatite, carbon
nanofibers and nanotubes, titanium metal and various alloys, polymers, bioactive
glass and ceramic-polymer composites are among the nanostructured implant
materials (Figure 4.3.11). Nanotechnology methods help produce materials that
are tailored for compatibility with specific cells and tissues.

Figure 4.3.11 Examples of the development of prosthetics with nanotechnology (The images are published on https://www.dutchcowboys.nl/technology/deze-prothese-geeft-je-een-derde-duim-en-is-helemaal-cool, http://fortune.com/2016/01/25/consumer-wearable-powering-next-gen-prosthetics/ and retrieved from Google Images.)

The extracellular matrix of many tissues is characterized by collagen and
elastin nanofibers. Nanofiber materials produced from biocompatible polymers
have been shown to support cell attachment and the formation of new cells in
in vitro and in vivo applications.
Various nanotechnology methods and materials are used to mimic the nanofibrous
extracellular matrix of epithelial, bone and connective tissue. Today, methods
such as electrospinning (Figure 4.3.12) and thermally induced phase separation
are preferred for producing the nanofiber-structured scaffolds used in tissue
engineering. The performance of these scaffolds can be enhanced through surface
modification so that they better support cell growth and tissue formation
(Thakral et al. 2014, Torrecillas et al. 2009, Thomas et al. 2014, Tibbals 2011).

Figure 4.3.12. Electrospinning method (The image is published on https://www.semanticscholar.org/paper/Electrospinning-protein-nanofibers-to-control-cell-Nwachukwu/702a38e815c8ca018af8640994b126187bb8aa82 and retrieved from Google Images.)

The incorporation of carbon nanotubes (Figure 4.3.13) provides structural
reinforcement for tissues such as bone and also supports cell growth. Carbon
nanotubes can likewise be used to add important properties such as electrical
conductivity to the structure. In addition, their use as scaffolds for neural
growth is very promising (Figure 4.3.13).

Figure 4.3.13 Carbon nanotubes and body oriented application (The images are
published on https://phys.org/news/2015-03-carbon-nanotubes-polymers.html,
https://www.youtube.com/watch?v=7XarH4knurY and retrieved from Google
Images.)

Thin titanium dioxide nanofilms are produced as scaffolds for cellular
structures because of their microporosity. Titanium (Figure 4.3.14) is used
because it has low toxicity and does not cause bleeding in the body.
Electrochemical techniques can be used to precisely control the pore size,
density and thickness of titanium dioxide films. Their surfaces can be coated
with biomaterials in order to increase biocompatibility and bioactivity (Thakral
et al. 2014, Torrecillas et al. 2009, Thomas et al. 2014, Tibbals 2011).

Figure 4.3.14. Titanium dioxide prosthesis (The image is published on http://nchsbands.info/new/titanium-joint-replacement.html and retrieved from Google Images.)

The functionality of artificial pumps for cardiac assistance or insulin release,
kidney dialysis units and other biomechanical organs or organ-assist units
depends on the success of tissue engineering and prostheses. Medical and
surgical robots are therefore also indirectly affected by nanotechnology.
Through these effects, nanodevices and nanotechnology-enabled microdevices have
increased the performance and functionality of the information technologies,
communications, sensors, actuators and controllers that are the core components
of the control strategies and automatic feedback loops required by both robotic
and advanced prostheses (Thakral et al. 2014, Torrecillas et al. 2009, Thomas et
al. 2014, Tibbals 2011).
Prostheses include artificial devices developed for neural stimulation,
replacement of lost motor functions, replacement of lost sensory functions, or a
combination of all three; neural prostheses are an example. Neural prostheses
(Figure 4.3.15) come in two types: motor and sensory. Sensory neural prostheses
are devices that convert external stimuli such as sound or light into signals
transferred to the brain through neural pathways, directly or indirectly, and so
restore a damaged or lost sensory ability. Glasses and external hearing aids are
prostheses, but sensory neuroprostheses such as the cochlear implant or the
artificial retina are active devices that deliver electrical stimuli to the
nervous system. Motor neuroprosthesis devices receive signals from the brain or
motor nerve pathways and convert this information into the control of an
actuator that serves the user's purpose. In order to interact fully with the
brain, a neuroprosthesis not only needs to receive signals from the brain but
also needs to send back sensory information as feedback; this feedback can be
visual, auditory, kinesthetic or tactile (Thakral et al. 2014, Torrecillas et
al. 2009, Thomas et al. 2014, Tibbals 2011).

Figure 4.3.15. Working principle of neural prostheses (The image is published
on http://neurotechzone.science/posts/874 and retrieved from Google Images.)

Summary
The restoration of tissue structure and function through artificial materials
and techniques is gaining new possibilities through nanotechnology applications.
When nanoscale approaches are applied, drug delivery systems, implants, tissue
engineering and prostheses converge. Interaction with tissue at the nanoscale
involves unique surface and energetic nanoscale effects in addition to signaling
at the biomolecular level.
Nanotechnology has significant impacts on nano-engineered bioactive materials
for implants, the encapsulation of living cells, tissue implants for immune
protection, and miniaturization and power engineering for prosthetic devices.
Preliminary studies on the efficacy and safety of many of these approaches may
have been completed; however, the results of these studies must still be
translated into practice through clinical trials and experience.
References
Baetke SC, Lammers T, Kiessling F. (2015) Applications of nanoparticles for
diagnosis and therapy of cancer. British Journal of Radiology, 88: 20150207.
Bellah, Md. M., Christensen, S.M., and Iqbal, S. M. (2012) Nanostructures
for Medical Diagnostics, Journal of Nanomaterials, Article ID 486301, 21
pages.
Emeje, M.O., Obidike I.C., Akpabio, E.I. and Ofoefule, S.I. (2012) Recent
Advances in Novel Drug Carrier Systems: Chapter 4: Nanotechnology in
Drug Delivery, http://dx.doi.org/10.5772/51384.
Jackson, T.C., Patani, B.O. and Ekpa, D.E. (2017) Nanotechnology in
Diagnosis: A Review. Advances in Nanoparticles , 6, 93-102.
Jahangirian H., Lemraski, E.G., Webster, T.J., Rafiee-Moghaddam R.,
Abdollahi Y. (2017) A review of drug delivery systems based on
nanotechnology and green chemistry: green nanomedicine, International
Journal of Nanomedicine, 12, 2957–2978.
Kawadkar, J., Chauhan, M.K., Maharana, M. (2011) Nanobiotechnology:
Application of Nanotechnology in Diagnosis, Drug Discovery, and Drug
Development, Asian Journal of Pharmaceutical and Clinical Research, 4, 23-
28.
Rajasundari, K. and Ilamurugu, K. (2011) Nanotechnology and Its Applications
in Medical Diagnosis, Journal of Basic and Applied Chemistry, 1(2), 26-32.
Suri, S.S., Fenniri, H., and Singh, B. (2007) Nanotechnology-based drug
delivery systems, Journal of Occupational Medicine and Toxicology, 2:16.
Thakral, G.K., Thakral, R., Sharma, N., Seth, J., Vashisht, P. (2014)
Nanosurface-The Future of Implants, Journal of Clinical and Diagnostic
Research, 8(5): ZE07-ZE10.
Thomas, B., Mathew C.A., Muthuvignesh, J. (2014) Nanotechnology-
Applications in Prosthodontics: A Literature Review, Journal of Orofacial
Research, 4, 103-110.

Tibbals H.F. (2011) Medical Nanotechnology and Nanomedicine, CRC Press
Taylor & Francis Group.
Torrecillas, R., Moya, J.S., Díaz, L.A., Bartolomé, J.F., Fernández, A., Lopez-
Esteban, S. (2009) Nanotechnology in joint replacement, WIREs
Nanomedicine and Nanobiotechnology, 1, 540-552.

4.4 TEXTILE APPLICATIONS

Evren ÇAĞLARER
ecaglarer@gmail.com
KIRKLARELİ UNIVERSITY

INTRODUCTION
The unique and novel properties of nanomaterials and their economic potential
draw the attention not only of scientists and researchers but also of industry.
Trends in the global textile industry, competitive prices, the number of
competitors and low profit margins make it hard for the traditional textile
sector to survive. To survive under these conditions of the global textile
market, customers' demands for the development of new products and materials
must be met. Although nanotechnology is still in its infancy, it offers a very
promising and bright future for the textile industry by improving textile
performance.
When the size of a material is reduced to the nanometer range, it changes and
gains very different properties. Textile technology can utilize nanotechnology
in many areas, from specialized textile products, medical textiles and
flame-retardant properties to textiles suited to environmental and washing
conditions, and convenient dyeing and finishing processes.
Textiles produced with different nanofiber manufacturing or finishing processes
show very important and special behaviors such as air permeability, water
repellency, crease resistance, wrinkle recovery, flame retardancy, anti-static
properties, UV protection, waterproofness, color changing and anti-bacterial
activity. In addition, "smart" clothes are produced by making textiles
intelligent thanks to these advanced material technologies.

Keywords: Nano textiles, Nano fibers, Smart textiles

Designed by Ali Tüydür

Source: (a)http://www.nanowerk.com/spotlight/sptid=42713.php#ixzz42LCmRRGH&i,
(b) https://tr.pinterest.com/pin/514958538628662382/?lp=true

4.4.1 Smart Textiles Produced with Nanotechnology


Smart textiles are designed and manufactured to incorporate technologies that
give the user increased functionality. Fabrics with a wide variety of functions
that we could never have imagined can be obtained by using nanomaterials. A
water-repellent finish, for example, prevents a shirt on which a glass of juice
has been spilled, or muddy pants, from getting dirty.

These textiles have many potential applications, such as communicating with
other devices, managing energy, converting into other materials and protecting
the user from environmental hazards. In recent years, research and development
on wearable, textile-based personal systems for health monitoring, protection
and security has attracted great attention. With nanosensors in the T-shirts we
wear, we can check our heartbeat, body temperature and blood parameters
regularly, and in case of an unwanted situation the sensors can inform us, or
our physicians, via wireless communication.

Source: https://www.hexoskin.com/blogs/news/tagged/wearable-technology

Adding different properties to the nanoscale materials used in textiles leads to
significant developments. For example, coating sock yarn with silver
nanoparticles eliminates bacteria and microbes from the sock and thus prevents
odors. Textiles produced from hydrophobic fabrics resist dirt and therefore
minimize the need for washing and ironing. In this way water consumption will be
reduced, and perhaps washing machines will even become history at some point in
the future.

Source: Bahir Dar University EiTEX, Nano-Technology Application in Textiles, by Bademaw Abate

Color-changing fabrics provide a camouflage effect in the field, and
nanotextiles with UV protection and high abrasion resistance for extreme
environmental conditions are being utilized in military applications.
By embedding flexible and washable nanosensors and devices into fabrics, the
clothes we use will gain whole new dimensions; they will be able to see, hear,
feel, issue commands and generate energy.

Passive Smart Textiles


The first generation of smart textiles provides additional passive properties to
the textile regardless of changes in the environment. For example, a highly
insulating layer maintains its level of insulation regardless of the external
temperature. Other examples come from a wide variety of fields, from
anti-microbial and anti-odor to anti-static and bulletproof textiles.

Source: a) https://www.cnet.com/news/tommy-hilfiger-launches-solar-power-
jackets-to-charge-your-phone/ ,
b) https://www.polyu.edu.hk/ife/corp/en/publications/tech_front.php?tfid=314,
c) https://www.behance.net/gallery/28033423/Liquid-MIDI

Active Smart Textiles


Both actuators and sensors take part in the second generation. Textiles that
automatically adapt their functions to the changing environment are called
active smart textiles. Examples include electronically heated suits and fabrics
that are color-changing (camouflage), water-resistant, vapor-permeable
(hydrophilic/non-porous), heat-storing, thermo-regulating or vapor-absorbing in
response to heat.

Ultra Smart Textiles


Ultra smart textiles are third-generation smart textiles that can spontaneously
detect, react and adapt to environmental conditions or stimuli.
Ultra smart or intelligent textiles contain a unit that operates like a brain,
with cognitive, reasoning and activating capacity. Through the successful
combination of traditional textile and clothing technology and materials science
with fields such as structural mechanics, sensor and actuator technology,
advanced processing technology, communication, artificial intelligence and
biology, ultra smart textile manufacturing is now a reality.

Source: https://www.nanowerk.com/spotlight/spotid=39169

New fiber and textile materials and miniature electronic components enable the
preparation of smart textiles, therefore, significantly useful smart clothes are
now being produced. These smart clothes are worn as ordinary clothes and vary
depending on the designed applications.

4.4.2 Nano Textile Production Methods


Products obtained as a result of nanotechnology applications in textiles are
called nano textiles. This definition refers to all textile surfaces obtained
through nanotechnological applications.
There are two main approaches in the manufacturing of nano textiles:
a) Use of nanotechnology during fiber and yarn production
b) Use of nanotechnology in the textile finishing and sizing processes applied
to the material
4.4.3 Use of Nanotechnology during Fiber and Yarn Production
The building blocks of all natural and synthetic textile products are molecules.
These molecules are lined up to form fibers, and the fibers are used to make
yarn. The permanent way to improve the usage performance of a fabric is to
reinforce the fibers inside the fabric at the molecular level.
Nanomaterials are materials with sizes of approximately 100 nm and below that
exhibit unique properties due to their size. Nanomaterials are classified into
two groups, organic and inorganic. Inorganic nanomaterials are nanostructures
made up of elements other than carbon, whereas organic nanomaterials contain
carbon in their composition. Nanomaterials are also usually classified by
dimensionality. According to this classification;
• 0-D nanomaterials (nanoparticle)
• 1-D nanomaterials (nanotube, nanowire, nanofiber)
• 2-D nanomaterials (nanofilm)
Nanofibers are thin fibers, about one thousandth the width of a human hair, with
average diameters in the nanometer range (1 nm = 0.000000001 m). The smallest
nanofibers produced today are between 1.5 nm and 1.75 nm. With diameters in the
2-600 nm range, nanofibers cannot be seen with the naked eye. They gain unique
chemical and physical properties because of their very small sizes and can be
used in very narrow and small spaces.
Comparing the surface and volume of nanofibers reveals that they have very large
surface areas. Since their high surface area promotes chemical reactions,
nanofibers are very suitable for new technologies that require very small media.
Normal textile fibers have diameters of 10,000 nm or more, whereas a typical
nanofiber has a diameter on the order of 10 nm. In textiles, the specific
surface area of a fiber is inversely proportional to its diameter, so nanoscale
fibers have about 1000 times more specific surface area than micro-scale textile
fibers.
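The factor of about 1000 follows directly from geometry. As a worked
illustration (not taken from the original text), the surface area per unit
volume of a long cylindrical fiber of diameter d is

\[
\frac{S}{V} = \frac{\pi d L}{\pi d^{2} L / 4} = \frac{4}{d},
\]

so reducing the diameter from a conventional 10,000 nm fiber to a 10 nm
nanofiber increases the surface area per unit volume by a factor of
10,000/10 = 1000.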
Properties such as flexibility, high porosity, small pore size and axial
strength have given nanofibers diverse and wide application fields. Nanofibers
are used in many areas, from nanocatalysts and tissue scaffolds to protective
textiles, filtration and optics.
Nanofiber Production Methods

 Self assembly method
 Phase separation method
 Template synthesis method
 Bicomponent extrusion
 Drawing method
 Meltblown method
 Spunbond method
 Electrospinning method

Source: https://www.euroresidentes.com/tecnologia/nanotecnologia/nuevos-tejidos-con-
nanotecnologia-que-se-limpian-solos

4.4.4 Nano Finishing Processes


Nano Emulsion
Emulsions are heterogeneous systems consisting of a fine dispersion of droplets
of at least two liquids that are normally not soluble or miscible in each other.
These systems consist of hydrophilic and lipophilic phases, called the inner and
outer phases of the emulsion. The outer phase is also referred to as the
continuous phase, while the inner phase is carried as droplets. In nano
emulsions, the droplets have a size distribution of 20-200 nm.
Textile surfaces gain various properties through the emulsion method. These are:
a) Oil and Water Repellent Finishing Processes,
b) Super Hydrophobic Finishing Processes,
c) Hydrophobic Finishing Processes,
d) Photocatalytic Self-Cleaning Finishing Process,
e) Antibacterial Finishing Processes,
f) UV Protection Finishing Processes,
g) Antistatic Finishing Processes,
h) High flash point.

Source: a) https://www.psfk.com/2013/03/color-changing-workout-clothes.html
b) https://www.instructables.com/id/How-to-Make-Thermochromic-Ink/

Water- and dirt-repellent properties are widely expected, and in every area of
life there is an increasing demand for products that are waterproof yet
breathable. As a result of this research, waterproof and water-repellent
applications have been combined to produce water-repellent, breathable fabrics
and textile products with sufficient performance.
In the 1990s, hydrophobic behavior was explained through the examination of the
microstructures of plant leaves (such as the lotus) that have very good
water-repellent features. Since then, as the possibilities of chemical
technology have increased, artificial hydrophobic surfaces have been developed
and applied in various fields.
In textiles, it is important to protect fabrics from being wetted by liquids
when developing water or soil resistance. The presence of effective
intermolecular forces, such as polarity and hydrogen bonds, gives the fabric
strength, thermal resistance and dry-cleaning resistance. However, these same
forces cause outer garments to show low resistance to snow and rain and make the
fibers easily wetted by water. This problem can be solved by chemically or
mechanically coating the fabrics with various water-repellent substances.
Water-repellent compounds cover the outer surface of the fabric with hydrophobic
groups, which repel water molecules by forming a low-energy surface. The basis
of the water-repellent process is to form very thin hydrophobic films on the
fibers. A water-repellent fabric provides a certain level of protection against
rain; however, during prolonged and heavy rain, water penetrates through the
open pores. A water-repellent surface also allows water vapor to escape, and
this removal of water vapor makes the fabric more comfortable than one with a
completely coated surface.
The materials used in water repellent finishing processes can be classified as
follows:
a) Water repellent substances that form resin,
b) Fatty acid-chromium chloride complexes,
c) Paraffin and wax emulsions,
d) Organic silicon compounds (silicones),
e) Fluorocarbon

Source: http://colourchangingink.com/

Super Hydrophobic Finishing Processes


If a water droplet tends to rest in a spherical shape on a surface, that surface
is called hydrophobic. Here "hydro" means water and "phobos" means fear; that
is, hydrophobic means "water-disliking". The degree to which a solid is wetted
by a liquid is measured by the contact angle: if the contact angle is less than
90 degrees the surface can be wetted (hydrophilic), and if it is greater than 90
degrees the surface cannot be wetted (hydrophobic). If the surface is highly
wet-proof, meaning that the contact angle approaches 180 degrees or exceeds 150
degrees, it is called a superhydrophobic surface.
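For reference (a standard relation, not quoted from this text), the equilibrium
contact angle θ on an ideal smooth surface is given by Young's equation,

\[
\cos\theta = \frac{\gamma_{SV} - \gamma_{SL}}{\gamma_{LV}},
\]

where γ_SV, γ_SL and γ_LV are the solid-vapor, solid-liquid and liquid-vapor
interfacial tensions. Surface roughness amplifies the intrinsic wetting
behavior; in the Wenzel model the apparent angle θ* satisfies
cos θ* = r cos θ with a roughness factor r ≥ 1, which is why nanoscale texturing
(as on the lotus leaf) can push an already hydrophobic surface above the 150°
superhydrophobic threshold.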
The results of scientific studies on the development of superhydrophobic
surfaces are promising for various industrial and engineering applications. In
particular, with the advancements in coating technology, self-cleaning,
non-staining, antimicrobial surfaces etc. can be attained. Polymeric coatings,
nano-structured silica, or surfaces containing metal oxides are synthesized to
achieve these properties.

Waterproof Paper: Paper doped with nanoparticles is rendered antimicrobial, waterproof and even magnetic. (https://www.popsci.com/technology/article/2012-04/nanoparticle-coating-makes-plain-paper-magnetic-and-waterproof)

Silica coatings are carried out by forming nanosized silica beads on the surface.
Similarly, superhydrophobic properties can be achieved by forming nanosized
metal oxide rods on the surface.
One of the rapidly developing branches in the design of superhydrophobic
surfaces is the formation of water-repellent fabrics. Water repellent properties
are used in a wide range from tent cloths, work clothes, umbrellas to surgical
personnel uniforms. In addition, thanks to the superhydrophobic and
antimicrobial fabrics used in medical wear, the risk of infection during surgery is
reduced.

Photocatalytic Self-Cleaning Finishing Process
Photocatalysis, catalysis under light, is attracting great interest from the
scientific community. The photocatalytic activity of TiO2 was revealed by chance
during a study by Fujishima et al. aimed at obtaining hydrogen from water under
UV irradiation in an environment containing TiO2 particles. This process is
known as the "Honda-Fujishima" effect and marks the beginning of the history of
photocatalysis. Following these successful results, the idea that TiO2 could
decompose organic molecules emerged, and it began to be used for this purpose.

In 1972, A. Fujishima and K. Honda constructed an ultraviolet-light-induced
water-splitting cell using a titanium dioxide photoanode and a counter electrode
immersed in an electrolyte solution, announcing the result shortly before the
1973 oil crisis. This mechanism enabled solar energy to be converted into other
forms of energy using semiconductors or similar substances. Such reactions are
used commercially in environmental cleaning through the photocatalytic oxidation
of organic compounds with TiO2 powders and coatings.

Hydrophilic Finishing Processes

These water-loving structures behave in the opposite way to hydrophobic
structures. The contact angle of water on inorganic materials such as glass is
20-30 degrees. The contact angle of water on silicone resins or on known
hydrophobic polymers such as fluorocarbon polymers is around 70-90 degrees and
generally greater than 90 degrees. Very few materials show a contact angle with
water of less than 10 degrees; such materials have low durability, and their low
contact angles cannot be maintained for prolonged periods.

Source: a) https://genesisnanotech.wordpress.com/2014/12/10/nano-coatings-
for-textiles-and-nonwovens-the-future-is-now/
b)https://dornob.com/water-repelling-shirt-fabric-laughs-in-the-face-of-
moisture/#ixzz2qBTxTnzq&i

Antibacterial Finishing Processes
Because of their structure and areas of use, textile products provide a medium
with suitable temperature, humidity and nutrients for microorganisms to live and
grow. Microorganisms within textile structures can harm both the textile product
and its user. Textile products with added antimicrobial properties help reduce
or eliminate the effects caused by microorganisms. These product groups are used
to inhibit microorganisms, keep infections under control, prevent the odor,
staining and color change caused by microorganisms, and protect against loss of
quality.
The most common active substances used in antimicrobial applications are
triclosan, quaternary ammonium salts and metals (silver, copper, zinc, etc.). In
addition, studies are being conducted on many other active substances such as
halamine derivatives and chitosan.
Antimicrobial textiles are produced by integrating the active substance into the
fiber (by adding it to the polymer solution during fiber spinning or by
application after spinning) or by applying it directly to the textile product.
Conventional extrusion and soaking methods are widely used to impart
antimicrobial chemicals to fabrics made of natural and synthetic fibers during
the finishing process. Spraying and coating methods can also be used for the
application of antimicrobial chemicals.

UV Protection Finishing Processes


The most important function of UV-protective clothing is to protect the user
against weather conditions and the harmful effects of the sun. Zinc oxide,
titanium dioxide, silicon dioxide and aluminum oxide are the most common
nanoparticles used to impart UV protection to textile materials. These
nanoparticles provide protection either by absorbing or by reflecting harmful UV
rays. UV-protective nanotextiles are used especially in outdoor clothing,
curtains, outdoor products, canopies, tents and outdoor paints.

Antistatic Finishing Processes


Synthetic textiles are prone to static charging because of their low water
absorption. Nano-sized TiO2, ZnO whiskers, nano antimony-doped tin oxide and
silane nanosols can impart antistatic properties to synthetic fibers. TiO2, ZnO
and antimony-doped tin oxide nanoparticles are electrically conductive
materials, which helps to dissipate the static charge.

W. L. Gore and Associates used nanotechnology to develop an antistatic membrane
for protective clothing. Goretex® antistatic is a multifunctional textile that
protects the user against electrostatic discharges, weather, heat and flame.
Electrically conductive nanoparticles are homogeneously fixed in the fibers of
the Goretex membrane and form a durable, electrically conductive network that
prevents the accumulation of static charge.


Source:http://www.emeraldinsight.com/journals.htm?articleid=875516&show=html.

High Flash Point


These textiles are especially important for the fields involving high temperature
operations.
The most important high-flash-point materials can be categorized into three
classes: phosphorus- and halogen-based basic high-flash-point substances;
synergistic materials that have little high-flash-point effect when used alone
but show increased activity when used together with high-flash-point substances
(nitrogen with phosphorus, antimony with halogens); and high-flash-point
substances whose activity arises from physical effects (borates, aluminum
trihydrate, calcium carbonate, etc.) (Schindler and Hauser, 2004; Chivas et al.,
2009; Brancatelli et al., 2011).

Nanocoating
Nano coating methods are basically classified as;
a) Self Assembly Nano Coating,
b) Plasma Polymerization Based Nano Coating,
c) Sol Gel,
d) Layer by Layer Nano Coating.

Source: http://www.techphlie.com/2016/03/smart-clothingthat-will-adjust-
itself.html

Nanolayer coating and self assembly process:
In contrast with traditional processes, nanolayer coating is a brand new coating
technology for the textile industry. The approach is based on forming a
self-assembled monolayer, about one nanometer thick, on the underlying layer;
additional layers can then be built up, each adding roughly another nanometer of
depth. Coating thickness, smoothness and density are the most important
characteristics of nanolayer coating. The method relies on continuously forming
thin layers of uniform density on the fabric. Depending on the functionality
required for chemical deposition, production methods involving plasma,
ion-cluster and chemical applications allow the electrolytes and nanoparticles
of the various layers to have characteristics different from those of the
uppermost nanolayer.

Nanolayer coatings in textiles also have self assembly properties. If some
molecules in the upper layer are accidentally torn away, other molecules move to
fill and cover the resulting gap, and they can also move back; molecules
likewise provide electrostatic neutralization from their own layer to the other
nanolayers. This self assembly process occurs as an electrostatic effect
throughout the nanolayers. Ongoing research is developing multi-spectral
camouflage materials via nanolayer coating of textiles with the indium tin oxide
minerals used for ceramics.

Nano Composite Coating


Since the filler particles are nano-sized, nanocomposites have high
surface-area-to-volume ratios. Even very low loadings of nanoparticles added to
polymer matrices lead to significantly improved physical and mechanical
properties. Polymers are widely used in nanocomposite production because of
their ease of processing, mechanical behavior, flexible structures and low
densities. Polyvinyl chloride, polyurethane, polytetrafluoroethylene, polyvinyl
alcohol, polypropylene, polyethylene, polyamide and polyester derivatives are
among the most commonly used thermoplastic polymers in nanocomposite production.
 Clay Based Composite Coating
 Silicon Based Composite Coating

 Carbon Nanomaterial Based Nano Coating

Source: a) http://gurmezin.com/30-usage-areas-for-super-material-graphen/
b) https://www.nanowerk.com/spotlight/spotid=42713.php

Nano Dyeing
Although nanotechnology is involved in almost every aspect of textile chemical
processing, dyeing processes have remained relatively untouched. In order to
meet increasing customer demands, gain market share and produce multifunctional
and versatile textile products, nanotechnology must be introduced to the
dyehouse as well. In addition, since nanostructures and surface functionality
can be integrated into the fabric using dry techniques, nanotechnology can also
reduce water consumption. Traditional dyeing and finishing techniques applied to
textiles (dyeing, stain-protective, flame-retardant and antibacterial
applications) generally produce large amounts of waste water because of their
wet-chemical process steps.

4.5 ENVIRONMENTAL APPLICATIONS

Serpil EDEBALİ
serpilcetin@gmail.com
İmren HATAY PATIR
imrenhatay@gmail.com
Gülşin ARSLAN
garslan@selcuk.edu.tr
Mustafa ERSÖZ
mersoz@selcuk.edu.tr
SELÇUK UNIVERSITY
INTRODUCTION
As the industrial and domestic use of nanoparticles increases, such
nanomaterials are being released into the environment. Nanoparticles can be used
to monitor and remediate environmental problems, wastes from various sources can
be prevented, and production systems that generate less waste can be developed.

4.5.1 Use of Nanoparticles


Nanotechnology has an important role in the chemical industry:
- Nano-scale catalyst materials in catalysis processes,
- Porous materials used in the petroleum industry,
- Lightweight materials used in the automotive industry,
- More economical engines with lower fuel consumption and lower
environmental pollution,
- Environmentally friendly tyres produced by using nanotechnology products
such as inorganic clays and polymers instead of carbon black,
- Nanorobots and smart systems for filtering and controlling
nuclear wastes.
Mobility of nanoparticles, reaction rates, toxicity for environment and
environmental residence time must be examined for the evaluation of the risks
involved in the release of nanoparticles to the environment [Ripp and Henry,
2011; Zhuang and Gentry, 2011].

Environmental applications of nanotechnology can be reviewed in three
categories:
1. Sustainable products (e.g., green chemistry or pollution prevention),
2. Treatment of materials contaminated by hazardous substances and
3. Sensor applications [Tratnyek and Johnson, 2006].

4.5.2 Sustainable Products


Nanoparticles are being used in environmental protection applications such as
treatment, water purification, environmentally friendly packaging and oil
absorbents.
As the concentration of nanoparticles in groundwater and soil increases due to
industrial applications, environmental risks can reach significant levels
[Golobic et al, 2012; Masciangioli and Zhang, 2003]. Nanofilters can be used to
obtain clean water, which will become a vital need in the future. The large
surface areas of nanoparticles play an important role in the solid/water
partitioning of pollutants. Pollutants may be adsorbed onto the nanoparticle
surface, co-precipitate during nanoparticle formation, or be taken up during the
aggregation of nanoparticles onto which they are adsorbed. The interaction of
pollutants with nanoparticles depends on nanoparticle characteristics such as
size, composition, morphology, porosity, aggregation/disaggregation and
aggregate structure. Luminophores are not environmentally safe; they can be
protected from oxygen in the environment by placing them inside a silica network
[Swadeshmukul et al., 2001].
Removal of heavy metals from natural waters: Mercury, lead, thallium, cadmium
and arsenic are very important due to their adverse effects on environment and
human health. Superparamagnetic iron oxide nanoparticles are effective sorbent
materials.
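Sorbent performance of such nanoparticles is commonly characterized with
adsorption isotherms. The following minimal sketch uses hypothetical parameter
values (not data from the cited work) to show how a Langmuir model relates the
equilibrium concentration of a metal ion to the amount adsorbed per gram of
sorbent:

    def langmuir_uptake(c_eq_mg_l, q_max_mg_g, k_l_l_mg):
        """Langmuir isotherm: adsorbed amount q (mg/g) at equilibrium concentration c_eq (mg/L)."""
        return q_max_mg_g * k_l_l_mg * c_eq_mg_l / (1.0 + k_l_l_mg * c_eq_mg_l)

    # Hypothetical parameters for an iron oxide nanosorbent (illustrative only).
    q_max = 80.0   # maximum adsorption capacity, mg of metal per g of sorbent
    k_l = 0.15     # Langmuir affinity constant, L/mg

    for c in (1, 5, 20, 100):
        print(f"c_eq = {c:4d} mg/L  ->  q = {langmuir_uptake(c, q_max, k_l):5.1f} mg/g")

In practice the parameters q_max and k_l are obtained by fitting batch
adsorption data, and the large specific surface area of nanoparticles is what
pushes q_max well above that of conventional sorbents.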
Currently, no analytical method has been developed that can measure trace
concentrations of nanoparticles. Therefore, industrial nanoparticles released
into the environment cannot yet be quantified [Mueller and Nowack, 2008].
Photodegradation using nanoparticles is a common application, and many
nanomaterials are used for this purpose. Rogozea et al. (2017) used NiO/ZnO
nanoparticles in the modification of silica for photodegradation purposes; the
large surface area of very small nanoparticles (<10 nm) made the
photodegradation reaction effective [Rogozea et al, 2017]. In other studies,
various nanoparticles were synthesized and their optical, fluorescence and
degradation applications were reported [Olteanu et al, 2016a, 2016b; Rogozea et
al, 2016].

Water treatment devices


Water treatment with nanoparticles uses nanomaterials such as carbon nanotubes
and alumina fibers for nanofiltration [Qu et al, 2013]. The use of nanofilters
allows water to pass through the filter at lower applied pressure. Although
nanotubes have smaller filter pores, their inner walls are smooth, so water can
flow easily. Filtration becomes more effective; the filters have larger surface
areas and are easier to clean. Nanofilters can remove precipitates, chemical
wastes, charged particles, bacteria and other pathogens such as viruses from
water. In addition, they can remove toxic trace elements like arsenic and
viscous liquid contaminants such as oil.
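A rough way to see why pore geometry matters so much (a textbook relation, not a
claim from the cited study) is the Hagen-Poiseuille law for laminar flow through
a cylindrical pore of radius r and length L under a pressure difference Δp:

\[
Q = \frac{\pi r^{4}\,\Delta p}{8 \mu L},
\]

so the flow per pore scales with the fourth power of the radius. The unusually
high flow rates reported for carbon nanotube membranes are attributed to their
atomically smooth walls, which let water slip along the channel rather than obey
the no-slip assumption behind this formula.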

Oil absorbents
Oil spills into seawater harm the environment. This harm can be mitigated by
using aerogels modified with water-repellent molecules to improve their
interaction with oil. Aerogels have an extremely large surface area and can
easily clean up oil by absorbing it like a sponge.

Biodegradable plastics
The widespread use of plastic packaging has an adverse environmental impact.
Biopolymers, which are used to produce environmentally friendly packaging, are
natural polymers with several disadvantages, such as poor moisture-barrier and
mechanical properties. By adding nanoparticles to biopolymers, a material with
better mechanical and barrier properties is obtained and a completely
biodegradable, environmentally friendly composite material is produced [Rogozea
et al, 2016].
Nanoparticles used as sustainable products are summarized in Table 4.5.1 [Qu et
al, 2013].

Table 4.5.1 Use of nanoparticles in sustainable products [Qu et al, 2013]
(Application / nanoparticles used / properties of the nanomaterial)

Adsorption
- Carbon nanotubes: Large surface area
- Metal oxides: Large surface area
- Nanofibers with core-shell structure: Selective adsorption ability

Membranes
- Nano zeolites: Molecular sieve, water retention capacity
- Nano Ag: Strong anti-microbial activity
- Carbon nanotubes: Anti-microbial effect, small pores, high mechanical and chemical stability
- Aquaporin: High selectivity
- Nano TiO2: Photocatalytic effect, high chemical stability
- Nano magnetite: Superparamagnetism

Photocatalysis
- Nano TiO2: Photocatalytic activity under UV and visible light, high stability
- Fullerene derivatives: Photocatalytic activity under sunlight, high stability

Disinfection and microbial control
- Nano Ag: Strong and broad-spectrum anti-microbial activity, easy use, low toxicity
- Carbon nanotubes: Anti-microbial activity, fiber shape, conductivity
- Nano TiO2: Photocatalytic effect, high chemical stability, low toxicity and cost

Sensing and monitoring
- Quantum dots: Stable emission depending on particle size and chemical composition
- Noble metal nanoparticles: High conductivity, stable surface modification
- Dye-doped silica nanoparticles: High sensitivity and stability
- Carbon nanotubes: Large surface area, high mechanical and chemical stability
- Magnetic nanoparticles: Modifiable surface chemistry, superparamagnetism

4.5.3 Treatment of Materials Contaminated with Hazardous Substances
Developing adequate low-cost treatment methods that protect the environment
remains a great challenge. The important substances studied for soil, sediment
and groundwater remediation are heavy metals (mercury, lead, cadmium, etc.) and
organic compounds (benzene, chlorinated solvents, creosote, toluene, etc.).
Controlling and designing materials at the molecular level is reported to
increase their affinity, capacity and selectivity toward pollutants. Reducing
the amount of hazardous substances released to water and air, and decreasing the
related exposure, are among the targets of environmental protection agencies. In
this regard, nanotechnology plays a very important role in the prevention of
pollution.
The treatment of contaminated groundwater using nanoparticles, one of many
environmental applications of nanotechnology, is an example of the important
benefits of this fast-developing technology. The main environmental applications
of nanotechnology therefore relate to the water industry. In particular, the
decline of fresh water resources due to excessive consumption and contamination
has led to the idea of utilizing sea water as potable water. Many water
resources around the world contain too much salt for human consumption, and the
desalination processes used to remove that salt are expensive; carbon nanotube
membranes, however, can partially lower this cost. Similarly, nanofilters can be
used to treat or clean underground or surface water contaminated with chemicals
and hazardous substances. Nanosensors are also being developed for the detection
of waterborne pollution.
TiO2 nanoparticle use as photocatalyst in water treatment has become the focal
point of many researchers in recent years. Nanoparticles like light-activated
broadband semiconductor titanium dioxide (TiO2) and zinc oxide (ZnO), are
frequently used since they can remove organic pollutants from various
environments. These nanoparticles have advantages like easy accessibility, low-
cost and low toxicity. Semiconductor properties of TiO2 are required to remove
different organic pollutants by stimulating TiO2 using a light energy that is
greater than titanium oxide’s band range. This property can be used in different
reduction processes on semiconductor/solution intermediate surface. Pollutants
such as nitrobenzene, phenol, 4-chlorophenol, paration, toluen, benzene, as well
as dyes like methyl organe, rodamine B, basic dye can be removed by these
nanoparticles.
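As a worked illustration of this activation condition (a sketch assuming the commonly quoted band gap of about 3.2 eV for anatase TiO2), a photon can excite the semiconductor only if its energy exceeds the band gap, which sets a maximum useful excitation wavelength:

E_{photon} = \frac{hc}{\lambda} \geq E_g \quad\Rightarrow\quad \lambda_{max} = \frac{hc}{E_g} \approx \frac{1240\ \mathrm{eV\,nm}}{3.2\ \mathrm{eV}} \approx 388\ \mathrm{nm}

so unmodified anatase TiO2 responds mainly to UV light, which is one reason why doping and coupling with other nanoparticles are studied to extend its response into the visible range.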
Zerovalent iron nanoparticles come to the fore with their effective use in the
reduction and immobilization of heavy metals (Cr(VI) and Pb(II)) and
radionuclides. They can be used in soil, sediment and solid wastes as
well as in water, wastewater and gas process streams. Zerovalent iron nanoparticles,
which can be used against nitrate, perchlorate, chlorinated compounds and humic acid,
can also oxidize organic materials in the presence of oxygen. Bimetallic nanoparticles
such as Pd/Fe or Ni/Fe can also be used for the removal of chlorinated compounds.
In addition, porous materials with large surface area, such as cellulose beads loaded
with iron oxyhydroxides, can be used to remove heavy metals from aqueous
systems.
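As one illustrative example of the chemistry involved (a commonly written overall reaction under acidic conditions; the actual products depend on pH and speciation), zerovalent iron can reduce mobile Cr(VI) to the far less mobile Cr(III), which is then immobilized in (oxy)hydroxide precipitates:

\mathrm{Fe^{0} + CrO_4^{2-} + 8\,H^{+} \rightarrow Fe^{3+} + Cr^{3+} + 4\,H_2O}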
In other studies, composites of iron oxide and silicate were synthesized to reduce
azo dyes. Clays are modified with inorganic and organic compounds, acids and
bases to increase their sorption capacity. Organoclays, combined with polymers
into composites, have become some of the most interesting sorbents for removing
heavy metals from aqueous systems.
Nanotubes in particular have exceptional potential owing to properties such as
high thermal and electrical conductivity, high strength, rigidity and special
adsorption capabilities.
Another example of an environmental purification and treatment application of
nanomaterials is dendritic nanoscale chelating agents for polymer-supported
ultrafiltration. Dendrimers, with their controlled composition and nanoscale
structure, are designed to encapsulate zerovalent metals so that they can dissolve
in a suitable medium or bond to suitable surfaces (Mansoori et al, 2008).
Air pollution is another field where nanotechnology offers promising impacts.
Filtration techniques similar to water treatment methods can be used to clean
indoor air. Nanofilters can be applied to the exhaust outlets of cars and to
factory chimneys to separate pollutants and prevent them from being released to
the atmosphere. Lastly, nanosensors that can detect even the slightest
concentrations of toxic gas leakages are being developed. Overall, nanotechnology
has many promising applications in this area.
More extensive environmental impacts must be considered as nanotechnology
keeps advancing in many different directions. This should include the models
required to determine the potential benefits of reducing or preventing
industry-originated pollution. Nanotechnology has great potential to improve
water and wastewater treatment by enhancing treatment efficiency and by
increasing the water supply through safe use of unconventional water sources.
Candidate nanomaterials in this regard offer properties and application
mechanisms that can outperform existing processes.

~ 200 ~
4.5.3 Sensor Applications
The distinctive chemical and physical properties of nanoparticles, in comparison
to bulk materials, make them extremely convenient for new and advanced devices,
electrochemical sensors and biosensors. Many nanoparticles, such as metal, oxide
and semiconductor nanoparticles, are widely used in the design of electrochemical
sensors and biosensors, and they play different roles in different detection
systems. The fundamental functions of nanoparticles in these systems include the
immobilization of biomolecules, the catalysis of electrochemical reactions, the
enhancement of electron transfer between electrode surfaces and proteins, the
labeling of biomolecules, and even acting as a reactant. These functions and the
nanoparticles used for these purposes are summarized in Table 4.5.2 [Luo, 2006].

~ 201 ~
Table 4.5.2 Different functions of nanoparticles in electrochemical sensor
systems [Luo,2006]

Function in electrochemical sensor and biosensor systems | Property used | Nanoparticles | Sensor advantages | Reference
Immobilization of biomolecules | Biocompatibility | Metal nanoparticles (Au, Ag) | Increased stability | [Zhuo, 2005]
Catalysis of electrochemical reactions | Large surface area | Metal nanoparticles (Au, Ag) | Increased sensitivity and selectivity | [Fiorito, 2005]
Increased electron transfer between electrode surfaces and proteins | High surface energy | Oxide nanoparticles (SiO2, TiO2) | Increased sensitivity | [Xiao, 2003]
Labeling of biomolecules | Conductivity, small size | Oxide nanoparticles (SiO2, TiO2) | Increased sensitivity, indirect quantification | [Cai, 2003]
Reactant-like behavior | Small size and modifiability | Metal nanoparticles | New reaction mechanism | [Xu, 2005]

~ 202 ~
Summary
 Nanoparticles can be used to monitor and eliminate environmental
problems; wastes from various sources can be prevented and production
systems generating less waste can be developed.
 Nanoparticles as sustainable products: nanoparticles are used in
environmental protection applications such as treatment, water
purification, environmentally friendly packaging and oil absorbents.
 Nanotechnology has great potential to improve water and wastewater
treatment by enhancing treatment efficiency and by increasing the water
supply through safe use of unconventional water sources.
 Another environmental application of nanomaterials is sensors. The
fundamental functions of nanoparticles, especially in electrochemical
sensors and biosensors, include the immobilization of biomolecules, the
catalysis of electrochemical reactions, increased electron transfer between
electrode surfaces and proteins, the labeling of biomolecules, and even
acting as a reactant.

~ 203 ~
References
Cai, H. Xu Y., Zhu N. N., He P. G., Fang Y. Z., 2002, An electrochemical DNA
hybridization detection assay based on a silver nanoparticle label,
Analyst,127, 803-808.
Fiorito P. A., Goncales V. R., Ponzio E. A., de Torresi S. I. C., 2005, Synthesis,
characterization and immobilization of Prussian blue nanoparticles. A
potential tool for biosensing devices, Chem. Commun.,0, 366-368.
Golobic M., Jemec A., Drobne D., Romih T., Kasemets K., Kahru A., 2012. Upon
exposure to Cu nanoparticles, accumulation of copper in the isopod
Porcellio scaber is due to the dissolved cu ions inside the digestive tract.
Environ. Sci. Technol. 46, 12112–12119.
Luo X., Morrin A., Killard A. J., Smyth M. R.,2006, Application of Nanoparticles
in Electrochemical Sensors and Biosensors, Electroanalysis, 18(4), 319–
326.
Mansoori G.A., Rohani.Bastami T., Ahmadpour A., and Eshaghi Z. 2008,
Environmental Application of Nanotechnology, Annual Review of Nano
Research, Vol.2, Chap.2, Pages 1-73.
Masciangioli T., Zhang W.X., 2003, Peer reviewed: environmental technologies
at the nanoscale. Environ. Sci. Technol. 37, 102A–108A.
Mueller N.C., Nowack B., 2008, Exposure modeling of engineered nanoparticles
in the environment. Environ. Sci. Technol. 42, 4447–4453.
Olteanu N.L., Lazar C.A., Petcu A.R., Meghea A., Rogozea E.A., Mihaly M.,
2016a, ‘‘One-pot” synthesis of fluorescent Au@SiO2 and SiO2@Au
nanoparticles. Arab. J. Chem. 9, 854–864.
Olteanu N.L., Rogozea E.A., Popescu S.A., Petcu A.R., Lazar C.A., Meghea A.,
Mihaly M., 2016b, ‘‘One-pot” synthesis of Au–ZnO–SiO2
nanostructures for sunlight photodegradation. J. Mol.Catal. A: Chem.
414, 148–159.
Ripp S., Henry T.B. (Eds.), 2011, Biotechnology and Nanotechnology Risk
Assessment: Minding and Managing the Potential Threats around Us,
ACS Symposium Series, American Chemical Society, Washington, DC.
http://dx.doi.org/10.1021/bk-2011-1079.
Rogozea E.A., Olteanu N.L., Petcu A.R., Lazar C.A., Meghea A., Mihaly M.,
2016, Extension of optical properties of ZnO/SiO2 materials induced by
incorporation of Au or NiO nanoparticles. Opt. Mater. 56, 45–48.
Rogozea E.A., Petcu A.R., Olteanu N.L., Lazar C.A., Cadar D., Mihaly M., 2017,
Tandem adsorption-photodegradation activity induced by light on NiO-
ZnO p–n couple modified silica nanomaterials. Mater. Sci. Semicond.
Process. 57, 1–11.
Qu X., Alvarez P.J.J., Li Q., 2013, Applications of nanotechnology in water and
wastewater treatment, Water Research, 47, 3931-3946.
Swadeshmukul S., Peng Z., Kemin W., Rovelyn T., Weihong T., 2001,
Conjugation of biomolecules with luminophore-doped silica
nanoparticles for photostable biomarkers. Anal. Chem. 73, 4988–4993.
Tratnyek P.G., Johnson R.L., 2006, Nanotechnologies for environmental cleanup.
Nano Today 1, 44–48.
Xiao Y., Patolsky F., Katz E., Hainfeld J. F., Willner I., 2003,
"Plugging into enzymes": nanowiring of redox enzymes by a
gold nanoparticle, Science 299(5614), 1877-1881.
Xu J. J., Zhao W., Luo X. L., Chen H. Y., 2005, A sensitive biosensor for lactate
based on layer-by-layer assembling MnO2 nanoparticles and lactate oxidase
on ion-sensitive field-effect transistors, Chem. Commun., 0, 792-794.
Zhuang J., Gentry R.W., 2011, Environmental application and risks of
nanotechnology: a balanced view. pp. 41–67.
Zhuo Y., Yuan R., Chai Y. Q., Tang D. P., Zhang Y., Wang N., Li X. L., Zhu Q.,
2005, A reagentless amperometric immunosensor based on gold
nanoparticles/thionine/Nafion-membrane-modified gold electrode for
determination of α-1-fetoprotein, Electrochem. Commun. 7(4), 355-360.

~ 205 ~
4.6 MILITARY APPLICATIONS

Meltem BALABAN
mltm.blbn@gmail.com
PAMUKKALE UNIVERSITY
INTRODUCTION
Military applications of nanotechnology can be categorised according to their
areas of utilisation. These areas may be soldiers, information processing,
weapons/countermeasures, and platform systems such as land vehicles, naval
vessels and aeroplanes. This chapter focuses primarily on the protection,
performance and survivability of the Soldier, who is one of the main platforms
of combat systems. (Some of the protection and survivability applications may
be used by civilians in conditions of warfare.) This focus is chosen to emphasise
the use of nanotechnology for the good of humankind. Besides, the
weapons/countermeasures utilisation area requires a high level of military
expertise in adapting requirement specifications that are unique to every
country, within the frame of international norms and regulations. The
Soldier could benefit a great deal from nanotechnologies. The Soldier system is
also discussed because it is connected to other platform systems
(land vehicles, UAVs - unmanned air vehicles), wireless sensor network systems,
the logistic supply chain, and medical applications.

4.6.1 Soldier Nanotechnologies


Soldier nanotechnologies is a multidisciplinary field of study of nanotechnology
in military applications. Institutions working in this field conduct basic research
to create new materials, devices, processes and systems, and develop
practical products useful to the Soldier. They can give guidance on Soldier
protection and survivability needs, and on the relevance of research proposed to
address these needs. Army and industry partners of these institutions share their
expertise in transforming fundamental research into practical products that are
compatible with other Soldier technologies and that can be manufactured
most appropriately for Soldiers. The products of such multidisciplinary
collaborations may be used by firefighters, police officers, other first responders
and, indeed, the civilian community at large. One of the main research and
development institutions, bringing together academia, armies and industry and
working on Soldier nanotechnologies, is the ISN - Institute for Soldier
Nanotechnologies (MIT - Massachusetts Institute of Technology, army and
industry partners) in the United States. Strategic research areas of the institution are
Soldier protection, battlefield care and sensing, augmenting situational
awareness, and transformational nano-optoelectronic Soldier capabilities.

Soldier “Nanosuit”
The Soldier nanosuit should be made of lightweight fabrics and should perform multiple
tasks while protecting the Soldier and keeping the Soldier comfortable. Features
of the Soldier nanosuit fabrics would include nanoscale coatings, core-shell
and rod-rod nanostructures, carbon nanotubes, nanofibers, and layered and
membrane structures. The Soldier nanosuit should be able to sense chemicals
and identify their properties. Its fibers could blend together to act as camouflage. It
should serve as armor, for example a lightweight polymer armor that has
high power density, large and fast contraction capabilities, and high strength.
The suit should be thin but extremely strong, feeling just like a second skin to
the Soldier. The suit should be able to treat the Soldier and help in recovering from
injuries much faster. In order to provide these functionalities, the suit should be
able to deliver drugs and vaccines, heal injuries and wounds, and perform first-
aid type operations (such as applying a tourniquet, etc.). It should withstand extreme
temperatures and explosions, and should provide ballistic protection. Figure 4.6.1.a
and Figure 4.6.1.b show components of the future Soldier suit. In almost all of
the components, nanotechnology can be used in order to achieve more efficient
usage.
As can be inferred from the “Future Soldier” figures, the nanotechnology
research & development studies that emerge for Soldier protection, battlefield
care, and sensing are:

 novel nanomaterials,
 molecular nanocomposites,
 rapid hemostasis for the treatment of incompressible wounds, and
 Future vaccines & immunotherapies with nanotechnology-based adjuvants.

~ 207 ~
Figure 4.6.1 (a) - The Future Soldier (Source US Army,CNBC),
(b) The Future Soldier (Source:
https://interestingscience1.wordpress.com/2016/06/)

For augmenting situational awareness, the following nanotechnology studies
should be conducted:

 Mid- and long-wave infrared detector arrays on flexible substrates,
 Particulate fluid fiber processing for fabric communications,
 Nano-plasmonics (control of light at the nanoscale) for Soldier applications.
Some nanotechnology studies necessary to be performed on transformational
nano-optoelectronic Soldier capabilities are listed below:

 Photonic integrated circuits for LIDAR (Light Detection and Ranging),
displays and low-power computing,
 Nanophotonics (or nano-optics) enhanced systems for the Soldier.
Figure 4.6.2.(a) and Figure 4.6.2.(b) show examples of using flexible solar
panels on Soldier helmets and backpacks. Portable power is provided by
batteries. Solar power harvesting technologies are utilized. Nanotechnology may
be used for flexible solar cells, and batteries.

~ 208 ~
Figure 4.6.2 (a) Soldier backpack with flexible solar cells,( b) Flexible Solar
energy harvesters in helmets
(Source Cloud Consulting International website-
https://cloudwiser.wordpress.com/2014/04/04/mc10-redefining-wearable-
internal-embedded-sensors-with-patented-stretchable-electronics-
nanotechnology)

The flexible solar panels approach the conversion efficiency of rigid silicon and
glass panels. They can also be incorporated into products such as military uniforms and
backpacks. By adding flexible panels to these items, soldiers become their
own recharging stations. This reduces the logistics burden of a fighting force in the
field and the weight that each individual soldier must carry on his or her back.
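For context, the conversion efficiency referred to above is the ratio of the electrical power a cell delivers to the solar power falling on it,

\eta = \frac{P_{out}}{G \cdot A}

where G is the solar irradiance and A the cell area. As a purely illustrative calculation, a 0.1 m2 flexible panel operating at 15 % efficiency under roughly 1000 W/m2 of bright sunlight would supply about 15 W for recharging batteries; these numbers are assumptions chosen only to show the scale involved.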
Figure 4.6.3 features an alternative nanowire flexible solar cell solution which
can be used in military and industrial applications.
Figure 4.6.3 (a) Schematic images showing the fabrication process of a ~ 690 nm

thickness AZO/Ag NWs/AZO sandwich structure electrode and (b) Optical


image illustrating application of bending force applied to flexible Cu(In,Ga)Se2
(CIGS) solar cells
(Featured: Royal Society of Chemistry http://pubs.rsc.org/-
/content/articlelanding/2016/ta/c5ta09000h#!divAbstract, Source:[5])

~ 209 ~
Soldier Power/Energy Systems
Figure 4.6.4 a, b and c show ways to charge future Soldiers’ batteries, which in
turn power future Soldiers’ equipment, without depending on mains electricity. Figure
4.6.4.a shows an approach to interoperable power solutions that reduces the
soldier’s power burden and enables energy independence. Figure 4.6.4.b shows
alternative ways to utilize solar power and wind power and to harvest the collected
energy. Nanotechnology batteries (lithium-ion batteries, silver-zinc batteries,
proton exchange membrane (PEM) fuel cells, etc.), nano-textiles or conductive
fabrics, and nano solar cells are the technological components involved. Wireless
powering of both the Soldier and military vehicles is an alternative emerging
approach to delivering energy to military platforms. Novel wireless
communication technologies should also be adapted to the nanotechnology devices
involved in these platforms. Unmanned air, ground and naval vehicles can also
be driven using novel wireless communication systems and technologies.
Nanoelectronics is expected to reduce the power consumption of processing.
Regarding signal processing, better signal transmission will be obtained, resulting in
an improved signal-to-noise ratio. (Noise is an unwanted disturbance in an
electrical signal.) Higher processing speeds, shorter transmission times and higher
function density will be provided. The development and use of nanoelectronics for
devices with high computing power and low power consumption will lead to
greater information dominance. Thus, nanosensor arrays will enable fast
recognition of threats in the battlefield (chemical, biological, nuclear, radiation or
energy threats) by soldiers and sensor networks. The security of soldiers and
civilians will be increased and environmental security will be strengthened.
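For reference, the signal-to-noise ratio mentioned above is the ratio of signal power to noise power, usually quoted in decibels:

\mathrm{SNR} = \frac{P_{signal}}{P_{noise}}, \qquad \mathrm{SNR_{dB}} = 10\log_{10}\left(\frac{P_{signal}}{P_{noise}}\right)

so, for example, a tenfold reduction in noise power improves the ratio by 10 dB.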

~ 210 ~
Figure 4.6.4 (a) Future Soldier’s battery charge, (b) Powering Soldiers’
equipment
through motion, solar panels, and wind (renewable energy) , (c) Wireless
Powering of Soldiers’ equipment (Source: www.cerdec.army.mil)
The Soldier Helmet
The Soldier helmet (another platform) can be equipped with a sensor system that
performs tasks such as positioning, RF (radio frequency) and audio
communication, body condition sensing, EEG (electroencephalogram) monitoring
(for tracking and recording brain wave patterns), sniper detection and digital signal
processing. The helmet is a good base for sensor arrays (since it is stable) and its
position is advantageous for sensors (it is the highest point of the soldier). Sensors
such as an optical/IR (infrared) camera, RF array antennas for positioning,
microphones, biochemical sensor arrays serving as an early warning system, and a
wireless EEG sensor (to observe brainwave activity) can be integrated into the
helmet. The helmet should also provide anti-ballistic protection and it should be
lightweight.

~ 211 ~
4.6.2 Biochemical Sensing, Health Monitoring, and Wound Treatment in the Soldier Suit
Soldiers should receive early warning about biological, chemical, nuclear or radiation
threats. A mobile warning system for detection and response is necessary. It is preferable
that the system is wearable, for convenience. Threatening chemical
substances are to be detected by nanosensors. These sensors should
be integrated into the Soldier nanosuit, through either woven or nonwoven
structures of nanofibers. The nanofibers sense, absorb and deactivate biological and
chemical agents. They also block off the ventilation of the suit whenever needed,
in cases where it is necessary to turn the Soldier suit into a biochemical
protection suit.
The biochemical sensing system can be put on a small (credit-card sized or
smaller) card/board platform, namely on a semi-active or passive sensor tag (the size of
the sensor tag depends on the technologies used on the tag). The sensor tag serves as
a reactive large-area surface consisting of electrodes with carbon nanotubes,
nanofibers on sensor surfaces, reactive dielectric materials in capacitive RF
sensors, etc. These sensors are to be scanned and read using wireless
communication. Current and promising candidate devices that can hold these
sensors are PDAs (personal digital assistants), wrist watches, glasses and health
monitoring wristbands. Artificial skin (wearable skin endowed with camouflage
and resistance to extreme temperatures, a wearable GPS, a battery and a
few sensors) and wearable computers are two novel areas under research,
development and discussion for their specific military uses. Figure 4.6.5
features a PDA strapped to the arm of a Soldier.

Figure 4.6.5 A PDA tied up to the arm of a Soldier (Source: ISSSP-International


Strategic and Security Studies Programme,http://isssp.in/wearable-military-
technologies/)
~ 212 ~
Sensors on the sensor tags can have direct contact with the body or can be
nearby, and they can check hydration levels, body temperature, glucose/lactate
levels and ECG (electrocardiogram) patterns. A combination of an RFID chip, a
biosensor and an RF antenna would be one possible sensor configuration
for this purpose. The “Senstenna” concept can also be utilised in these systems
(the Senstenna project is an approach using 5th generation communication
systems and the field of the Internet of Things (IoT); it is a device that uses the RF
wave of the communication module to detect different types of physical
quantities without using a specific sensor; web site of the project:
http://www.smartilab.ma/smartypark/).
Whenever necessary, a local DSP (digital signal processor) can provide only
interpreted data for the Soldier’s use on a PDA, watch, smart helmet, etc. This
overall system for gathering relevant information about the body at an early stage is
also a subject of prognostic or diagnostic early analysis.
A sensor tag card system can gather data, e.g. combined with acoustic information
(through a microphone), and transmit these data via the PDA to the medic and the
commander. Acoustic sensors (ultrasound sensors) can detect bullet hits and bone
fractures, and can detect noises of breathing, movement, etc. RF sensors can give
data regarding temperature, moisture levels and bacterial contamination. For
monitoring the health condition of a Soldier, heart rate and heart rate variability
(ECG, stress monitoring), internal body temperature, respiration rate and blood
pressure are necessary. Wounds can be covered with smart band-aids, which
monitor the moisture level and the bacterial activity and which release anti-
microbials carried on nanoparticles to kill bacteria.
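As a minimal sketch of the kind of on-body monitoring logic described above, the following Python fragment checks a set of vital-sign readings against alert thresholds; all field names and limit values are illustrative assumptions, not parameters of any fielded Soldier system.

# Minimal sketch of a vital-sign alerting routine for a wearable sensor tag.
# All field names and threshold values are illustrative assumptions.
VITAL_LIMITS = {
    "heart_rate_bpm": (40, 180),
    "body_temp_c": (35.0, 39.5),
    "respiration_rpm": (8, 30),
    "systolic_bp_mmhg": (90, 180),
}

def check_vitals(sample):
    """Return human-readable alerts for readings outside their assumed limits."""
    alerts = []
    for name, (low, high) in VITAL_LIMITS.items():
        value = sample.get(name)
        if value is None:
            continue  # sensor absent or reading dropped
        if value < low or value > high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

# Example: a reading with an elevated heart rate triggers a single alert.
print(check_vitals({"heart_rate_bpm": 192, "body_temp_c": 38.1, "respiration_rpm": 22}))

In a real system the interpreted alerts, rather than the raw data stream, would be what the local DSP forwards to the Soldier's PDA, the medic and the commander.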

4.6.3 Tracking, Tracing and Remote Identification of Soldiers and other Platforms using RFID Tags
An RFID (radio frequency identification) tag system uses radio frequency devices
for identification and tracking purposes. These devices can be produced at the
nanoscale, using nanotechnology and nanomaterials; flexible RFID tags, for
example, can be produced using nanotechnology. An RFID tag system includes the
tag, a read/write device, and a host system application for data collection, processing
and transmission. An RFID tag consists of a chip, some memory and an antenna.
Soldiers may be identified via long-range RFID systems and their positions may be
localized. In the same way, goods and vehicles may be identified and localized
for logistic tracking and tracing. RFID tags can be passive (without a power
source) or semi-passive/active (able to transmit information without
interrogation). They can have an incorporated sensor function and can possibly
possess a radar reflection characteristic, which would be used for positioning and
identification of objects over large distances. RFID tags can
be integrated into the Soldier suit, helmet or boots. Currently, RFID tags are
starting to be used in place of barcodes. Figure 4.6.6 shows a flexible RFID tag.

Figure 4.6.6 A flexible RFID tag using nanotechnology


(Featured: https://www.researchgate.net/figure/Flexible-RFID-Tag-using-
Nanotechnology_fig1_262602489, Source: [7])
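To make the tag / read-write device / host-application division described above concrete, the short Python sketch below models a single tag read event as a data record handed to a host-side collection step; the field names and example values are illustrative assumptions and do not follow any particular RFID standard.

# Minimal sketch of an RFID read event flowing from a reader to the host application.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TagReadEvent:
    tag_id: str            # identifier stored in the tag's memory
    reader_id: str         # which read/write device saw the tag
    rssi_dbm: float        # received signal strength, usable for coarse localization
    sensor_payload: dict = field(default_factory=dict)   # optional on-tag sensor data
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def handle_read(event, inventory):
    """Host-side collection step: keep the latest sighting of each tag."""
    inventory[event.tag_id] = event

inventory = {}
handle_read(TagReadEvent("TAG-0001", "gate-reader-03", -54.0, {"temperature_c": 21.4}), inventory)
print(list(inventory))   # -> ['TAG-0001']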

Current Research and Development Studies of Nanotechnology in Military Applications
Some major research and development studies of nanotechnology in military
applications that are currently conducted are:
 thermal sensors,
 acceleration, motion and position sensors,
 miniature high performance camera systems,
 biochemical sensors,

~ 214 ~
 monitoring sensors (health, condition of equipment and munitions),
 drug/nutrition delivery systems,
 nano-machines (NEMS) to mimic human muscle action (artificial
muscle) in an exoskeleton,
 smart coatings,
 self-healing (self-repair) materials,
 smart skin materials,
 adaptive camouflage and other adaptive structures.

~ 215 ~
Conclusion
Military applications of nanotechnology are one of the primary application areas
of nanoelectronics, together with medical applications of nanotechnology. The
impact of nanotechnology on future combat systems or military platforms is
dependent on the criteria that military commands require for future warfare
operations. The following criteria may be underlined in this respect, within the
scope covering Soldier nanotechnologies:
 highly flexible deployability and mobility (low weight, fast
deployment),
 effective intelligence (acquire and process data from battlefield),
 logistics sustainability,
 survivability and force protection,
 command, control, communication,
 endurance (self-supporting Soldier).
Nanoelectronics is discussed in the previous chapters of this book. It can be seen
that future nanotechnology devices and systems depend largely on developments
in nanomaterials, nanoelectronics and communication technologies. The emerging
technologies and related application areas mentioned in both the Electronics
Applications and Military Applications sections of this book should be carefully
examined and interpreted. We should never forget that nanotechnology can
strongly and positively affect the development of humankind, but only if
scientists’ ethical approach to nanotechnology is kept in check by society as a whole.

~ 216 ~
References
[1] Altmann, Jürgen (2006), Military Nanotechnology: Potential
Applications and Preventive Arms Control, ISBN 0-415-37102-3,
Routledge, Taylor & Francis Group
[2] http://web.mit.edu/isn/ , website of Institute for Soldier
Nanotechnologies (ISN)
[3] Simonis, Frank & Shilthuizen, Steven, Nanotechnology Innovation
Opportunities for Tomorrow’s Defense, Report TNO Science & Industry,
Future Technology Center
[4] Karkare, M. (2010). Nanotechnology : Fundamentals and applications
(2nd repr. ed.). New Delhi: I. K. International Publishing House Pvt.
[5] Tsai, Wen-Chi; Thomas, Stuart R.; Hsu, Cheng-Hung; Huang,
Yu-Chen; Tseng, Jiun-Yi; Wu, Tsung-Ta; Chang, Chia-ho;
Wang, Zhiming M.; Shieh, Jia-Min; Shen, Chang-Hong;
Chueh, Yu-Lun, Flexible high performance hybrid AZO/Ag-
nanowire/AZO sandwich structured transparent conductors for flexible
Cu(In,Ga)Se2 solar cell applications, J. Mater. Chem. A, 2016,
vol. 4, iss. 18, pp. 6980-6988, The Royal Society of Chemistry,
doi 10.1039/C5TA09000H
[6] Wong, W.W.S., Emerging Military Technologies: A Guide to the Issues,
ISBN 9780313396137, 2013, Praeger Publ.
[7] Hamid, Zeeshan & Ramish, Asher. (2014). Counterfeit Drugs
Prevention in Pharmaceutical Industry with RFID: A Framework Based
On Literature Review. International Journal of Medical, Pharmaceutical
Science and Engineering. 8. 196-204.
[8] http://www.understandingnano.com
[9] https://spectrum.ieee.org/nanoclast/semiconductors/optoelectronics/grap
hene-gives-you-infrared-vision-in-a-contact-lens

~ 217 ~
4.7 PACKAGING APPLICATIONS
Gratiela Dana BOCA
bocagratiela@yahoo.com
UNIVERSITATEA TEHNICA DIN CLUJ-NAPOCA

INTRODUCTION
The future of product marketing is in continuous change. A new type of
customer, and new needs and uses for a new generation of products, are the
biggest challenge. An important vision for nanotechnology takes into
consideration food and new ways to protect food under international standards.
Applications in the area of food packaging already exist, and the field is oriented
towards developing and improving product characteristics:
 taste, color, flavor, texture and consistency of foodstuffs, and increased
absorption and bioavailability of nutrients and health supplements;
 new food packaging materials with improved mechanical, barrier and
antimicrobial properties;
 nano-sensors for traceability and monitoring of the condition of food
during transport and storage.

4.7.1 Packaging
The English concept of "packaging" is much broader; it incorporates the
following functions:
1. protection,
2. conservation,
3. ease of use,
4. communication (through graphics, labeling),
5. sales facilitation, which gives more importance to the commercial role of the
packaging.

~ 218 ~
The three directions which provide and define packaging are:
 a coordinated system for preparing goods for transportation, distribution,
storage, retail and consumption;
 a way to ensure distribution to the final consumer at optimum and minimal
cost;
 a technical and economic function to minimize delivery costs.

The directions which provide and define packaging

4.7.2 Nanotechnology and Packaging

The scope of nanostructured materials covers:
✓ long shelf life and product life cycle, by improving barriers, absorbing
compounds and UV absorbers;
✓ hot fill, by improving high temperature performance;
✓ flexible packaging uses in thin films;
✓ functionality: anti-temperature and anti-microbial sensors;
✓ smart tags.

Nanostructured materials are used in food safety nanotechnology for improving
gas barriers, oxygen barriers, food packaging, and films.

~ 219 ~
Packagings can be classified according to several criteria:
By the nature of the material from which they are made | cardboard packaging, glass, metal, plastic materials, wood, textiles and complex materials
By the fabrication system | fixed packs, removable packaging, collapsible packaging
By type of packaging | boxes, bottles, bags, etc.
By field of use | transport packaging, presentation and sales packaging
By the nature of the packaged product | food packaging, packaging for industrial products, packaging for dangerous products
By degree of stiffness | rigid packaging, semi-rigid packaging, flexible packaging
By way of circulation | reusable packaging, non-reusable packaging

How will nanotech be used for food production and processing?
Packaging will maintain consumer protection and avoid uncertainties in
consumer safety and environmental safety.

The relation between consumer safety, food safety, environmental impact and quality (ISO 22000)

~ 220 ~
 Lack of understanding of how to evaluate the potential hazard of
nanomaterials via the oral food route;
 Lack of tools to use to estimate exposure;
 Possibility that the high surface area and active surface chemistry of
some nanomaterials could give rise to unwanted chemical reactions;
 Lack of understanding of the impact of nanomaterials in waste disposal
streams.

4.7.3 Nanotechnology Packaging Design Strategy

Thanks to nanotechnology, tomorrow's food will be designed by shaping
molecules and atoms. Food will be wrapped in smart packaging, also known as
safety packaging, which is able to detect spoilage or harmful contaminants.
A vision regarding packaging design can be realized by following the PDCA
cycle. Using the cycle it is possible to design a strategy and identify the steps for
the implementation of nanotechnology. Step by step, nanotechnology and the
feedback of the packaging impact can be established:
1. Plan - establish objectives and make plans;
2. Do - implement the plans;
3. Check - measure the results;
4. Act - correct and improve the plans and how to put them into practice.

P - TECHNOLOGY: nanocomposites, biocides, antimicrobial packaging, sensors
D - APPLICATION: improved performance, active packaging, indications
C - TECHNOLOGICAL EFFECT: improved food quality and safety, increased communication
A - IMPACT: consumer preferences, sustainability, feasibility

The PDCA Cycle and nano-packaging impact

~ 221 ~
Nano packaging is an extended arm of nanotechnology, a concept sustained by
Anupriya Dobhal (2016) (http://fmtmagazine.in/nano-packaging-
extended-arm-nanotechnology/).
The packaging industry is constantly changing, and modern technologies and
research in the field lead to the emergence of new, revolutionary
products. As everything moves towards a world as close as possible to nature, it
is obvious that packaging is in line with this trend.
Researchers have created truly innovative products, with packaging made of
recyclable or biodegradable materials now a constant, daily presence in the
market. The new types of packaging are especially targeted at the food industry.
Because of the cost of these packages they are accessible only to a small segment
of the population, and so far they have reached supermarkets only abroad.
Nanotechnology packaging will occupy an important place in the future of
packaging, as a sophisticated system of material components and science
processes.

Nanotechnology pyramid system structure: nano packaging supported by nano
systems and nano science, built on system components, materials at the nano
scale, and nanostructures and processes

Food packaging has been considered as a potential recipient of nanotechnology.


Nanotechnology offers tremendous opportunities for innovative developments in
food packaging, which can benefit both consumers and industry.

~ 222 ~
The application of nanotechnology shows considerable advantages in improving
the properties of packaging materials.
Nanotechnology offers three distinct advantages to food packaging:
1. barrier resistance;
2. incorporation of active components to provide functional
performance;
3. sensing of relevant information.
Nanotechnology applications for food packaging offer a number of benefits:
a. innovative, improved, intelligent packaging concepts;
b. they may enhance food safety and hygiene in the supply chain;
c. they reduce food waste by extending the shelf life of food products;
d. they improve the poor performance of biopolymers.

4.7.4 Packages of the future

a. Edible packaging
A new type of edible packaging was created by the Harvard researcher David
Edwards. Named WikiCell, the new packaging is edible and consists of two
layers that are similar to the skin of a fruit.
The first layer, which is totally edible, resembles the skin of a grape, and
the second layer is tougher, like an orange peel. The second layer may or may
not be edible, but it is biodegradable.
Ice cream is the first product to be released in edible packaging, but the
researcher has launched several food packaging options. Specifically, the
packaging is made of a thin film of natural food particles that are held together by
nutrient ions. By creating these packages, it is hoped to gradually eliminate
containers and plastic packaging in the food industry.
Another type of packaging is edible glasses for different types of fruit. Fruit
juice can be packed in a bag of peel with the specific fruit flavor.
~ 223 ~
Edible packaging (This picture is featured on Google images, taken from
https://www.finedininglovers.com/photo/cool-stuff/food-pack-wikicells/wikicells-edible-
glasses/)

In Brazil, another type of edible packaging is already marketed. Edible paper has
been created and is marketed by a chain of fast food restaurants that uses it to
wrap its burgers.

Edible packaging ( This picture is featured on Google images, taken from


http://www.craiovacenter.com/Poze3/Bobs.jpg)

So WikiCell points to a future without plastic bags or PET bottles thrown into
forests or on the roadside, a sight that today is distressing.

~ 224 ~
b. Packaging that changes color
In Brazil, researchers at the University of Sao Paulo have created a technology
that allows the packaging to change its color when it comes into contact with an
expired product.
This package contains in its composition a plant-juice pigment called
anthocyanin. Embedded in the packaging, the pigment has the ability to detect the
point at which a food changes its pH, in other words when it is no longer good for
consumption. Thus, the change in the color of the packaging will draw attention to
the fact that the food has exceeded its shelf life. Because the pigment used is
natural, the anthocyanin pack can be used for any type of food. If the new
detection system is implemented, it will be able to combat the premature discarding
of food.

Packaging that changes color when milk expires This picture is featured on
Google images, taken from :http://www.epresa.md/wp-
content/uploads/2012/05/10-604x330.jpg)

Additionally, the consumer can always be sure whether milk that has been in the
refrigerator for a week can still be safely consumed.

~ 225 ~
Although the idea is innovative and would be helpful to everyone, a series of
tests still need to be done to ensure that technology delivers the right results and
that it can accurately detect damaged food. Bu fikrin yenilikçi yaklaşımına ve
herkes için sağlayacağı yardıma rağmen, teknolojinin doğru sonuçlar verdiğini
ve doğrulukla bozulmuş gıdayı tespit edilebileceğine dair yapılması gereken
çalışmalara ihtiyaç bulunmaktadır.

c. Smart packaging and food tracking

Thorat (2016) considers smart packaging very important for food safety
management and as a tool at the disposal of the product manufacturer to make its
product stand out on the shelf. Beyond shelf appeal, packaging also provides
protection and containment for the contents. Nano packaging systems will help
dramatically extend the shelf life of packaged food and develop a new
generation of green, eco-friendly products. Nanotechnology has shown many advantages
in different fields.
The uses of nanotechnology have progressed, and it has been found to be a
promising technology for the food packaging industry in the global market.
Teixeira (2016) and Tuan Ngo (2011), in Tuan's project, develop a business plan
around the commercialization of an engineered nanomaterial to enhance an
existing consumer product, and describe the materials selection process.
Smart packaging can be classified into the following types: passive, active,
intelligent and smart.
Passive packaging: refers to traditional packaging that involves the use of a
covering material characterized by some inherent insulating, protective or ease-
of-handling qualities.
Active packaging: entails the concept of the package reacting to various stimuli
to keep the internal environment favorable for the products.
Intelligent packaging: refers to the concept of making innovations in the design
of packaging that render it more useful for the consumer (for example packaging for
automobile oil, where the package structure makes it convenient for the user without
getting his hands dirty).
Smart packaging: refers to packaging that is made much more functional and
useful; it involves the use of technology that adds value.
Smart packaging that contains nano-sensors and anti-microbial activators is
being engineered to be capable of detecting food spoilage and releasing nano-
anti-microbes, which will extend food shelf life. By doing this it will enable
supermarkets to keep food for even greater periods of time before its sale date.
Food tracking devices, such as nano-sensors embedded into food products as
tiny chips that are invisible to the human eye, would also act as electronic
barcodes. These sensors would emit a signal that would allow food, including
fresh food, to be tracked from paddock to factory to supermarket and beyond.

Edible packaging smart food (This picture is featured on Google images, taken
from http://www.openpr.com/news/470952/Edible-Packaging-Market-Demand-from-
Food-and-Beverage-Manufacturing-to-Impel-Market-s-Growth.html)

Smart foods are design to interact with the consumers so they can personalize
their food, by changing color, flavor, and nutrients on demand (by using a
microwave consumer would be able to trigger the release of the color, flavor,
concentration and texture of the individual‟s choice). The technique of
nanoencapsulation, or creating nanocapsules, involves coating a nanoparticle so
that its contents are released in a controlled way. Nanoparticle based intelligent
inks or reactive nanolayers provide analyte recognition at nanoscale. Printed
labels that can indicate: temperature, time, pathogen, freshness, umidity and
integrity.
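As a hedged illustration of what "released in a controlled way" can mean quantitatively, one commonly used empirical description of release from a capsule is a first-order law,

\frac{M_t}{M_\infty} = 1 - e^{-kt}

where M_t is the amount released by time t, M_\infty is the total encapsulated amount and k is a rate constant set by the coating; the actual kinetics of a given nanocapsule depend on its material and geometry and may follow other models.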

~ 227 ~
4.7.5 Application of Nano-Materials in Packaging
Following Qasim Chaudhry (2008) and Bradley et al. (2010), we can identify the
following nanomaterials and their applications, taking into consideration their
characteristics:
1) Polymer nanocomposites;
2) Nano coatings - incorporating;
3) Surface biocides;
4) Active packaging;
5) Intelligent packaging;
6) Bio-plastics.
By incorporating nanomaterials into packaging polymers (PET, PVC, nylons) it is
possible to improve physical performance, durability, barrier properties and
biodegradation, as well as some of the polymer properties:
 strength and stiffness;
 barrier to oxygen and moisture;
 barrier to migration or gas diffusion;
 resistance to food components;
 permeability;
 flexibility.
Derek Lam (2010) carried out research on the application of nanotechnology in
the packaging field and designed a way of selecting materials used for food safety.

The selection pathway runs from basic materials to food safety:

MATERIALS | PROCESSING | PRODUCT | PRODUCT SAFETY
Nanoparticles | Mass transfer | Controlled delivery | Nano sensors
Nano-emulsions | Reaction engineering | Formulation | Nanotracers
Nano composites | Biotechnology | Packaging |
Nanostructured materials | Molecular synthesis | |

Source: Adaptation after


https://www.linkedin.com/pulse/nanotechnology-redefining-beverage-
packaging-industry-food-marketing

Nanotech in food packaging redefines and redesigns beverage packaging through:

 1. Contamination sensors;
 2. Antimicrobial packaging;
 3. Improved food storage;
 4. Enhanced nutrient delivery;
 5. Green packaging;
 6. Pesticide reduction;
 7. Tracking, tracing, brand protection;
 8. Texture enhancers;
 9. Flavor enhancers;
 10. Bacteria identification and elimination.

~ 229 ~
References
Derek Lam (2010), Packaging Applications Using Nanotechnology, San Jose
State University, 2/10/10, PDF retrieved 12 August 2017
Elizabeth H. Bradley, Mary L. Fennell, Sarah Wood Pallas, Peter Berman, Stephen
M. Shortell, Leslie Curry (2010), Health Services Research and Global Health,
Health Research and Educational Trust,
http://onlinelibrary.wiley.com/doi/10.1111/j.1475-6773.2011.01349.x/abstract
Teixeira V., Opportunities and Challenges in Nanotechnology-based Food
Packaging Industry, https://www.slideshare.net/teixeiravasco/opportunities-and-challenges-in-nanotechnologybased-food-packaging-industry
Tuan Ngo, Walter Voit (2011), MECH 4360 - Introduction to Nanostructured
Materials, http://voitlab.com/courses/thermodynamics/index.php?title=Tuan's
http://www.epresa.md/wp-content/uploads/2012/05/10-604x330.jpg
http://fmtmagazine.in/nano-packaging-extended-arm-nanotechnology/
http://docplayer.net/48132592-Nanotechnology-applications-for-food-and-food-packaging-nanotechnologies-in-food-packaging-what-is-nanotechnology.html
http://docplayer.net/30907572-Nanotechnology-applications-for-food-packaging.html
https://docslide.us/documents/evolution-of-packaging.html
https://www.linkedin.com/pulse/nanotechnology-redefining-beverage-packaging-industry-food-marketing
https://www.linkedin.com/pulse/emerging-issues-food-processing-technology-trend-vivekanand
https://storify.com/anurimamondal/nano-packaging-an-extended-arm-of-nanotechnologyin
http://www.fnbnews.com/Top-News/advances-smart-packaging-in-food-safety-management-38457
http://emergingtech.foe.org.au/198/
http://nanotechinnove.blogspot.ro/
http://agrariangrrl.blogspot.ro/2009/01/mars-inc-and-seeds-of-change.html
http://www.openpr.com/news/470952/Edible-Packaging-Market-Demand-from-Food-and-Beverage-Manufacturing-to-Impel-Market-s-Growth.html

~ 231 ~
~ 232 ~
SECTION 5
INTERNATIONAL NORMS
and REGULATIONS

~ 233 ~
~ 234 ~
5.1 INTERNATIONAL NORMS AND REGULATIONS

Gratiela Dana BOCA


bocagratiela@yahoo.com
UNIVERSITATEA TEHNICA DIN CLUJ-NAPOCA

INTRODUCTION
Nanotechnology has the ability to transform many industries, from medicine to
industrial processes, including the products they make. Nanomaterials can be
found in hundreds of products, ranging from cosmetics and clothing to industrial
and biomedical applications. The potential benefits of nanotechnology are enormous,
and these benefits must be perceived by society. There is a continuing concern
that the full potential benefit for society may not be realized unless research
efforts are undertaken to support the management and control of the potential safety
and occupational health threats related to the handling of nanomaterials. There are
still many gaps, worldwide, in how to work safely with all these materials and in
providing solutions that will prevent work-related illness and injury.

5.1.1 Why do we need to develop Standards?

What is a standard? According to the International Standards Organisation (ISO),
a standard is a document that provides requirements, specifications, guidelines
or characteristics that can be used consistently to ensure that materials, products,
processes and services are fit for their purpose.
Based on the ISO definition, nanotechnology is the application of scientific
knowledge to manipulate and control matter in the nanoscale in order to make
use of size- and structure-dependent properties and phenomena, as distinct from
those associated with individual atoms or molecules or with bulk materials
(http://docplayer.net/51034499-Nanotechnology-standards-development.htm).
Types of Nanotechnology Standards Developed
1. ISO Standards
2. Technical reports (TR) are issued when a technical committee or
subcommittee has collected data of a different kind from that normally
published as an International Standard, such as references and explanations.
3. Technical specifications (TS) may be produced when "the subject in
question is still under development or where for any other reason there
is the future but not immediate possibility of an agreement to publish an
International Standard".
Azmi Haji Idris (2014) presents the importance of nanotechnology standards
development, and also their harmonization, with the expertise of SIRIM Berhad
(formerly known as the Scientific and Industrial Research Institute of Malaysia),
which sustains that standards are required by industry, government and
consumers to:

 facilitate domestic and international trade;
 enhance industrial efficiency and technological development;
 enforce regulations for public safety, health and environmental protection;
 enforce regulation and the prevention of deceptive practices.

Standards for future nanotechnology development.
Adaptation after http://docplayer.net/51034499-Nanotechnology-standards-development.html

5.1.2 What are the regulations for nanotechnologies?


Nanomaterials are treated like any other chemical: a substance that has to
comply with a set of regulations in order to be used in consumer goods and industrial
processes. On the current market, different organizations call for specific regulation
of nanomaterials, because nanomaterials have special properties that require
special attention.
There are no specific regulations on nanomaterials but, after revisions and
estimates, they are well controlled by current regulations
(https://www.noexperiencenecessarybook.com/8DL1o/observatorynano-report-
wp6-regulationstandards-pdf.html).
At this stage, there is no law requiring the inclusion of a specification of the
content of nanomaterials on the product label, except for cosmetics and food,
which should mention this in the ingredient list.
Nanotechnologies at this moment are covered by current legislation, such as the
REACH regulation (Registration, Evaluation, Authorization and Restriction of
Chemical Substances), the European Community regulation relating to chemicals
and their use, which also covers environmental security, knowledge of
nanomaterial characterization and risk exposure.

5.1.3 ISO/TC 229 on Nanotechnologies


It is obvious, especially now, that standardization needs to be provided; as a
matter of fact, ISO has created Technical Committee ISO/TC 229.
Standards need to be improved in the future, and specific international
legislation and collaboration are imperative. The information has been brought
together to help governments and manufacturers obtain information and take
decisions about the economic potential of nanotechnologies.
The Committee brings together the metrology and science communities to
discuss the challenges of nanomaterial measurement, thus validating the
fundamental requirements.
The effects are visible: published standards ensure a smooth transition from
laboratory to market, which facilitates progress along the value chain of
nanotechnologies and world trade.
Life becomes easier with the elaboration of ISO/TR 18401, a simple-language
guide on nanotechnologies currently under preparation, which will allow
those who are not initiated in the field to acquire a practical understanding of
the use and application of nanotechnologies.
(https://www.noexperiencenecessarybook.com/WzKda/nanotechnology-for-
food-applications-current-status-and-consumer-safety-concerns.html)
A huge task awaits ISO/TC 229, which will have to cover new advances in
nanotechnologies, in the medical field and in the wide range of applications of
2D materials and graphene.
~ 237 ~
5.1.4 ISO/TC 229 on Nanotechnologies Objectives
Nanotechnology standards development falls under the current activities of
ISO/TC 229 (the Technical Committee on Nanotechnology), which has the role:
1. To define and develop an unambiguous and uniform terminology and
nomenclature for nanotechnologies;
2. To facilitate communication and promote common understanding;
3. To develop standards for measurement, characterisation and test
methods for nanotechnologies taking into consideration needs for
metrology and reference materials;
4. To develop science-based standards in the areas of health, safety and
environmental aspects of nanotechnologies.
By using standards and relevant norms and regulations, nanotechnology
objectives will:
 Support the sustainable and responsible development and global
dissemination of these emerging technologies;
 Facilitate global trade in nanotechnologies, nanotechnology products
and nanotechnology enabled systems and products;
 Improve quality, safety, security, consumer and environmental
protection, together with the rational use of the natural resources in the
context of nanotechnologies;
 Promote good practice in the production, use and disposal of
nanomaterials, nanotechnology products and nanotechnology enabled
systems and products.
From Robin Williams' (2013) point of view, management reviews are also used
to identify and assess opportunities to change an organization's policy and
procedures, to address resource needs, and to look for opportunities to improve
its products or services.
Nanomaterials also need to be understood in terms of the predictability and
management of potential health risks to workers. Further standards and norms
are necessary for food applications of nanotechnology, to identify and prepare the
right answers regarding common standards and rules to be followed in such an
important field. Only certain nanomaterials are potentially dangerous, but the
absence of systematic studies and regulations (standards) creates a goal for the
development of the field.

~ 238 ~
In opposition, there is the anti-nano public campaign, and we have to mention here
the analogous cases of nuclear power plants and genetically modified organisms.
Change management in organizations, because of the new nano trend, needs
investments in technologies and products. In the absence of clear and specific
standardization between consumer and product along the life cycle, starting from
the design stage, norms and regulations are very important.
The next generation of products is likely to become available following these norms:

NANOMATERIALS

CONSUMER
 Concerns over consumer safety;
 Consumer information / involvement;
 Consumer information, involvement and education are a must for the
success of nanofoods.

PRODUCT
 New tastes, flavours, textures, greater nutritional value, longer shelf life,
better traceability and safety, less salt, sugar and fat;
 Maintenance of quality and freshness;
 Potential benefits for industry and the consumer.

NORMS
 basic research into the potential health effects of nanofoods;
 vigilant self-regulation / best practice by the industry.

Adapted after: http://docplayer.net/51034499-Nanotechnology-standards-development.html
In Europe, the discussion about the importance of nanotechnology standards,
norms and regulations has only just begun. From that point of view, engineering
ethics needs to be defined before the commercial use of nanotechnology. It is
important to remember that nanotechnology can be used in a positive way.
The use of nanotechnology helps and improves products, but their safety for human
health and the environment has not been well understood.

5.1.5 Norms and Regulations Related to Nanotechnology
V. D. Shah et al. (2015) expressed worries about possible long-term effects
associated with medical applications and with nanomaterials that would be
biodegradable.
Analogies were made with plastics, which have proved to have accompanying
adverse effects on individuals and the environment. Nanomaterials
incorporated into manufactured fabrics may get washed out and contaminate
the environment. Another aspect is the health impact of nanotechnology: the
possible effects that nanotechnological materials and devices will have on human
health need to follow some norms and rules.
Regulation is essential, but it is also difficult because nanoparticles behave
differently in different products.
The problem is related to the use of nanoparticles, or to their appearance due to the
use of technological materials or processes. Certain types of nanoparticles are
currently being studied in medicine to be used for the early detection and treatment
of diseases, but nanoparticles can also be dangerous for the body. They have
been in cosmetics for over two decades, and also in some paints. The issue is not
only about nanotechnologies, but about extremely different materials and products
and their whole life cycle.
By comparison, nanoelectronics, which uses nanoscale structures in current
circuit and electronic system manufacturing techniques, does not lead to
dangerous products. Risks related to the use of nanotechnologies and of products
containing nanoparticles cannot be avoided.
Some of the remaining concerns are about:

~ 240 ~
NANOTECHNOLOGY CONCERNS

ENVIRONMENT
 Nanomaterials may pose significant health, safety and
environmental hazards;

HEALTH
 The health effects of many nanomaterials are either unclear or unknown;

INFORMATION
 No government oversight and no labeling requirement for nano products;
 The public is not well informed about the potential risks of nano-products;

NORMS AND REGULATIONS
 No nano-specific regulations available;
 No standard test methods for measuring human exposure to nanoparticles;

PRODUCT
 Ineffective or non-existent methodologies to conduct risk assessments,
toxicological assessments and life cycle analysis of products containing
nanomaterials;
 Traditional methods of detecting, analysing and measuring micron-sized
materials are ineffective in the measurement of nanoparticles.

Adapted after: http://docplayer.net/51034499-Nanotechnology-standards-development.html

~ 241 ~
5.1.6 Nanotechnology Norms: Needs and Issues
• Nanomaterials may pose significant health, safety and environmental hazards;
• No government oversight and no labelling requirement for nano-products;
• Need for guidance on the labelling of manufactured nano-objects and of products containing manufactured nano-objects;
• No nano-specific regulations available;
• The public is not well informed about the potential risks of nano-products;
• The health effects of many nanomaterials are unclear or unknown;
• Ineffective or non-existent methodologies to conduct risk assessments, toxicological assessments and life cycle analysis of products containing nanomaterials;
• No standard test methods for measuring human exposure to nanoparticles;
• Traditional methods for detecting, analysing and measuring micron-sized materials are ineffective for the measurement of nanoparticles.
Developments in Nanotechnologies Regulations and Standards (2009) (https://www.noexperiencenecessarybook.com/8DL1o/observatorynano-report-wp6-regulationstandards-pdf.htm) also identifies some of the leading nanotechnology standards-setting organizations (http://docplayer.net/51034499-Nanotechnology-standards-development.html):
• International Organization for Standardization Technical Committee ISO/TC 229 on Nanotechnologies;
• ASTM International (formerly the American Society for Testing and Materials), Committee E56 (Nanotechnology);
• International Electrotechnical Commission Technical Committee IEC/TC 113 (Nanotechnology Standardization for Electrical and Electronic Products and Systems);
• Organisation for Economic Co-operation and Development (OECD) Working Party on Manufactured Nanomaterials (WPMN), which coordinates and collaborates on approaches for better understanding the environmental, health and safety impacts and benefits of nanotechnology.
~ 242 ~
What are the risks?
Risk assessment and life cycle assessment are needed, together with other areas such as the development of methodologies, modelling approaches, and materials and methods to enhance nano-safety. The implications of nanotechnology in our life can be positive (benefits) or negative (risks):
Nanotechnology implications: risks versus benefits across safety, environment and health (adapted after http://docplayer.net/51034499-Nanotechnology-standards-development.html)
What are the Implications of Nanotechnology?
Nanotechnology has the potential to deliver important health, safety and environmental benefits, such as:

NANOTECHNOLOGY BENEFITS

Environment
• self-repairing materials, able to adapt to provide protection and to reduce energy consumption and pollution;
• lower greenhouse gas emissions;
• remediation of environmental damage.

Safety
• offering new safety solutions;
• enhanced materials that are stronger.

Health
• curing, managing and preventing diseases.

Nanotechnology benefits. Source: adapted after http://docplayer.net/51034499-Nanotechnology-standards-development.html
~ 243 ~
Is nanotechnology a health risk?
As nanotechnology is an emerging field, there is great debate regarding the extent to which nanotechnology will benefit or pose risks to human health. Nanotechnology's health impact can be split into two aspects:
1. the potential for nanotechnological innovations to have medical applications to cure disease,
2. the potential health hazards posed by exposure to nanomaterials.
Amin and Shah (2015) specify that nanotechnology has direct beneficial applications for medicine and the environment, but like all technologies it may have unintended effects that can adversely impact the environment, both within the human body and within the natural ecosystem. While taking advantage of this new technology for health and sustainability benefits, science needs to examine its health implications. The same properties of nanoparticles that make them so appealing to manufacturers may also have negative effects on the environment and human health. The common desire is for this technology to progress while ensuring that workers and consumers are not exposed to risk.
Robin William (2013) maintains that clear principles and objectives drive continuous improvement of nanotechnology and nanomaterials practice. Whilst the implementation of a HR Management System Standard (HRMSS) is about policies, procedures and systems, the people context must not be forgotten. It is therefore important to hold to the following principles:
1. Ethical conduct: the foundation of professionalism;
2. Trust, integrity, confidentiality and discretion are essential to HRM (Human Resources Management);
3. Fair presentation: the obligation to report truthfully and accurately;
4. Due professional care: the application of diligence and judgment in
HRM;
5. Risk-based approach.

Consumer Health Concerns
The properties of nanoparticles may differ from those of conventional forms of the same materials; growing scientific evidence indicates that:
– free nanoparticles can cross cellular barriers and may reach targets in the body that their larger equivalents could not have reached;
– exposure to some nanoparticles can increase the production of oxyradicals, which may lead to oxidative damage and inflammatory reactions.
Routes of exposure to nanoparticles (inhalation, ingestion, skin application and others) and their possible targets (cells, tissues). Nanotechnology and health concerns (adapted after http://docplayer.net/51034499-Nanotechnology-standards-development.html)
If we take into consideration the positive effects (benefits) on health claimed by the food industry while promoting nanofood products, we can mention that:
• there are clear advantages in the use of nanotechnology over other available technologies;
• the benefits outweigh any risks, and the risks are acceptable; there is a need for an industry body to assure product quality;
• research should be promoted to fill knowledge gaps, assess risks and benefits, and ensure regulatory compliance;
• case-by-case assessment is needed to segregate products into risk categories;
• consumer information, involvement and education are needed with regard to benefits as well as possible risks, with possible voluntary labelling.
Is nanotechnology a risk in new nano-textile materials?
Risk assessment of new nanomaterials in applications such as textiles is also important and needs to be carried out. Bihola et al. (2015) point out that nanoscale features may be built into fibres and textiles in different ways: production of fibres with diameters of nanoscale dimensions (these fibres are described as nanofibres), incorporation of nanomaterials into fibres to produce nanocomposite fibres, and coating of fibres with films or related structures (https://www.researchgate.net/publication/289916299_Adverse_Health_Implications_Of_Nanotechnolgy_Textile_Applications).
~ 245 ~
Is nanotechnology a risk to the environment?
All products become waste at the end of their life. Dinsa Sachan (2011) also considered nanotech a potential mega-hazard. Could these residues interfere with animals and plants and cause harmful effects? Scientists are also analysing whether there are safety concerns about washing garments containing nanomaterials.
Nanoparticles are not a novelty, and even though we should do more research on their safety, we should try not to put too many obstacles in the way of the development of this research area (https://www.elsevier.com/connect/uncovering-health-and-environmental-risks-of-nanomaterials).
Is worker exposure to nanomaterials a risk?
The recommendations are based on the technologies currently applied in various industries using nanomaterials and on control methods that have been shown to be effective in reducing exposure in workplaces in other types of industries. The recommendations are contained in a new document entitled "Current Strategies for Industrial Control in Nanomaterials Production and Downstream Handling Processes".
Technological controls are preferred to administrative controls and protective equipment for reducing worker exposure, as they are designed to remove the hazard at the source before it reaches the worker. However, evidence demonstrating the efficiency of such controls during the manufacturing process and the downstream use of engineered nanomaterials in specific applications has been scarce.
The consumer product market currently has over 1,000 products containing nanomaterials, including make-up products, sunscreens, food, appliances, clothing, electronics, computers, sports equipment and coatings (with different purposes).
As more and more products containing nanomaterials are introduced to the market, it is essential that manufacturers and users of engineered nanomaterials provide:
• a safe and healthy work environment, for which process controls are recommended;
• well-described operations and refining processes;
• small-scale weighing;
• maintenance activities.
~ 246 ~
References
Azmi Haji Idris (2014) Nanotechnology Standards Development, National Workshop on Nanosafety and Regulatory Aspects of Nanotechnology, 29–30 October 2014.
D. V. Bihola, H. N. Amin, V. D. Shah (2015) Application of Nano Material to Enhance Acoustic Properties, International Journal of Engineering Science and Futuristic Technology (IJESFT), Volume 1, Issue 12, December 2015, pp. 001-009.
Li et al. (2003) Ultrafine particulate pollutants induce oxidative stress and mitochondrial damage, Environmental Health Perspectives 111(4): 455-460.
V. D. Shah (2016) Adverse Health Implications Of Nanotechnolgy Textile Applications. Available from: https://www.researchgate.net/publication/289916299_Adverse_Health_Implications_Of_Nanotechnolgy_Textile_Applications [accessed Aug 15, 2017]
http://docplayer.net/51034499-Nanotechnology-standards-development.html
http://nanopinion.archiv.zsi.at/en/about-nano/what-it-about.html
http://www.downtoearth.org.in/news/nanotechs-mega-hazard-34108
http://www.petrosains.com.my/pusat2008/nanotech1.html#nanotechnology
https://www.elsevier.com/connect/uncovering-health-and-environmental-risks-of-nanomaterials
https://www.researchgate.net/publication/235751079_Canadian_Standards_Association_CSA_Z12885-12_Nanotechnologies_Exposure_Control_Program_for_Engineered_Nanomaterials_in_Occupational_Settings
https://www.noexperiencenecessarybook.com/8DL1o/observatorynano-report-wp6-regulationstandards-pdf.html
https://www.noexperiencenecessarybook.com/WzKda/nanotechnology-for-food-applications-current-status-and-consumer-safety-concerns.html
https://hrtoday.me/2013/07/17/hr-standards-for-south-africa-creating-an-integrated-approach-to-governance-risk-and-compliance-dr-michael-robbins/
~ 247 ~
~ 248 ~
SECTION 6
NANOTECHNOLOGY and
INNOVATION

~ 249 ~
~ 250 ~
6.1 INNOVATION in NANOTECHNOLOGY

Massimo BERSANI
bersani@fbk.eu
FONDAZIONE BRUNO KESSLER

INTRODUCTION
Innovation is the story of the human race [1]. It is the basis of the industrial revolutions and it characterizes our development.
The first extensive analysis of innovation was performed by Joseph Schumpeter in his book "Theory of Economic Development" (1912) [2]. Since that date hundreds of books and thousands of papers have been published on innovation; nevertheless, it remains a mystery. Today we understand the origin of the Universe or the fundamentals of quantum matter better than innovation, for a simple and trivial reason: innovation is not a science but a complex and mutable human activity.
Many definitions of innovation are available; here we report only two of the more classic ones:
"Innovation: introduction of new or significantly improved products (goods or services), processes, organizational methods, and marketing methods in internal business practices or in the open marketplace. R&D and other intangible investments such as investments in software, higher education, and worker training are key inputs driving innovation" [3].
"Technological paradigm: a 'model' and 'pattern' of solution of selected technological problems, based on selected principles derived from natural sciences and on selected material technologies. Technological trajectory: the pattern of 'normal' problem-solving activity (i.e. of progress) on the ground of a technological paradigm" [4].
One main point that is important to stress is that Research & Development and its results, such as scientific models and technological inventions, are not themselves innovation. Innovation is a different and complex process that cannot be conflated with R&D activity. The use of 'innovation' as a synonym for, or an extension of, research activity is completely wrong and misleading.
'Research & Innovation' is today used as a single term, denoting a unique and continuous process. This conviction comes from the information technology field, where innovation and time to market are very rapid and in many cases technology assessment can overlap with the innovation process. Unfortunately, in many cases, and especially for bold technologies, this is far from true. Table 6.1.1 reports the main characteristics that distinguish the research process from the innovation process.
Table 6.1.1 Comparison between Research and Innovation characteristics

Research characteristics | Innovation characteristics
A creation process | A creative/disruptive process
Well-defined actors, training paths and working fields | Actors with many different competences and skills
Linear and simple process | Non-linear process with many driving forces
Even in case of failure a useful result is obtained | A failure is a total defeat
One kind of research | Many kinds of innovation
The destination of the products is always, to a first approximation, within the research field | The final target is society/the market
Uses money to produce knowledge | Uses knowledge to produce money
A final general remark also has to be made on the results of innovation. Arriving on the market with your final product is not a guarantee that you have achieved an innovation. Only if your product or process has a positive impact on the market or on society can the innovation be considered accomplished.
Concerning innovation in nanotechnology we face, if possible, an even more complicated and peculiar case. The main proposition, which represents the thesis of this contribution, is: after more than 20 years of intense activity, nanotechnology is still a promising field, and the gap between research and its impact on society and the market is increasing.
So, despite the impressive forecasts and the many reported research results, the impact of nanotechnology on our life and economy is important but not as disruptive as was predicted at the beginning of the 2000s.
From the research point of view, public funds have reached a plateau globally evaluated at around 8 billion dollars, a value that has been more or less constant in recent years. The growth of research in nanomaterials and nanotechnologies has nevertheless been impressive: Table 6.1.2 reports the USA annual public funds from 1997 [5].
Table 6.1.2 US government funds invested in nanoscience and nanotechnology over time

Year | US government funds
1997 | $116 million
2001 | $464 million
2005 | $1081 million
2007 | $1.4 billion
2010 | $2.2 billion
2012 | $2.2 billion
2016 | $1.5 billion

On globally level the main historical country that supported the nanotechnology
were USA and Europe, in the last 5 year also Asia invested a lot on
nanotechnology in particular by the action of China and South Korea. The total
amount yearly invested is important but for comparison the total cost of the
Apollo space Program from 1961 to 1973 was about $25.4 billion dollars. In
today's dollars, it would be over $10 billion per year only by US government.
The public funds had a crucial impact on scientific outputs. Publications,
researchers involvement, inventions are increased in exponential way. For
examples Patens on nanotechnology globally published in the 2012 were over
14.000 [6].
On the other side the improvement of nanotechnology related revenues was not
so impressive and in particular considerably lower than expected. In Table 6.1.3
are reported the total worldwide sales revenues related to nanotechnology [7].

~ 253 ~
Table 6.1.3 Revenue values related to "nanoproducts"

Year | Revenues | CAGR
2005 | 9.4 B$ | 5.4%
2009 | 11.6 B$ | 17.8%
2013 | 22.3 B$ | 20.7%
2016 | 39.2 B$ | 17.9%
2019 | 64.2 B$ | 18.7%
2021 | 90.5 B$ | 18.2%
As can be observed, the Compound Annual Growth Rate (CAGR) is significant but fairly steady and in line with the values expected for a mature and stabilized market. In any case, the annual revenues are far from the value of 1 trillion euro forecasted in 2000 for 2015 [8].
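For readers unfamiliar with the metric, the CAGR values in Table 6.1.3 follow the standard compound-growth definition; the formula below is a general illustration and is not taken from the cited market reports:

\[ \mathrm{CAGR} = \left( \frac{V_{\mathrm{end}}}{V_{\mathrm{start}}} \right)^{1/n} - 1 \]

where $V_{\mathrm{start}}$ and $V_{\mathrm{end}}$ are the revenues at the beginning and at the end of the period and $n$ is the number of years. As a purely illustrative example (hypothetical figures), revenues growing from 10 B$ to 20 B$ over 4 years correspond to $\mathrm{CAGR} = (20/10)^{1/4} - 1 \approx 0.189$, i.e. about 18.9% per year.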
So a lot of scientific and technological results have been achieved, but only a limited part of them have reached the market and produced an innovation impact. The reasons for this are mainly due to the characteristics of nanotechnology innovation:
1) First of all, there is no single "emerging nanotechnology market" comprised of "nanotechnology companies" selling "nanotechnology products"; rather, there is a nanotechnology value chain. In fact, nanotechnology has an impact on several really different application and market fields, each one with peculiar characteristics and different driving forces. Consequently, different business models are required and applied [9].
Some markets where nanotechnology has had an important impact are, for example:
• Cars
• Clothing
• Airplanes
• Computers
• Consumer electronic devices
• Pharmaceuticals
• Plastic containers
• Appliances
• Medical devices
Figure 6.1.1 reports a schematic representation of the nanotechnology value chain. The nanomaterials that constitute the raw supply of the chain account for 90% of the total revenue amount. Indeed, in most cases the nanotechnology contribution is far from the final market applications.

Figure 6.1.1 Nanomaterial value chain [7, 9]
2) Not all nanotechnology is new. Emerging nanotechnology is developing against a backdrop of established nanotechnology [9]. In fact, some nanotechnology products were previously present in applications such as coatings and DRAM.
3) Not all "nano" has the potential for huge profit margins. Many products incorporating nanotechnology will be only marginally profitable [7]. For example, a plastic bottle can incorporate several nanotechnology solutions, such as coating barriers, food quality sensors, RFID markers and UV-blocking films. These solutions greatly improve the quality of the plastic bottle but can have only a marginal impact on its cost.
4) Nanotechnology is based on multidisciplinary and interdisciplinary science and technology; this aspect introduces further potential barriers to communication and to the relationship between the R&D field and innovation processes.
5) Lack of metrology standards. The metrology related to nanotechnology is not fully developed and there are many difficulties: a limited number of analytical techniques able to give chemical and morphological information at the nano-level at the same time; difficulty in developing suitable and reproducible analytical methodologies; difficulty in producing a wide group of reference materials.
6) Nanotechnology is a bold science that requires big money, with growing risk from the financial point of view. In general, bold sciences require a longer technology-development time than information technology in order to get a technology ready to approach the market. Moreover, development and the infrastructure for initial production are really expensive and require a high initial investment. This is a specific barrier for disruptive innovation, where the market is not defined and it is difficult to forecast the economic break-even point.
7) In many application cases the innovation trajectory of existing technologies is not over. The related products have not faded away, and further development is possible without a radical change of the current technology. Moreover, in this situation a new nanotechnology innovation must not increase the production cost; in fact the net sales margin is a main parameter in deciding on the investment.
8) Safety and environmental risks, ethical issues. Nanotechnology might have negative effects on people and the environment. There is a real risk of facing in the future an 'asbestos effect', in which an introduced nanomaterial reveals a really negative impact on human life. In the best case, nanomaterials will be more dangerous than ordinary materials only in a limited group of cases; in the worst case, several nanomaterials will have a negative impact that is very difficult to control. Moreover, the risk perceived by the public is growing, and there is the possibility that in the future 'nanotech' comes to be seen as a synonym for danger, inducing negative market effects. Finally, the regulation of nanomaterials is not completely developed and assessed, and in the future regulation could slow or block nanotech innovation. Ethical issues can also be focused on the possible dual use of nanotech. In the period 2018-2020 the main H2020 NMBP funds will be focused on safety and environmental risks.
9) Until now an effect of over-patenting has characterized nanotechnology. Several patent applications are characterized by overly broad granted claims, overlapping patents, and a lack of invention specifications [6]. Patents have the goal of protecting the IP produced, but in general they can be a barrier to innovation.
10) Innovation is driven by big firms. The paper "Which model of technology transfer for nanotechnology? A comparison with biotech and microelectronics", Technovation, Elsevier, 2012, 32 (3-4), pp. 205-215 [10], demonstrated that nano-innovation is driven by large firms. This effect is induced by: high equipment costs, high start-up investment, position within the value chain, and the multi-market goal of nanotech. Unfortunately, the intrinsic characteristics of big firms induce the following characteristics in their innovation [11]:
11) The innovation is more polarized towards sustaining than disruptive technologies;
12) Technological progress often outstrips the needs of the market;
13) Customers and financial structures heavily condition the management of the innovation strategy;
14) The globalization of nano-innovation is far from uniform in terms of effort, approaches, regulation and impacts. So we find really different investment scenarios and strategies for nanotechnology in the different countries [12]. Even between European countries there is a huge difference, as can be observed from Table 6.1.4 below [13, 14].
Table 6.1.4 Comparison of nanotechnology indicators in European countries [13, 14]

Country | Companies | Norm. | Patents | Norm. | Publications | Norm.
Germany | 380 | 0.46 | 3730 | 4.55 | 6446 | 7.86
United Kingdom | 285 | 0.46 | 942 | 1.53 | 2688 | 4.36
France | 135 | 0.21 | 998 | 1.55 | 1491 | 2.32
Italy | 90 | 0.15 | 130 | 0.22 | 955 | 1.59
Sweden | 80 | 0.86 | 224 | 2.42 | 816 | 8.82
Switzerland | 80 | 1.04 | 314 | 4.08 | 1031 | 13.39
Netherlands | 75 | 0.45 | 720 | 4.37 | 650 | 3.94
Finland | 45 | 0.84 | 75 | 1.41 | 494 | 9.27
Spain | 40 | 0.09 | 14 | 0.03 | 409 | 0.89
Belgium | 38 | 0.35 | 110 | 1.02 | 319 | 2.97
Denmark | 30 | 0.54 | 70 | 1.27 | 191 | 3.47
Actions on nanotechnology are mainly developed on a national basis, with wide differences in investment and specific strategies. Markets also present different receptivity. Indeed, at the global level we find non-uniform development and innovation possibilities.
The characteristics of nano-innovation are in some cases barriers to the innovation process itself, introducing delays and limiting development. In particular, there is a moderate attitude to risk in radical innovation, with market-driven solutions being preferred. These general considerations apply in different ways to the different application markets, and some really good success cases have been obtained which had a relevant impact on the market.
Finally, we can say that the nano-innovation ecosystem is not yet fully developed, and different actions are required to realize the huge potential of nanotech.
We are in a Red Queen effect:
"Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else—if you run very fast for a long time, as we've been doing. My dear, here we must run as fast as we can, just to stay in place. And if you wish to go anywhere you must run twice as fast as that." [15]
Indeed, a doubled effort has to be invested in all the main aspects of the nano world:
• Research
• Development
• Education and training
• Innovation
• Risk evaluation and regulation.
~ 259 ~
References
[1] The Rainforest; V. Hwang, G. Horowitt; Regenwald, California, 2013.
[2] Theory of Economic Development; J. Schumpeter, 1912.
[3] Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data, 3rd Edition, 2005. http://www.oecd.org/sti/inno/oslo-manual-guidelines-for-collecting-and-interpreting-innovation-data.htm
[4] The Nature of the Innovation Process; G. Dosi, C. Freeman, R. Nelson, G. Silverberg, & L. Soete (Eds.), 1988.
[5] Trends in worldwide nanotechnology patent applications: 1991 to 2008; Y. Dang, Y. Zhang, L. Fan, H. Chen, and M. C. Roco; Journal of Nanoparticle Research, 2010 Mar; 12(3): 687–706.
[6] Nanotechnology, IP & University-Industry Collaboration: Trends and Best Practices; K. Hanson; In PART, 2017.
[7] Nanotechnology Commercialization – Industry and Environmental Impacts; Workshop on Nanotechnology Lifecycle Assessment, October 2-3, 2006; M. Holman, Senior Analyst, Lux Research.
[8] The Maturing Nanotechnology Market: Products and Applications; BCC Market Report, 2017.
[9] Nanotechnology's Impact on Consumer Products; Lux Research, 2007.
[10] Which model of technology transfer for nanotechnology? A comparison with biotech and microelectronics; C. Genet, K. Errabi, C. Gauthier; Technovation, Elsevier, Volume 32, Issues 3–4, 2012, Pages 205-215.
[11] The Innovator's Dilemma; Clayton M. Christensen; Harvard Business Review Press, 1997.
[12] Nanotechnology systems of innovation—An analysis of industry and academia research activities; Kumiko Miyazaki, Nazrul Islam; Technovation 27 (2007) 661–675.
[13] The European Nanotechnology Landscape Report; ObservatoryNANO.
[14] Nano.DE-Report 2013: Nanotechnology in Germany today; Federal Ministry of Education and Research (BMBF), Department New Materials, Nanotechnology; http://www.bmbf.de, 2013.
[15] Through the Looking-Glass; Lewis Carroll, 1871.

~ 260 ~
QUESTIONS

~ 261 ~
~ 262 ~
QUESTION 1) SEM image magnification increases by increasing the PE
scanning range on the specimen.
a) Right
b) Wrong.

QUESTION 2) PEs, BSEs and SEs are different particles.


a) Right, they are different electrons.
b) Wrong, they are all electrons; they differ only in energy levels.

QUESTION 3) Enhancement of SEM image surface details of thin films


requires the highest PE energy provided by the instrument.
a) Right
b) Wrong

QUESTION 4) In order to obtain SEM imaging of the ROI, the specimen must be conductive and grounded. If the sample is electrically insulating, a conductive coating can be deposited on the specimen surface in order to dissipate to ground the excess charge deposited by the impinging PEs.
a) Always
b) Sometimes

QUESTION 5) What's the difference between AFM and STM?


a) They are both SPM techniques but STM can measure conductive
samples only
b) Only AFM is an SPM technique as the probe is made of silicon
c) They are both SPM techniques but AFM can measure conductive
samples only
QUESTION 6) Can AFM scan in a liquid environment?
a) No, the refractive index in water is different from that in air
b) Yes, if the tip is not hydrophobic
c) Yes, if the AFM system (head and controller) is designed for these
purposes.

~ 263 ~
QUESTION 7) What probes should I use?
a) In contact mode soft cantilevers should be used to minimize damage
to the sample and the tip. However, very soft levers are noisy.
b) In tapping mode stiff levers are used so that the tip does not stick to
the sample surface
c) Both the above answers are correct

QUESTION 8) I would like to estimate the tip radius of the probe. What do you recommend I use for this purpose?

a) A calibrated grid sample to use the stiff walls as reference

b) An array of triangular steps having precise linear and angular sizes

c) An array of sharp tips


QUESTION 9) Why do ions have a smaller wavelength than electrons at the same energy?

QUESTION 10) In Helium Ion Microscopy, which signal is monitored?

QUESTION 11) How is it possible to obtain charge compensation in HIM measurements?

QUESTION 12) Is it also possible to perform nanofabrication by HIM?

QUESTION 13) What are the interactions of X-rays with matter, and which one is dominant at low photon energy?

~ 264 ~
QUESTION 14) What is the physical basis for the qualitative elemental analysis
in XRF?

QUESTION 15) What are the most common X-Ray detection methods used in
XRF?

QUESTION 16) Why does XRF show low sensitivity for light elements?

QUESTION 17) Why does total external reflection occur in the X-Ray Range?

QUESTION 18) What is TXRF and what is its main application?

QUESTION 19) How can total external reflection be exploited to gain surface
sensitivity in XRF?

QUESTION 20) What is the difference between XRR and GIXRF?

QUESTION 21) Why are different primary ion beams used?

QUESTION 22) Which kinds of analyzers are used in SIMS mass spectrometry?

QUESTION 23) On which parameters does the depth resolution in SIMS depth profiling depend?

QUESTION 24) What is the range of lateral resolution in Static SIMS?

QUESTION 25) Considering an electromagnetic mode propagating along the x-direction on a planar interface perpendicular to the z-axis, please define the TE and TM propagation modes. Which mode is the only one allowed in SPP propagation?

QUESTION 26) The most important characteristic of SPPs is the field confinement at the metal/insulator interface. What is the main parameter that quantifies the field confinement? What is the order of magnitude of the vertical confinement in real metal/insulator structures?

QUESTION 27) Please define the surface plasmon frequency and infer its analytical expression for ideal metals (without damping), with real dielectric function equal to ε(ω) = 1 − ω_p²/ω², where ω_p is the plasma frequency.
~ 265 ~
QUESTION 28) Pure SPP modes are not coupled to radiative modes. However,
different geometries of metals, such as nanostructures, support surface plasmons
while simultaneously coupling to the radiative field. Please list some of these
structures and explain the working principle.

QUESTION 29) Please explain the working principle of a plasmonic


refractometric sensors based on a prism. Which metal is used to coat the prism?
What is the typical metal thickness?

QUESTION 30) What would you consider doing if you wanted to design a nanomaterial suited to your own profession? Explain why. What changes would you make to your design if you also wanted to use it in the field of biotechnology? Please explain.

QUESTION 31) Why do we need nanotextiles?

QUESTION 32) What is a nanotextile and how many production methods are there?

QUESTION 33) How are nanofibers and yarns produced?

QUESTION 34) What types of properties can textile surfaces gain from
nanostructures?

QUESTION 35) What are the differences between passive and ultra-smart textiles?

QUESTION 36) What other properties do you think super-smart textiles can have? Please write an essay on this topic.

QUESTION 37) Which of the following properties is not effective in the use of
nanomaterials for sensors?

a) Biocompatibility

b) Large surface area

c) Conductivity

d) Low surface energy

e) Small size

~ 266 ~
QUESTION 38) Which of the following is not a fundamental function of nanomaterials in the fields of electrochemical sensors and biosensors?

a) Immobilization of biomolecules

b) Catalysis of electrochemical reactions

c) Increased electron transfer between electrode surfaces and proteins

d) Labeling of biomolecules

e) Solvent effect

QUESTION 39) Which of the following is not a nanoparticle used in membranes for water treatment applications?

a) Quantum dots

b) Nano Ag

c) Nano zeolites

d) Nano magnetite

e) Aquaporin

QUESTION 40) Which of the following is not a method used in the field of sustainable applications?

a) Adsorption

b) Photocatalysis

c) Hydrogen storage

d) Disinfection

e) Membrane process

~ 267 ~
QUESTION 41) Which of the following is not a method used for removing environmental pollutants from water, soil and air?

a) reduction

b) photocatalysis

c) adsorption

d) oxidation

e) all of them

QUESTION 42) What types of packaging are used in everyday life for nano-products?

a) active, passive, smart and intelligent;

b) smart and intelligent;

c) eco, smart, active and passive.

QUESTION 43) What is the meaning of the PDCA cycle needed to implement nanotechnology?

a) plan, do, check and act;

b) product, done, cost and activity;

c) product, deliver, cost, act,

QUESTION 44) Nanotechnology applications for food packaging offer a


number of benefits:

a) Innovative, improved, 'intelligent' packaging concepts;

b) enhance food safety and hygiene in the supply chain;

c) reduce food waste by extending shelf-life of food products;

d) improve poor performance of biopolymers

~ 268 ~
QUESTION 45) Packaging functions are:

protection,

conservation,

ease of use,

communication (through graphics, labeling)

sales facilitation.

QUESTION 46) Choose the correct answer

True False

The nano-sensors embedded into food products as tiny chips that are invisible to the human eye would also act as electronic barcodes.

QUESTION 47) The implications of nanotechnology can be positive or negative:

a) Risk;

b) Benefits;

c) Efficiency;

d) Effort.

QUESTION 48) Types of Nanotechnology Standards Developed are:

a) ISO Standards and norms;

b) Technical reports and technical specifications

c) ISO standards, TR and TS specifications.

~ 269 ~
QUESTION 49) Choose the correct answer

True False

A standard is a document that does not provide requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose.

QUESTION 50) Which of the following standards is for nanotechnology?

a) ISO 9000-9004;

b) ISO 14001;

c) ISO 22000;

d) ISO TC 229.

QUESTION 51) The elaboration of a guide in simple language on nanotechnologies, which will allow those who are not initiated in the field to acquire a practical understanding of the use and application of nanotechnologies, is:

a) ISO/ TC 229

b) ISO/TR 18401

~ 270 ~
