DTE Final Notes
BM8601
DIAGNOSTIC AND THERAPEUTIC EQUIPMENT- I
Subject notes
PREPARED BY
Mrs. M. Saranya
Electrocardiograph:
ECG deals with the study of the electrical activity of the heart muscles. The potentials originating
in the individual fibers of the heart muscle summate to produce the ECG waveform.
ECG Lead Configuration:
Surface electrodes are used with jelly as the electrolyte between the skin and the electrodes. The
potentials generated in the heart are conducted to the body surface.
Types:
1. Bipolar (or) standard limb leads
2. Augmented (or) unipolar limb leads
3. Chest lead system
Bipolar lead system:
The potentials are tapped from four locations of the body: the right arm, left arm,
right leg and left leg. The right leg electrode acts as a reference ground electrode.
In the frontal plane of the body the cardiac electric field vector is two-dimensional.
The ECG measured from any of the three limb leads is a time-variant, one-dimensional
component of that vector. The triangle formed by the three limb leads is called the Einthoven triangle.
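The relationship between the three bipolar limb leads (Einthoven's law: Lead II = Lead I + Lead III) can be sketched as follows; the electrode potentials used here are hypothetical illustrative values, not measured data.

```python
# Sketch of the bipolar limb leads from single-instant electrode potentials
# (in mV). The input values are hypothetical, for illustration only.
def limb_leads(ra, la, ll):
    """Compute Leads I, II, III from right-arm (ra), left-arm (la),
    and left-leg (ll) electrode potentials."""
    lead_i = la - ra     # Lead I: LA - RA
    lead_ii = ll - ra    # Lead II: LL - RA
    lead_iii = ll - la   # Lead III: LL - LA
    return lead_i, lead_ii, lead_iii

i, ii, iii = limb_leads(ra=-0.2, la=0.3, ll=0.8)
# Einthoven's law: Lead II equals the sum of Leads I and III
assert abs(ii - (i + iii)) < 1e-9
```

Because the three leads form a closed loop around the Einthoven triangle, any one of them is redundant; recorders exploit this to check electrode integrity.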
In the chest lead system, the exploring electrode is one of six chest electrodes
placed at six different points on the chest close to the heart.
V5 --- same level as V4 on the anterior axillary line
V6 --- same level as V4 on the mid-axillary line
Colour codes:
Right arm: white
Pre amplifier:
The preamplifier is a three-stage or four-stage differential amplifier having a sufficiently
large negative current feedback from the end stage to the first stage, which gives a stabilizing effect.
The amplified output is picked up single-ended and is given to the power amplifier.
Power amplifier:
The power amplifier is generally of the push-pull differential type.
The base of one input transistor of this amplifier is driven by the unsymmetrical
(single-ended) signal from the preamplifier.
The base of the other transistor is driven by the feedback signal resulting from the pen
position, connected via a frequency-selective network.
Pen motor:
The output of the power amplifier is single ended and is fed to the pen motor, which
deflects the writing arm on the paper.
Frequency selective network
It is an RC network which provides the necessary damping of the pen motor and is preset by
the manufacturer.
Auxiliary circuits:
The auxiliary circuits provide a 1 mV calibration signal and automatic blocking of the
amplifier during a change in position of the lead switch. They may include a speed control
circuit.
A standby mode of operation is provided, in which the stylus moves in response to input
signals but the paper remains stationary.
The graph paper has horizontal and vertical lines at 1 mm intervals, with thicker lines at
5 mm intervals. The standard paper speed is 25 mm/s.
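At this standard paper speed the grid translates directly into time. A minimal sketch of that conversion (the values follow from the 25 mm/s speed stated above):

```python
# Time represented by one grid division at the standard ECG paper speed.
PAPER_SPEED_MM_PER_S = 25.0

def division_duration_ms(division_mm):
    """Duration (ms) covered by a grid division of the given width (mm)."""
    return division_mm / PAPER_SPEED_MM_PER_S * 1000.0

small = division_duration_ms(1)  # 1 mm small square -> 40 ms
large = division_duration_ms(5)  # 5 mm large square -> 200 ms
```

This is why heart rate can be read directly from the paper: an R-R interval spanning five large squares corresponds to 1 s, i.e. 60 beats per minute.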
Holter Monitor:
A Holter monitor is a device commonly used to keep track of your heart rhythm.
Your doctor can use a Holter monitor to keep track of your heart function if you’re
having heart problems or they think there may be a problem.
Make sure to engage in your normal activities and keep the Holter monitor dry while
your heart is being monitored. Your doctor will remove the monitor when testing is
finished and carefully go over your results to determine the next step.
A Holter monitor is a small, battery-powered medical device that measures your heart’s
activity, such as rate and rhythm.
Twenty-four hour Holter monitoring is a continuous test to record your heart’s rate and
rhythm for 24 hours. You wear the Holter monitor for 12 to 48 hours as you go about
your normal daily routine.
This device has electrodes and electrical leads exactly like a regular EKG, but it has
fewer leads. It can pick up not only your heart’s rate and rhythm but also when you feel
chest pains or exhibit symptoms of an irregular heartbeat, or arrhythmia.
Holter monitor testing is also sometimes called ambulatory electrocardiography.
Uses for Holter Monitoring
An EKG is a medical test that’s used to measure your heart rate and rhythm.
It’s also used to look for other abnormalities that may affect normal heart function.
During an EKG, electrodes are placed on your chest to check your heart’s rhythm.
Abnormal heart rhythms and other types of cardiac symptoms can come and go, so
monitoring over a longer period of time is necessary.
The Holter monitor lets your doctor see how your heart functions on a long-term basis.
The recordings made by the monitor help your doctor determine if your heart is getting
enough oxygen or if the electrical impulses in the heart are delayed.
Working:
The Holter monitor is small. It’s slightly larger than a deck of playing cards. Several
leads, or wires, are attached to the monitor.
The leads connect to electrodes that are placed on the skin of your chest with a glue-like
gel.
The metal electrodes conduct your heart’s activity through the wires and into the Holter
monitor.
It’s important to keep the monitor close to your body during the testing period to make
sure the readings are accurate.
Doctor will show how to reattach electrodes if they become loose or fall off during the
testing period.
You can participate in your normal activities during the Holter 24-hour test. You'll
be directed to record your activities in a notebook.
This helps your doctor determine if changes in heart activity are related to your behaviors
and movements.
Wearing the Holter monitor itself has no risks involved. However, the tape or adhesives
that attach the electrodes to your skin can cause mild skin irritation in some people.
Make sure to tell the technician that attaches your monitor if you’re allergic to any tapes
or adhesives.
A 24-hour Holter monitor test is painless. However, be sure to record any chest pain,
rapid heartbeat, or other cardiac symptoms you have during the testing period.
Accuracy of Testing:
Keep the Holter monitor dry to ensure it functions properly. Take a bath or shower before
your appointment to have the monitor fitted and don’t apply any lotions or creams. Avoid
activities that might lead to the monitor getting wet. Magnetic and electrical fields may interfere
with the function of the Holter monitor. Avoid areas of high voltage while wearing the monitor.
Understanding the Results:
After the recommended testing time frame has passed, you’ll return to your doctor’s
office to have the Holter monitor removed. Your doctor will read your activity journal and analyze the
results of the monitor. Depending on the results of the test, you may need to undergo further
testing before a diagnosis is made.
The Holter monitor may reveal that your medicine isn’t working or your dosage needs to
be altered if you’re already taking medication for an abnormal heart rhythm. Wearing a Holter
monitor is painless and one of the best ways to identify potential heart problems or other issues.
ARRHYTHMIA SIMULATOR
Introduction
The automatic analysis of the electrocardiogram (ECG) has become a valuable tool in
long-term monitoring ECG devices such as Holters, for cardiac arrhythmia and ventricular
fibrillation detection. The built-in algorithm for automatic ECG interpretation must match the
accuracy of specialists. Software developers usually apply the commonly accepted databases
during initial development and testing of PC-based algorithms for automatic diagnosis.
However, the testing and validation of these software methods implemented in real ECG
devices demand precise long-term ECG simulators. The aim of the present work was to
develop a portable one-channel ECG simulator prototype, needed for testing of automatic
defibrillators. It should be able to generate analog signals of hours' duration and to reproduce the
amplitude and frequency accuracy of the standardised training databases. By including normal
sinus rhythms, different arrhythmia types and test waveforms, the proposed device also becomes
suitable both for medical equipment maintenance and for educational purposes.
Hardware description
Maxim) and optocouplers (PC714 - Toshiba) provide an optically isolated connection with the PC
during the configuration mode of the ECG simulator. The connection operates at a bit rate
of 57600 baud.
CARDIAC PACEMAKERS
Introduction
Pacemaker Circuit
Power Supply
Lithium iodide cell used as energy source
Open-circuit voltage of 2.8V
Lithium iodide cell provides a long-term battery life
High source impedance
Output Circuit
Output circuit produces the electrical stimuli to be applied to the heart
Stimulus generation is triggered by the timing circuit
Constant-voltage pulses
Typically rated at 5.0 to 5.5V for 500 to 600μs
Constant-current pulses
Typically rated at 8 to 10mA for 1.0 to 1.2ms
Asynchronous pacing rates of 70 to 90 beats per min; non-fixed ranges from 60 to
150bpm
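A rough sketch of the charge and energy delivered per pulse for the ratings above. The 500-ohm lead/tissue load impedance is an assumed illustrative value, not taken from these notes.

```python
# Per-pulse charge and energy for the pacemaker output ratings quoted above.
# LOAD_OHMS is an assumed illustrative lead/tissue impedance.
LOAD_OHMS = 500.0

def constant_voltage_pulse(v, width_s, r=LOAD_OHMS):
    """Charge (C) and energy (J) of a constant-voltage pulse into load r."""
    i = v / r
    return i * width_s, v * i * width_s

def constant_current_pulse(i, width_s, r=LOAD_OHMS):
    """Charge (C) and energy (J) of a constant-current pulse into load r."""
    return i * width_s, i * i * r * width_s

q_v, e_v = constant_voltage_pulse(5.0, 500e-6)    # ~5 uC, ~25 uJ per pulse
q_i, e_i = constant_current_pulse(10e-3, 1.0e-3)  # 10 uC, 50 uJ per pulse
```

The microjoule-scale energy per pulse explains why a lithium iodide cell, despite its high source impedance, can sustain a pacemaker for many years.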
Leads
Important characteristics of the leads
Good conductor
Mechanically strong and reliable
Must withstand effects of motion due to beating of heart and movement of body
Good electrical insulation
Interwound helical coil of spring-wire alloy molded in a silicone rubber or polyurethane
cylinder
Coil minimizes mechanical stresses
Multiple strands prevent loss of stimulation in event of failure of one wire
Soft coating provides flexibility, electrical insulation and biological compatibility
Electrodes
1. Unipolar Pacemakers
Single electrode in contact with the heart
Negative-going pulses are conducted
A large indifferent electrode is located elsewhere in the body to complete the circuit
2. Bipolar Pacemakers
Two electrodes in contact with the heart
Stimuli are applied across these electrodes
Stimulus parameters (i.e. voltage/current, duration) are consistent for both
TYPES
1. Synchronous Pacemakers
Used for intermittent stimulation as opposed to continuous stimulation as in
asynchronous pacemakers
Used for variable rates of pacing as needed based on changes in physiological demand
2. Demand Pacemakers
Consists of asynchronous components and feedback loop
Timing circuit runs at a fixed rate (60 to 80 bpm)
After each stimulus, timing circuit is reset
Normal cardiac rhythms prevent pacemaker stimulation
3. Atrial-Synchronous Pacemaker
SA node firing triggers the pacemaker
Delays are used to simulate natural delay from SA to AV node (120ms) and to create a
refractory period (500ms)
Output circuit controls ventricular contraction
Combining the demand pacemaker with this design allows the device to let natural
SA node firing to control the cardiac activity
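The demand-pacemaker behaviour described above can be sketched as a timing loop: the timer runs at a fixed escape interval and is reset by every sensed natural beat, so a stimulus is issued only when no natural beat arrives in time. The beat times and escape interval below are hypothetical illustrative values.

```python
# Minimal sketch of demand-pacemaker timing. Sensed natural beats inhibit
# and reset the timer; a stimulus fires only after a timeout.
def demand_pacing(sensed_beats_s, escape_interval_s, duration_s):
    """Return the times (s) at which the pacemaker delivers a stimulus."""
    stimuli = []
    events = sorted(sensed_beats_s)
    last_event = 0.0
    idx = 0
    t = 0.0
    while t <= duration_s:
        next_pace = last_event + escape_interval_s
        next_sense = events[idx] if idx < len(events) else float("inf")
        if next_sense <= next_pace:   # natural beat arrives first: inhibit, reset
            last_event = next_sense
            idx += 1
        else:                         # timeout: pace, then reset the timer
            stimuli.append(next_pace)
            last_event = next_pace
        t = last_event
    return [s for s in stimuli if s <= duration_s]

# Natural beats at 0.8 s spacing stop after 2.4 s; with a 1.0 s escape
# interval (60 bpm) the pacemaker takes over once the natural rhythm fails.
paces = demand_pacing([0.8, 1.6, 2.4], escape_interval_s=1.0, duration_s=6.0)
```

Note how no stimulus is emitted while the natural rhythm is faster than the escape rate, which is exactly the inhibition property that distinguishes demand pacing from asynchronous pacing.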
ELECTROCARDIOGRAPH
Electrical activity of the heart
The electrocardiogram (ECG) is based on the electrical activity of the heart muscle cells.
In the resting stage, the inside of the cardiac cells has a negative charge compared to the outside
of cells. The resulting voltage difference between the internal and external spaces of the cell
membrane is called the transmembrane potential (-80 to -90 mV in cardiac muscle cells). The
discharge of this voltage (depolarization) in the heart muscle cells is a precondition for the
start of the contraction in the heart muscle cell fibers. During the contraction the cell redevelops
the same voltage difference (repolarization) across the cell membrane as before.
Electrocardiogram recorded from the skin surface does not, however, register the
depolarization or repolarization of individual cells. Instead, surface ECG is created when the
depolarization (activation) and the following repolarization spread in the whole heart muscle,
together producing an electrical total component, which is measured at the skin surface. This
cardiac electrical vector has at all times direction and amplitude specific to the activation stage.
The signal registered at the skin surface originates from many simultaneously propagating
activation fronts at different locations, which affects the size of the total component. Other
factors affecting the total vector include the amount of activated muscle cells and the directions
of activation fronts spreading simultaneously at different locations in the heart.
The activation of the ventricles starts simultaneously from the right and left sides of the
septum, continuing, however, mostly from left to right. After that, the activation front propagates
along the septum towards the apex of the heart. The front, lower and rear walls of both
ventricles are activated next, so that the activation is directed from the inner to the outer part of
the ventricle wall, or from endocardium to epicardium. The rear-upper parts of both ventricles
and the septum depolarize last. The stage of repolarization starts from the outer cells and proceeds as
a front towards the inner wall.
Lead systems
Many lead systems have been developed for research purposes, but established systems
for the clinical usage are rather few due to both international agreements and historical reasons.
The advantage of standardized and widely used lead systems is that they are comparable
geographically and temporally.
Unipolar leads
Unipolar leads are based on Wilson's central terminal (WCT), which is used as a reference
instead of one single reference electrode. The WCT represents a 'mean electrode' calculated from
the three limb electrodes. The term "unipolar" originates from Wilson's aim to develop an
indifferent electrode located at the center of the heart. By removing the lead used as the
measuring electrode from Wilson's central terminal, Goldberger invented the augmented
unipolar lead system in 1942. These augmented leads are far from Wilson's original idea, but
despite that they have become part of the most commonly used clinical standard.
Bipolar chest leads
In the bipolar chest leads, each potential recorded by a chest electrode is compared with a
particular single reference electrode. Fig. 5 illustrates various reference electrode positions
intended for use in different kinds of measurements. Blank circles indicate the anterior
(abdomen) side and the black spots indicate the posterior (back) side. With suitable choice of the
reference position, the amplitude of measuring results and the sensitivity of the connections can
be affected. Also the influence of muscle activity during the different types of exercise tests
affects the choice of the reference position.
MEASUREMENTS
1. Measuring blood pressure
Measure the systolic and diastolic blood pressure of a subject (patient) in a sitting
position using a sphygmomanometer (blood pressure apparatus) to be found in the laboratory.
2. Entering patient data
Initiate subject data entry to the ECG analyzer (Mac-12) by pressing function key F1
(PatInfo). Type the name and date of birth (patient's ID) of the subject. Next enter your name in
answer to the question "Referred By". To the question "Location Number" type 0 and to question
"Room number" type H213. The answers to the questions regarding age, height, weight, sex, and
race (probable alternative being "Cauc" or "Unknown") must be obtained from the subject. The
remaining questions refer to medication and the measured blood pressure values.
3. Placing of electrodes
Locate the positions of electrodes, remove any dead skin tissue (do this by lightly rubbing the
skin with an abrasive pad), and place electrodes individually. Use the disposable electrodes,
provided by the laboratory assistant, on the chest and forklike electrodes on the limbs. It is
advisable to apply electrode paste to the limb electrodes. There are four limb electrodes, which
are to be placed on the wrists and ankles. Also needed are the 6 chest electrodes for the standard
12-lead connection. Four extra electrodes are normally used for the vector lead
system.
HEART RATE MONITOR
Introduction
Average calculation
o An average rate is calculated by counting the number of pulses in a given time.
Beat-to-beat calculation
o This is done by measuring the time (T) in seconds between two consecutive
pulses.
Combination of beat-to-beat calculation with average
o This is based on 4 or 6 beats average.
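The three rate-calculation schemes above can be sketched for a list of R-R intervals in seconds; the interval values used here are hypothetical.

```python
# Three heart-rate calculation methods over R-R intervals (seconds).
def average_rate(rr_intervals_s):
    """Average method: number of beats over the total elapsed time."""
    return 60.0 * len(rr_intervals_s) / sum(rr_intervals_s)

def beat_to_beat_rate(rr_interval_s):
    """Beat-to-beat method: HR = 60 / T for a single interval T."""
    return 60.0 / rr_interval_s

def short_average_rate(rr_intervals_s, n=4):
    """Combined method: rate from the average of the last n (4 or 6) beats."""
    recent = rr_intervals_s[-n:]
    return 60.0 / (sum(recent) / len(recent))

rr = [0.80, 0.82, 0.78, 0.80, 0.75, 0.85]
avg = average_rate(rr)            # overall average rate, 75 bpm here
inst = beat_to_beat_rate(rr[-1])  # rate from the last interval alone
comb = short_average_rate(rr)     # 4-beat moving-average rate
```

The beat-to-beat value fluctuates with each interval, while the 4- or 6-beat average smooths that fluctuation without the sluggishness of the long-term average, which is why monitors use the combined method.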
Average heart rate meters
The heart rate meters which are part of the patient monitoring systems are usually of the
average reading type.
Calculation of heart rate from the patient's ECG is based upon the reliable detection of the
QRS complex. A method to reduce false alarms is to use a QRS matched filter. Such a filter
has maximum absolute output when similarly shaped waveforms are applied at its input.
The ECG is sampled every 2 ms. Fast transitions and high-amplitude components are
attenuated by a slew-rate limiter, which reduces the amplitude of pacemaker artifacts and
impulses.
The signal is then decimated, producing samples every 8 ms. Any DC offset in the signal is
removed by a 1.25 Hz high-pass filter, whose cutoff is based on a power spectrum estimation
of the QRS complex.
The ultimate aim of detecting the R wave is to automate the interpretation of the ECG and to
detect arrhythmias.
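The chain described above can be sketched in miniature: a slew-rate limiter to suppress single-sample pacemaker spikes, a crude DC-offset removal standing in for the high-pass filter, and a simple threshold R-wave detector. The thresholds and the synthetic signal are illustrative assumptions, not a clinical algorithm.

```python
# Toy R-wave detection chain: slew-rate limiting, DC removal, thresholding.
def slew_rate_limit(samples, max_step):
    """Clamp sample-to-sample change to +/- max_step (attenuates fast spikes)."""
    out = [samples[0]]
    for x in samples[1:]:
        step = max(-max_step, min(max_step, x - out[-1]))
        out.append(out[-1] + step)
    return out

def remove_dc(samples):
    """Crude DC-offset removal by mean subtraction (stands in for the HPF)."""
    mean = sum(samples) / len(samples)
    return [x - mean for x in samples]

def detect_r_peaks(samples, threshold):
    """Indices where the signal crosses the threshold upward."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

# Baseline with two broad R-waves and one single-sample pacemaker spike.
sig = [0.1] * 30
sig[5] = sig[15] = 1.0                       # R-wave peaks
sig[4] = sig[6] = sig[14] = sig[16] = 0.6    # R-wave shoulders
sig[25] = 3.0                                # pacemaker artifact
limited = slew_rate_limit(sig, max_step=0.5)
peaks = detect_r_peaks(remove_dc(limited), threshold=0.4)
```

The broad R-waves survive the slew-rate limiter because their rise is spread over several samples, while the one-sample artifact is clamped below the detection threshold, which is exactly the rejection property the notes describe.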
DEFIBRILLATORS
Introduction
Used to reverse fibrillation of the heart
Fibrillation leads to loss of cardiac output and irreversible brain damage or death if not
reversed within 5 minutes of onset
Electric shock can be used to reestablish normal activity
Four basic types of defibrillators
AC defibrillator
Capacitive-discharge defibrillator
Capacitive-discharge delay-line defibrillator
Rectangular-wave defibrillator
Defibrillation by electric shock is carried out by passing current through electrodes
placed:
Directly on the heart – requires low level of current and surgical exposure of the heart
Transthoracically, by using large-area electrodes on the anterior thorax – requires
higher level of current
Capacitive-Discharge
A short high-amplitude defibrillation pulse is created using this circuit
The clinician discharges the capacitor by pressing a switch when the electrodes are
firmly in place
Once complete, the switch automatically returns to the original position
Power Supply
Using this design, defibrillation uses:
50 to 100 Joules of energy when electrodes are applied directly to the heart
Up to 400 Joules when applied externally
Energy stored in the capacitor follows E = (1/2)CV², where C is the capacitance and V is the charging voltage
Capacitors used range from 10 to 50μF
Voltage using these capacitors and max energy (400J) ranges from 1 to 3 kV
Energy losses result in the delivery of less than the theoretical energy to the heart
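The stored-energy relation E = (1/2)CV² can be checked numerically; the specific values below are illustrative, drawn from the capacitance range quoted above.

```python
# E = C*V^2/2 relates stored energy, capacitance, and charging voltage.
import math

def stored_energy_j(c_farads, v_volts):
    """Energy (J) stored in a capacitor charged to v_volts."""
    return 0.5 * c_farads * v_volts ** 2

def charging_voltage_v(c_farads, energy_j):
    """Charging voltage (V) needed to store energy_j in the capacitor."""
    return math.sqrt(2.0 * energy_j / c_farads)

e = stored_energy_j(50e-6, 2000.0)    # 50 uF charged to 2 kV -> 100 J
v = charging_voltage_v(50e-6, 100.0)  # voltage for 100 J with 50 uF -> 2 kV
```

Because energy grows with the square of voltage, halving the charging voltage quarters the stored energy; this is why output control by varying capacitor voltage gives a wide energy range.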
Lithium silver vanadium pentoxide battery is used
Rectangular-Wave
Capacitor is discharged through the subject by turning on a series silicon-controlled
rectifier
When sufficient energy has been delivered to the subject, a shunt silicon-controlled
rectifier short-circuits the capacitor and terminates the pulse, eliminating a long discharge
tail of the waveform
Output control can be obtained by varying:
Voltage on the capacitor
Duration of discharge
Advantages of this design:
Requires less peak current
Requires no inductor
Makes it possible to use physically smaller electrolytic capacitors
Does not require relays
Output Pulses
Monophasic pulse width is typically programmable from 3.0 to 12.0 msec
Biphasic positive pulse width is typically programmable from 3.0 to 10.0 msec, while the
negative pulse is from 1.0 to 10.0 msec
Studies suggest that biphasic pulses yield increased defibrillation efficacy with respect to
monophasic pulses
Electrodes
Excellent contact with the body is essential
Serious burns can occur if proper contact is not maintained during discharge
Sufficient insulation is required
Prevents discharge into the physician
Three types are used:
Internal – used for direct cardiac stimulation
External – used for transthoracic stimulation
Disposable – used externally
Cardioverters
Special defibrillator constructed to have synchronizing circuitry so that the output occurs
immediately following an R wave
Used in patients with atrial arrhythmias
The design is a combination of a cardiac monitor and a defibrillator
PHONOCARDIOGRAPHY
Heart Sounds
Heart sounds are vibrations or sounds due to the acceleration or deceleration of blood
during heart muscle contractions, whereas murmurs (a type of heart sound) are considered
vibrations or sounds due to blood turbulence. Phonocardiography is the recording of heart
sounds.
The auscultation of the heart provides valuable information to the clinician concerning
the functional integrity of the heart.
The first heart sound is generated at the termination of the atrial contractions, just at the
onset of ventricular contraction. This sound is generally attributed to movement of blood into the
ventricles, the atrioventricular (AV) valves closing, and the sudden cessation of blood flow in the
atria. Splitting of the first heart sound is defined as an asynchronous closure of the tricuspid and
the mitral valves.
The second heart sound is a low frequency vibration associated with the closing of the
semi lunar valves - the aortic and pulmonary valves. This sound is coincident with the
completion of the T wave of the ECG.
The third heart sound corresponds to the sudden cessation of the ventricular rapid
filling. This low-amplitude, low frequency vibration is audible in children and in some adults.
The fourth heart sound occurs when the atria contract and propel blood into the
ventricles. This sound, of very low amplitude and low frequency, is not audible, but may be
recorded by phonocardiography (PCG).
The sources of most murmurs, developed by turbulence in rapidly moving blood, are
known. Murmurs are common in children during the early systolic phase; they are normally heard in
nearly all adults after exercise. Abnormal murmurs may be caused by stenoses and insufficiencies
(leaks) at the aortic, pulmonary, and mitral valves. They are detected by noting the time of their
occurrence in the cardiac cycle and their location at the time of measurement.
Heart sounds travel through the body from the heart and major blood vessels to the body
surface. The physician can hear those sounds with a stethoscope. Basic heart sounds occur
mostly in the frequency range of 20 to 200 Hz. Certain heart murmurs produce sounds in the
1000 Hz region, and some frequency components exist down to 4 or 5 Hz. Some researchers
even reported that heart sounds and murmurs have small amplitudes with frequencies as low as
0.1 Hz and as high as 2000 Hz.
Phonocardiography
Phonocardiography is a mechano-electronic recording technique of heart sounds and
murmurs. It is valuable in that it not only eliminates the subjective interpretation of these sounds,
but also makes possible an evaluation of the heart sounds and murmurs with respect to the
electrical (such as ECG) and mechanical (carotid pulse recorded in the midneck region) events in
the cardiac cycle. It is also valuable in locating the sources of various heart sounds.
A PCG machine usually consists of four main parts: a microphone or PCG transducer,
filtering (mechanical and electrical), a processing unit, and a display. A wireless PCG additionally
has a transmitter, a receiver and an interface between the PCG transducer and the transmitter.
The PCG transducer is a contact or air-coupled acoustical microphone held against the
patient's chest. Various types of microphones are used, but most are the piezoelectric crystal or
dynamic type of construction.
The piezoelectric (crystal) microphone is more sensitive than the dynamic (based on
Faraday's principle) microphone at low frequencies. The crystal microphone can be used for
essentially all PCG measurements. The dynamic microphone, on the other hand, has a higher
output voltage than the crystal, but it cannot be used for pulse-wave recordings because of its
inadequate low-frequency response.
Crystal microphone
The crystal microphone generally costs less. The dynamic microphone uses a moving coil
coupled to the acoustical diaphragm. The dynamic microphone is used when it is desirable to
have a signal frequency response similar to that of the mechanical stethoscope. An air-coupled
crystal microphone with a time constant of 2 s is often used for apex PCG recordings.
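An air-coupled microphone with time constant tau behaves, to first order, as a high-pass filter with a -3 dB cutoff at 1/(2*pi*tau); applying this to the 2 s time constant above is a small check that the system can pass the sub-hertz components mentioned later in this section.

```python
# First-order high-pass cutoff frequency from a time constant.
import math

def highpass_cutoff_hz(tau_s):
    """-3 dB cutoff (Hz) of a first-order system with time constant tau_s."""
    return 1.0 / (2.0 * math.pi * tau_s)

fc = highpass_cutoff_hz(2.0)  # ~0.08 Hz for the 2 s time constant above
```

A cutoff near 0.08 Hz is low enough to record the slow apex-pulse waveforms, consistent with the 0.1 Hz lower bound of heart-sound content cited earlier.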
Mechanical filtering of heart sounds and murmurs is possible by a careful selection of the
size of the diaphragm and microphone bell. The larger the diameter of the diaphragm, the lower
the maximum frequency response of the system. Electronic filtering can be used to selectively
record or listen to desired frequency bands.
Signal processing/conditioning is used to reproduce the heart sounds over the actual
frequency range (25-2000 Hz for heart sounds and murmurs, and 0.1-100 Hz for pulse waves). It
is possible to compensate for an inadequate frequency response in the recording device by
pre-emphasizing these heart-sound frequencies in the amplifier.
Spectral analysis has been used successfully in studying PCG signals. This method allows for an
analysis of the instantaneous sound energy during a given time interval (say, 5-ms) throughout
the heartbeat. The spectral features of PCG signal can be used for studying arterial and other
heart diseases.
There are optimal recording sites for the various heart sounds or PCG signals. Because of
the acoustical properties of the transmission path, heart sound waves are attenuated but not
reflected.
UNIT 2
INTRODUCTION
The first recording of the electric field of the human brain was made by the German
psychiatrist Hans Berger in 1924 in Jena. He gave this recording the name electroencephalogram
(EEG). (Berger, 1929).(From 1929 to 1938 he published 20 scientific papers on the EEG under
the same title "Über das Elektroenkephalogram des Menschen".)
1. Spontaneous activity,
2. Evoked potentials, and
3. Bioelectric events produced by single neurons.
The internationally standardized 10-20 system is usually employed to record the spontaneous
EEG. In this system 21 electrodes are located on the surface of the scalp, as shown in Figure 13.2A and
B. The positions are determined as follows: Reference points are nasion, which is the delve at the
top of the nose, level with the eyes; and inion, which is the bony lump at the base of the skull on
the midline at the back of the head. From these points, the skull perimeters are measured in the
transverse and median planes. Electrode locations are determined by dividing these perimeters
into 10% and 20% intervals. Three other electrodes are placed on each side equidistant from the
neighboring points, as shown in Figure 13.2B (Jasper, 1958; Cooper, Osselton, and Shaw, 1969).
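The division of the skull perimeter into 10% and 20% intervals can be sketched for the midline electrodes, whose positions fall at cumulative fractions of the nasion-inion arc (a standard property of the 10-20 system); the 36 cm arc length used below is an assumed example value.

```python
# Midline 10-20 electrode positions as fractions of the nasion-inion arc:
# 10 % to Fpz, then 20 % steps to Fz, Cz, Pz, Oz (Oz at 90 %).
def midline_positions(nasion_inion_cm):
    """Distance (cm) of each midline electrode from the nasion."""
    fractions = {"Fpz": 0.10, "Fz": 0.30, "Cz": 0.50, "Pz": 0.70, "Oz": 0.90}
    return {name: f * nasion_inion_cm for name, f in fractions.items()}

pos = midline_positions(36.0)  # Cz lands at the halfway point, 18 cm
```

Scaling by the measured perimeter rather than using fixed distances is what makes the 10-20 placement comparable across heads of different sizes.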
In addition to the 21 electrodes of the international 10-20 system, intermediate 10%
electrode positions are also used. The locations and nomenclature of these electrodes are
standardized by the American Electroencephalographic Society (Sharbrough et al., 1991; see
Figure 13.2C). In this recommendation, four electrodes have different names compared to the 10-
20 system; these are T7, T8, P7, and P8. These electrodes are drawn black with white text in the
figure.
Besides the international 10-20 system, many other electrode systems exist for recording
electric potentials on the scalp. The Queen Square system of electrode placement has been
proposed as a standard in recording the pattern of evoked potentials in clinical testing
(Blumhardt et al., 1977).
Bipolar or unipolar electrodes can be used in the EEG measurement. In the first method
the potential difference between a pair of electrodes is measured. In the latter method the
potential of each electrode is compared either to a neutral electrode or to the average of all
electrodes.
Figure 13.2. The international 10-20 system seen from (A) the left and (B) above the head.
A = ear lobe, C = central, Pg = nasopharyngeal, P = parietal, F = frontal, Fp = frontal polar,
O = occipital. (C) The modified nomenclature standardized by the American Electroencephalographic Society.
When the eyes are closed, the alpha waves begin to dominate the EEG. When the person falls asleep, the
dominant EEG frequency decreases. In a certain phase of sleep, rapid eye movement called
(REM) sleep, the person dreams and has active movements of the eyes, which can be seen as a
characteristic EEG signal. In deep sleep, the EEG has large and slow deflections called delta
waves. No cerebral activity can be detected from a patient with complete cerebral death.
Examples of the above-mentioned waveforms are given in Figure 13.6.
Evoked potential:
Electrical potentials that occur in the cortex after stimulation of a sense organ, and that can
be recorded by surface electrodes, are known as evoked potentials,
e.g. SEP, ABR and VEP.
• Evoked potentials test and record how quickly and completely the nerve signals reach the
brain. Evoked potentials are used because they can indicate problems along nerve
pathways that are too subtle to show up during a neurologic examination or to be noticed
by the patient. The disruption may not even be visible on MRI exam.
• These tests can be helpful in making the diagnosis of multiple sclerosis (MS) and other
neurological disorders.
• Evoked potential amplitudes tend to be low, ranging from less than a microvolt to several
microvolts, compared to tens of microvolts for EEG, millivolts for EMG, and often close
to a volt for ECG.
• Signals can be recorded from cerebral cortex, brain stem, spinal cord and peripheral
nerves. Usually the term "evoked potential" is reserved for responses involving either
recording from, or stimulation of, central nervous system structures.
Types:
• Visual Evoked Potentials (VEP)
• Auditory Evoked Potentials (AEP)
• Somato Sensory Evoked potential (SSEP)
• Median Nerve Sensory Evoked Potentials (MNSEP)
• Posterior Tibial Nerve Sensory Evoked Potentials (PTNSEP)
Visual Evoked Potential (VEP):
• The VEP tests the function of the visual pathway from the retina to the occipital cortex.
• It assesses the integrity of the visual pathways from the optic nerve, optic chiasm, and
optic radiations to the occipital cortex.
• Visual evoked potential (VEP) tests evaluate how the visual system responds to light.
• VEP tests are used to evaluate optic neuritis, optic tumors, retinal disorders, and
demyelinating diseases such as multiple sclerosis.
• Stimulus: checkerboard pattern on a TV monitor
• The black and white squares are made to reverse
• A pattern-reversal rate – from 1 to 10 per second
• Electrodes - 3 standard EEG electrodes are placed.
• Analysis time (one epoch) is 250 ms
• Number of trials: 250; at least 2 tests to ensure that the waveforms are replicable.
VEP Procedure:
• The subject is placed in front of a computer screen, which shows a pattern of white and black
squares like a chessboard, with a red dot in the middle that he/she is supposed to focus the
eyes on with minimal movement.
• The procedure is done one eye at a time, with the eye that is not being tested blocked off with
an eye patch.
• During the actual procedure, these squares alternate (white ones become black, black
ones become white) at a rate of several times a second, which produces responses in the
visual cortex that are picked up by the scalp electrodes.
• Since the computer controls the exact timing of the changes of the square colors, and
receives the exact timing of the electric response in the corresponding electrodes, it is
able to determine precisely the amount of time it takes for the visual stimulus to reach the
visual cortex.
• After the stimuli are over, you will get an NPN complex.
• Identify the waves & apply the wave markers. The values will appear in the table.
• Repeat the procedure & get another record.
• Display both the recordings and superimpose them to show the reproducibility of the test
results. Repeat the procedure for other eye.
Waveforms (The NPN complex):
• The initial negative peak (N1 or N75)
• A large positive peak (P1 or P100)
• Negative peak (N2 or N145)
Interpretation:
• Negative components of the NPN complex may be absent even in normal subjects. The only
persistent wave is P100.
• Maximum value for P100: 110 milliseconds (ms) in patients younger than 60 years (it
rises to 120 ms thereafter in females and 125 ms in males).
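The P100 upper limits quoted above can be encoded as a small check; the measured latency used in the example is a hypothetical value.

```python
# P100 latency limits from the notes: 110 ms below age 60; beyond that,
# 120 ms for females and 125 ms for males.
def p100_upper_limit_ms(age_years, sex):
    """Upper limit of normal P100 latency for the given age and sex."""
    if age_years < 60:
        return 110.0
    return 120.0 if sex == "female" else 125.0

def p100_is_delayed(latency_ms, age_years, sex):
    """True when the measured P100 latency exceeds the normal limit."""
    return latency_ms > p100_upper_limit_ms(age_years, sex)

flag = p100_is_delayed(118.0, age_years=45, sex="male")  # delayed for age < 60
```

Note that the same 118 ms latency would be within normal limits in a 65-year-old, which is why age and sex must accompany the measurement.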
Wave Origin
I Cochlear nerve
V Inferior colliculus
Requirements:
• Clinical averager
• Ear phone
• Silver cup electrodes
• Electrode jelly
• Patient leads
Procedure:
• Subject lying supine with a pillow under his head.
• Room should be quiet.
• Clean the scalp & apply electrodes.
• Check the impedance.
• Apply the ear phone (red for the right ear & blue for the left ear)
• Select the ear in the stimulator & apply masking to the opposite ear.
• Stimulation rate : 11/sec.
• Repetition : 2000
• Find out the threshold of hearing.
• ABR should be done at around 80dB.
• Start the averaging process & continue until the required number of repetitions is accomplished.
• Calculate the peak – interpeak latencies for the ABR waves.
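A brief sketch of why about 2000 repetitions are averaged: stimulus-locked averaging of N sweeps reduces uncorrelated background EEG by roughly a factor of sqrt(N). All numbers below are made up for illustration.

```python
# Illustrative sketch: averaging N stimulus-locked sweeps shrinks
# uncorrelated background noise by roughly sqrt(N), which is why ~2000
# repetitions are collected. All values here are made up.
import math
import random

random.seed(0)
N = 2000                     # repetitions, as in the protocol above
sweep_len = 100              # samples per sweep (arbitrary)

# Background noise only (unit standard deviation), averaged across sweeps
avg = [0.0] * sweep_len
for _ in range(N):
    for i in range(sweep_len):
        avg[i] += random.gauss(0.0, 1.0)
avg = [v / N for v in avg]

residual = math.sqrt(sum(v * v for v in avg) / sweep_len)
print(round(residual, 4), "vs predicted", round(1.0 / math.sqrt(N), 4))
```

The microvolt-level ABR waves sit in background activity that is orders of magnitude larger, so this sqrt(N) gain is what makes the waves visible at all.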
Normal values:
• The peak latency of wave n is less than (n + 1) ms; i.e., add 1 to the wave number to get
the upper latency limit in milliseconds.
Wave Latency
I < 2 ms
II < 3 ms
III < 4 ms
IV < 5 ms
V < 6 ms
VI < 7 ms
Interpretation:
• Wave I : small amplitude, delayed or absent may indicate cochlear lesion
• Wave V : small amplitude, delayed or absent may indicate upper brainstem lesion
• I – III inter-peak latency: prolongation may indicate lower brainstem lesion.
• III – V inter-peak latency: prolongation may indicate upper brainstem lesion.
• I – V inter-peak latency: prolongation may indicate whole brainstem lesion. Shortening
of the interval with normal latency of wave V indicates cochlear involvement.
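The interpretation rules above can be written as a simple classifier. The interpeak upper limits used here (2.5, 2.4, and 4.5 ms) are typical textbook values assumed for illustration; they are not taken from these notes.

```python
# Illustrative classifier for the interpretation rules above. The interpeak
# upper limits (2.5, 2.4, 4.5 ms) are assumed typical values, not from
# these notes.

def abr_interpretation(lat_I, lat_III, lat_V,
                       max_I_III=2.5, max_III_V=2.4, max_I_V=4.5):
    """Return textual findings from ABR absolute latencies (ms)."""
    findings = []
    if lat_III - lat_I > max_I_III:
        findings.append("lower brainstem lesion (I-III prolonged)")
    if lat_V - lat_III > max_III_V:
        findings.append("upper brainstem lesion (III-V prolonged)")
    if lat_V - lat_I > max_I_V:
        findings.append("whole brainstem involvement (I-V prolonged)")
    return findings or ["interpeak latencies within limits"]

print(abr_interpretation(1.6, 3.8, 5.7))   # within limits
print(abr_interpretation(1.6, 4.6, 6.9))   # I-III and I-V prolonged
```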
Applications:
• Identifying the hearing loss
• Classification of type of deafness (conductive or sensorineural)
Somato Sensory Evoked Potential (SSEP):
UPPER SEP (arms)
• Two stimulus electrodes are attached on the inside of the wrist, closer to the thumb; the
electrodes receive timed electric pulses that produce an involuntary twitch of the thumb.
• An additional sensor electrode is applied on the back of the shoulder, close to the
attachment point of the clavicle.
• Similar to the VEP, the computer times the electric pulses (which come at a rate of
several times a second) and records the responses from the appropriate scalp electrode, thus
determining the exact time it takes for the stimulus to reach the intermediate point on the
shoulder, and then the brain.
• The same is repeated for the other arm.
LOWER (SEP) (legs)
• Two stimulus electrodes are attached to the inside of your ankle, in such a way as to
produce an involuntary twitch of the big toe.
• Additional sensor electrodes are placed at the back of the knee (closer to the outside), on
the spine of the lower back, and on the spine of the upper back.
• Electric pulses are then sent at a rate of several times a second, and the responses are
recorded in the same manner as above.
Event Related Potentials
• Event-related potentials are patterned voltage changes embedded in the ongoing EEG that
reflect a process in response to a particular event (e.g., visual or auditory stimuli).
• ERPs are measured from the same “raw data” (i.e., scalp electrical activity over time and
space) as EEG
VISUAL EVOKED POTENTIALS
INTRODUCTION
CHOICE OF STIMULUS
Patterned visual stimuli elicit responses that have far less intra- and interindividual
variability than responses to unpatterned stimuli. PVEP testing will detect minor visual pathway
abnormality with much greater sensitivity and accuracy than FVEP testing.
Checkerboard pattern reversal is the most widely used pattern stimulus because of its
relative simplicity and reliability. Grid and sinusoidal grating stimuli will also produce clinically
reliable test results. Unpatterned stimuli are generally reserved for patients who are unable to
fixate or to attend to the stimulus. Unpatterned stimuli are also useful for the study of steady-
state VEPs.
In pattern stimulation, the selection of check size, field size, and field location allow
selective testing of specific segments of the visual pathway. The stimulus employed should be
appropriate to the patient’s individual clinical circumstances. The use of more than one stimulus
may be advantageous.
SUBJECT CONDITIONS
The ability of the subject to focus on and to resolve the pattern is critical to PVEP testing.
Defocusing of the pattern will affect response latency, amplitude, and waveform. Cycloplegics
generally should not be used for clinical testing. Subjects with refractive errors should be tested
with appropriate corrective lenses. Visual acuity should be measured in all subjects. Fatigue may
affect the subject’s ability to maintain focus on close objects. To avoid this effect, the subject
should not be placed closer than 70 cm to the stimulus. Any grossly apparent visual field defect
should be measured by confrontation testing and noted in the test results.
Test results should demonstrate the peak positive response to pattern reversal occurring at
the occipital region with a latency of approximately 100 ms, with its topographic distribution
extending into temporal and midline regions.
Pattern stimuli may be produced by a variety of methods. The pattern may be back-
projected by way of a rotating mirror to a translucent screen. Abrupt movement of the mirror
produces a shift of pattern on the screen. The most commonly used technique is to produce
computer-generated patterns on a video monitor. Oscilloscope and light-emitting diode (LED)
display stimuli are also available. The speed of pattern change will differ among the stimulus
types. Response latencies will depend on the stimulus type and speed of pattern change.
TYPE OF PATTERN
Checkerboard patterns have been the most extensively studied and used in clinical
testing. Bar and sinusoidal grating stimuli also produce clinically useful responses. Sinusoidal
grating stimuli have been reported to be less affected by refractive errors than checkerboard
stimuli.
ELECTRODE PLACEMENT
Both the Queen Square System of placement (occipital leads labeled LO, MO, and RO)
and the International 10-20 System placement (leads O1, Oz, and O2) have been used for routine
testing. The Queen Square System is demonstrably superior because the lateral occipital leads
are placed farther from the midline than in the 10-20 System. This allows improved recording
of the scalp distribution of the PVEP in adults to partial field stimulation and to full-field
stimulation in subjects with partial visual pathway lesions (Blumhardt and Halliday, 1979;
Blumhardt et al., 1982). The International 10-20 System Fz placement is on average 11 cm
above the nasion, whereas the Queen Square System MF electrode location is 12 cm above
nasion (Chatrian et al., 1980). This minimal location difference should produce no detectable
response difference.
In the Queen Square System, the electrodes are labeled and positioned as follows:
MO: Midoccipital, in midline 5 cm above inion
LO and RO: Lateral occipital, 5 cm to left and right of MO
MF: Midfrontal, in midline, 12 cm above nasion
Al/A2: At ear or mastoid, left and right
Ground: At vertex
PVEP TO HALF-FIELD STIMULATION
RECORDING MONTAGE
At least four channels should be recorded. With these few channels, the montage
should be changed for each field stimulated (Fig. 3). These montages allow optimal
recording of the P100 at the ipsilateral lateral occipital and midoccipital leads and of the
P75, N105, and P135 at the contralateral lateral occipital and temporal leads. The
midfrontal reference is preferred to Al, A2, or Al + A2 because the ear leads may become
asymmetrically active with half-field responses (Jones and Blume, 1985). In subjects
with a prominent midfrontal N100, a midline reference located more anteriorly or a
noncephalic reference might be necessary to avoid confusion in peak identification.
For left half-field stimulation:
Channel 1: Left occipital to midfrontal = LO-MF
Channel 2: Midoccipital to midfrontal = MO-MF
Channel 3: Right occipital to midfrontal = RO-MF
Channel 4: Right posterior temporal to midfrontal = RT-MF
For right half-field stimulation:
Channel 1: Left posterior temporal to midfrontal = LT-MF
Channel 2: Left occipital to midfrontal = LO-MF
Channel 3: Midoccipital to midfrontal = MO-MF
Channel 4: Right occipital to midfrontal = RO-MF
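Each montage channel above is simply the difference between a referential recording at the active electrode and the recording at MF; a small helper makes this concrete. The electrode signal values below are toy data.

```python
# Each channel above is the difference of two referential recordings;
# a small helper makes this concrete. Electrode signals are toy values.

def derive(montage, recordings):
    """Build bipolar channels such as "LO-MF" as active minus reference."""
    out = {}
    for name in montage:
        active, ref = name.split("-")
        out[name] = [a - r for a, r in zip(recordings[active], recordings[ref])]
    return out

recordings = {                      # two samples per electrode (toy data)
    "LO": [5.0, 6.0], "MO": [7.0, 8.0], "RO": [4.0, 5.0],
    "RT": [3.0, 3.5], "MF": [1.0, 1.0],
}
left_field = derive(["LO-MF", "MO-MF", "RO-MF", "RT-MF"], recordings)
print(left_field["MO-MF"])   # [6.0, 7.0]
```

Switching from the left-field to the right-field montage is then just a change of the channel-name list.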
FLASH VISUAL EVOKED POTENTIAL TESTING
FLASH VEP TO LED GOGGLE STIMULATION.
RECORDING
Recording parameters are identical to those described for PVEP testing with minor
changes in recording montage. In simple screening testing to determine the presence or absence
of a response, a single recording channel is adequate to demonstrate a midoccipital FVEP.
However, if a response is not noted, then multiple-channel recording with a long time base
should be performed. A simple four-channel recording montage would be:
Channel 1: Left occipital to reference: LO—Reference
Channel 2: Mid occipital to reference: MO—Reference
Channel 3: Right occipital to reference: RO—Reference
Channel 4: Vertex to reference: V—Reference
The reference can be single or linked ears/mastoids. A frontal
reference closer to the eyes is more likely to be contaminated with electroretinographic (ERG)
activity.
INTRODUCTION
Auditory Evoked Potentials provides an overview of the uses of Evoked Potential (EP)
testing and is intended for anyone interested in learning more about EP. Each chapter in
the book will be issued as a separate document and will be available in printed and online
formats.
♦ CM (cochlear microphonic)
A stimulus-dependent cochlear response that reverses direction with stimulus polarity.
Hence, it is cancelled when averaging is performed with alternating-polarity stimuli.
♦ SP (summating potential)
A direct-current response from the Organ of Corti hair cells. SP is often seen as a leading
hump on the AP (wave I), although sometimes it can appear as a separate hump.
♦ AP (action potential)
Alternating current response generated by the cochlear end of the 8th nerve (wave I). The AP
represents the summed response of thousands of firing auditory nerve fibers.
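A toy demonstration of why the CM cancels under alternating-polarity averaging while the AP survives: the CM flips sign with the stimulus, the neural AP does not. All values below are made up.

```python
# Toy demonstration: the CM reverses with stimulus polarity while the AP
# (wave I) does not, so averaging condensation and rarefaction sweeps
# cancels the CM and keeps the AP. All values are made up.

cm = [0.5, -0.5, 0.5, -0.5]   # cochlear microphonic, follows the stimulus
ap = [0.0, 0.1, 0.8, 0.2]     # action potential, fixed polarity

condensation = [c + a for c, a in zip(cm, ap)]
rarefaction = [-c + a for c, a in zip(cm, ap)]   # CM inverted, AP unchanged

averaged = [(x + y) / 2 for x, y in zip(condensation, rarefaction)]
print(averaged)   # CM cancelled; what remains is the AP
```

Conversely, averaging sweeps of a single polarity keeps the CM, which is how the CM is isolated when it is the component of interest.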
Clinical Applications
♦ Balance difficulties
Patient Preparation
The patient preparation information in this section applies to all patients, regardless of age or
circumstances. The operator may need to make specific accommodations depending on the
patient status and the test environment.
♦ Prepare a patient for testing
Choose an electrode
♦ Ear-canal
also referred to as a tip-trode (gold foil), has a foil-covered foam insert
piece that is placed in the external ear canal but does not touch the
eardrum.
♦ Extra tympanic
♦ Transtympanic
Tip-trode
The tip-trode is an ear canal electrode. It is made of foam that is wrapped with a thin gold
film. After the ear is prepared, the tip-trode is gently squeezed and then placed in the ear
canal, where it expands and snugly seals the ear.
A double cable is used to connect the tip-trode to the patient cable with a safety plug
(DIN Touch Proof Female Safety Jacks). The other end of the tip-trode cable connects to
an ER-3A insert phone transducer box to allow the transmission of the auditory stimulus
into the ear canal.
Tymptrode
A tymptrode is an extra tympanic electrode. It consists of a thin wire shielded with a
protective plastic coating. The tymptrode is placed in the ear canal next to the ear drum.
Put conductive gel on the tip of the tymptrode before inserting it into the ear canal.
The tymptrode is advanced into the ear canal until it reaches the eardrum. When placed
properly, the tymptrode rests gently on the eardrum, and the gel assists in making contact
with the eardrum. The tymptrode cable connects to a Y-cable that plugs into the patient
cable through typical safety plug (DIN Touch Proof Female Safety Jacks). An insert
earphone, placed a short distance into the ear canal, provides the sound stimulus.
You will need some or all of these items to prepare the patient for testing:
♦ Jumper
♦ Prep-paste
♦ Conductive gel
♦ Adhesive tape
Test Procedures
♦ Select a protocol
♦ Select parameters
♦ Step-by-step instructions
♦ Tip-trode instructions
♦ Tymptrode instructions
Select a protocol
In the CHARTR EP program, the ECochG procedure has an 80 dB protocol for the left ear. The
instructions in this chapter use the default ECochG as a starting point.
To access the default ECochG protocol on the New Test tab, locate the ECochG procedure and
select the 80 dB ECochG protocol.
Select parameters
The CHARTR EP ECochG default protocol automatically selects the appropriate trial parameter
settings for collecting a typical ECochG. To view the parameter settings, click F5 Trial Settings
to display the Edit Protocol ECochG dialog.
Step-by-step instructions
This section contains two sets of instructions for collecting ECochG data—one using tip-trodes
(gold foil) and one using tymptrodes.
Tip-trode instructions
These instructions will help you connect electrodes to the patient and patient cable
using tip-trodes and collect ECochG data from the left and right ears (starting with the left ear).
4. Connect the electrodes to the patient cable.
5. Click F12 Average to begin collecting. Each tracing will be labeled LIxx
Tymptrode instructions
Definition
Presynaptic and postsynaptic responses recorded over the limbs, spine, and scalp
following stimulation of peripheral nerves, nerve trunks, or cutaneous nerves.
SSEPs evaluate the integrity of the somatosensory pathways through recordings obtained from all
levels of the nervous system: peripheral nerve, spinal cord, and brain.
Clinical Applications
Prognosis in anoxic coma
Detecting conduction abnormalities along central pathways > peripheral
Objective evidence of CNS dysfunction in the setting of vague sensory symptoms
Evaluating spinal cord integrity after injury
SSEP Generators
Anatomy: the exact generators of the SSEP components are still uncertain (some peak
potentials have overlapping generator sources).
1. Mixed nerve stimulation: sensory and motor.
2. Touch, pressure, and position afferents (large-diameter fibers from muscle
spindles, Golgi tendon organs, joint receptors, and skin) > dorsal columns-
medial lemniscus > VPL/VPM > cortex.
3. Thermal and pain afferents (small-diameter fibers from skin and connective
tissue): DRG > (ipsilateral and contralateral) > spinothalamic tract > thalamus
> cortex.
Somatosensory Anatomy
Neuropathy
Body habitus
N21: 8-14 ms for height 140-190 cm
P37: 30-46 ms for height 140-190 cm
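The quoted normal ranges can be expressed as a simple lookup; the names and interface below are illustrative only.

```python
# The quoted ranges as a simple lookup; names are illustrative only.

NORMAL_RANGES_MS = {            # for subject height 140-190 cm
    "N21": (8.0, 14.0),
    "P37": (30.0, 46.0),
}

def ssep_in_range(wave, latency_ms):
    """True if the latency falls inside the quoted normal range."""
    lo, hi = NORMAL_RANGES_MS[wave]
    return lo <= latency_ms <= hi

print(ssep_in_range("N21", 12.0))   # True
print(ssep_in_range("P37", 48.5))   # False
```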
Introduction
A seizure discharge may be initiated in an entirely normal cerebral cortex by a variety of acute
insults, such as withdrawal from alcohol, low blood sodium, or certain toxins. Seizures are to be
distinguished from epilepsy, which is a chronic condition in which seizures occur repeatedly due
to an underlying brain abnormality which persists between seizures. A convulsion is a forceful
involuntary contraction of skeletal muscles. A convulsion is a physical manifestation of a
seizure, but the term is inappropriate as a synonym for epilepsy when epilepsy may consist only
of a temporary alteration of consciousness or sensation.
Partial seizures
Partial seizures begin in a discrete cortical area. They are categorized as simple when
consciousness is preserved and complex when consciousness is altered. Simple partial seizures
may evolve into complex partial seizures or secondarily generalized tonic–clonic seizures as a
result of the spread of abnormal electrical activity.
Primary generalized seizures (also called generalized seizures) involve widespread areas of the
cerebral cortex from the onset. These terms must not be confused with the term secondary
generalized seizure, which refers to a partial onset seizure that spreads to wide areas of cortex.
The abnormal electrical activity is the same in both the left and right hemispheres (bilaterally
symmetrical). Generalized seizures are further subdivided into convulsive and non-convulsive
types. Convulsive seizures are characterized by sometimes violent and sustained contractions of
muscles. Non-convulsive seizures lack prominent motor activity. Generalized tonic–clonic,
clonic, and some tonic seizures are referred to as convulsive generalized seizures.
The most common non-convulsive generalized seizure is the absence seizure, but the category
also includes atonic, brief tonic, and myoclonic seizures.
A number of terms are widely used in describing the results of epilepsy research and so should
be defined. The epileptogenic focus is a cortical area containing abnormally functioning neurons
which is determined electroencephalographically during the interictal period; thus, it is an
electrophysiological concept. The epileptogenic zone is the area of brain tissue where an
epileptic seizure actually begins, but its location can rarely be pinpointed accurately in
patients, so it is largely a theoretical concept. An epileptogenic lesion is a structural concept
denoting, for example, a tumor or scar which gives rise to chronic epileptic seizures. Neither
clinical human nor animal research has yet provided well-understood relationships between
these three brain areas, but, importantly, they do not always correspond to one another
anatomically.
Numerous mechanisms have been hypothesized to account for the various types of seizures and
epilepsy. Because pharmacological blockade of GABA-mediated inhibition can trigger interictal
discharges that may lead to ictal events, a long-standing yet controversial hypothesis is that
epileptic seizures are the result of decreased synaptic inhibition. Another hypothesis is that
augmentation of the N-methyl-d-aspartate (NMDA) type of excitatory glutamate receptor
contributes to epileptogenesis. Because secondary or symptomatic epilepsy usually appears to
develop following a latent period of months or even years after an injury, many researchers have
proposed that axonal sprouting and formation of new excitatory synaptic circuits (i.e. synaptic
reorganization) contributes to or is responsible for some forms of epilepsy. Other researchers
have shown in vitro that robust seizure activity can occur without active chemical synapses.
Controversy surrounds all of these hypotheses, and it is likely that future research will delineate
their relative contributions to the different types of seizures and epilepsy.
Just what causes cortical neurons to begin seizure discharge at a particular time is uncertain.
Neurons in an epileptic focus are prone to abnormal burst activity, which would make them more
susceptible to activation by increased body temperature (hyperthermia), decreased oxygen to the
brain (hypoxia), decreased blood sugar (hypoglycaemia), decreased calcium in the blood
(hypocalcaemia), decreased sodium ions in the blood (hyponatraemia) and various behavioral
states. For example, some rare epileptic patients are abnormally sensitive to stimulation by light
and will have a seizure when exposed to flashing light. Other patients experience seizures only
during sleep. Still others may have seizures from a lack of normal sleep. The hormonal changes
that occur in women during menstruation may influence seizure susceptibility.
Most epileptic patients benefit from antiepileptic drugs (AEDs). The objective of the therapeutic
management of seizures with medication is to control the seizures with minimal adverse side
effects. The proper diagnosis of seizure type is essential to the selection of an appropriate AED.
If used for the wrong seizure type, some AEDs actually increase seizure activity.
One type of surgical treatment consists of the removal (resection) of an abnormal brain area that
has been identified as likely to be responsible for the seizure onset. It is a procedure applied only
to a select population of people with epilepsy, namely those who have not responded to
aggressive medical therapy with AEDs. These patients are referred to as being medically
intractable. About 80% of all such surgical procedures in adults involve resections of the
temporal lobe, although multilobar resections and even hemispherectomies are sometimes
undertaken for catastrophic epilepsy in children.
MAGNETO ENCEPHALOGRAPHY
INTRODUCTION
The SQUID detectors of magnetic field are housed in a cryogenic container called a Dewar,
which is usually mounted in a movable gantry for horizontal or seated positions. The subject or
patient is positioned on an adjustable bed or chair. The SQUID system and patient may or may
not be positioned in a shielded room. MEG measurement is usually supplemented by EEG and
both MEG and EEG signals are transmitted from the shielded room to the SQUID and
processing electronics and the computers for data analysis and archiving. The MEG system also
contains stimulus delivery and its associated computer, which is synchronized with the data
acquisition. The installation is completed with a video camera(s) and intercom for observation of
and communication with the subject in the shielded room. To accomplish accurate localization,
various 3D digitizing methods may be used, e.g., (20), or the MEG system itself may be used for
the head position determination. In that case three small coils are mounted on the subject’s head
at the nasion and preauricular points. The coils are energized from the computer; their magnetic
signals are detected by the MEG system and used to determine the head position. The measuring
procedure has submillimeter accuracy. It is estimated that the overall head localization
accuracy, considering all errors, is about 2-3 mm.
High-quality detection of brain magnetic fields is the first step in the MEG signal processing
chain. The measured brain fields are small and the only detectors with adequate sensitivity are
SQUID sensors. SQUID sensors exhibit high sensitivity to magnetic fields; however, their
configuration is not best suited for the direct detection of brain fields. SQUIDs are coupled to the
brain fields by means of flux transformers.
2. NOISE CANCELLATION
Noise at the output of MEG sensors is a combination of sensor white noise, brain
noise, and environmental noise. Sensor noise can be minimized to acceptable levels by
careful design of the SQUID and primary flux transformers, and brain noise (if it is
considered noise and not signal) can be controlled or reduced by spatial filtering methods.
Environmental noise is caused by various moving magnetic objects (cars, people, trains,
etc.) or by electrical equipment (power lines, computers, various machinery, etc.).
Environmental noise is reduced by shielding, active noise compensation, synthetic
gradiometers, adaptive methods, and spatial filtering. Enclosing the MEG system within a
shielded enclosure (shielded room) is the most straightforward method for reduction of
environmental noise.
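Reference-based cancellation (the idea behind synthetic gradiometers) can be sketched as follows: distant reference sensors see the environmental interference but almost none of the brain signal, so a weighted copy of the reference is subtracted from each primary sensor. The unit coupling weight and all signal values are assumptions for illustration.

```python
# Sketch of reference-based noise cancellation (the synthetic-gradiometer
# idea): a distant reference sensor sees the interference but almost none
# of the brain signal. The unit coupling weight is an assumption.

def cancel_noise(primary, reference, weight):
    """Subtract the weighted reference signal from a primary sensor."""
    return [p - weight * r for p, r in zip(primary, reference)]

noise = [10.0, -8.0, 12.0]                       # environmental interference
brain = [0.2, 0.5, -0.1]                         # signal of interest
primary = [b + n for b, n in zip(brain, noise)]  # near-scalp sensor output
reference = noise                                # distant sensor: noise only

clean = cancel_noise(primary, reference, 1.0)
print([round(v, 6) for v in clean])   # [0.2, 0.5, -0.1]
```

In a real system the weights are determined by the sensor geometry and calibration; the subtraction acts as the spatial high-pass filtering described later in this section.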
3. EEG
Electric potentials (EEG) and magnetic fields (MEG) are related because they
both detect the same current generators. While radial magnetic fields are generated
mostly by the intracellular current, the EEG measures volume currents. The EEG and
MEG must be measured simultaneously to take advantage of the complementary
information. EEG electrodes and all their connections must be nonmagnetic to avoid
creation of MEG artifacts.
4. DATA INTERPRETATION
Much of the signal analysis used for MEG has been inherited from EEG
applications. However, MEG is more commonly used for quantitative assessment of
brain activity, especially for source localization. Electrophysiological activity is
characterized by a primary ionic current, flowing within cell bodies (the “source
current”), and a volume or return current, flowing in the extracellular space. Biomagnetic
sensors are coupled mainly to the primary current sources; biomagnetic measurements
can be configured so that there is little contribution from volume currents.
Figure (b): a subject with attached EEG electrodes before head insertion into the MEG helmet.
4. Averaged evoked response: The averaged MEG signal—synchronous with an external
stimulus or voluntary motor event.
5. Topographic mapping of signal and power: Distribution of band-limited signal power,
mapped to the sensor surface.
6. Forward and inverse solutions: Computation of magnetic fields from a current source model,
with adjustment of model parameters for best fit to the observed field pattern. Models
include single and multiple equivalent current dipoles and continuous current
distributions (minimum norm).
7. Spatial filters: Weighted linear combinations of measurements that separate signals by
their spatial origin.
8. Three-dimensional mapping of source power: Estimation of source power or a statistical
derivative. Not to be confused with inverse solution.
CONCLUSIONS
The MEG signals originate in the brain neurons. Activation of individual neurons is
not detectable; only the collective activation of large numbers of neurons is detected by the
primary SQUID sensors. In addition to the brain signals, the SQUID sensors are also exposed to
environmental and body noise. To eliminate the environmental noise, reference sensors,
positioned farther from the scalp, are often used. The reference signals are subtracted from
the primary sensor outputs to reduce the detected noise; the process can be understood as
spatial high-pass filtering. Sensor noise is minimized by careful SQUID design and
optimization of the primary sensor flux transformers.
After the noise reduction, the detected signals are filtered to the required bandwidth and the
data are acquired. Data processing and acquisition are performed by the digital SQUID electronics. The
acquired data represent magnetic field on the scalp surface and must be interpreted to yield
information about the brain sources. This process requires additional information about the
anatomical structure, forward models of the brain sources, and methods for source estimation
from the measured fields. The brain magnetic fields are generated by a specific distribution of
the neuronal currents. After the measurement, processing, and interpretation, a smoothed
estimate of the neuronal activity is obtained, as shown in the lower right side of the figure.
Attentional Shifting
Ability to produce the right states associated with focus and attention
Poor concentration: lack of sufficient levels of SMR
Attentional deficits: excessive amounts of slow brain wave activity (Theta waves)
Neurofeedback:
Electroencephalograph (EEG) recording system and training software trains an individual to
concentrate while receiving visual and auditory feedback from a computer.
Procedure
1. To assess the neurological status of the patient and to determine to what extent there is a
neurological basis of the patient’s complaints
2. To identify possible strengths and weaknesses in the organization and
electrophysiological status of the patient’s brain so as to aid in the efficient and optimal
design of Neurotherapy
3. To increase efficiency and to objectively evaluate the efficacy of treatment by comparing
the patient’s EEG before, during and after treatment.
4. After initial interview: the first EEG training session (two hours)
5. Sometimes a full brain map, or quantitative EEG (QEEG) is obtained
6. The first six sessions are completed as quickly as possible and then the frequency of
training reduces to two or three times per week.
7. 30-40 sessions (depending on the severity of the disorder and other comorbid symptoms
present)
8. Approximately 30-45 minutes for each session (approximately 4-6 months)
9. Electrodes are placed on the scalp and on the earlobes
10. Series of tasks (reading, listening to stories, etc.) are presented
11. EEG waves are recorded as a spectrum of frequencies
12. Rewarded by changes in the game when certain level of beta wave activity is produced
13. Changes on the screen occur milliseconds after they occur in the brain; computer tones
are then heard to signal the change the moment the goal is achieved.
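The reward rule in steps 12-13 reduces to a threshold test on band power per epoch; here is a minimal sketch with made-up power values and an illustrative function name.

```python
# Threshold rule sketch: feedback (game change / tone) fires in every epoch
# whose beta-band power reaches the trainer-set threshold. Power values
# and the threshold are made-up numbers.

def reward_events(beta_power_per_epoch, threshold):
    """Indices of epochs in which the feedback fires."""
    return [i for i, p in enumerate(beta_power_per_epoch) if p >= threshold]

beta = [3.1, 4.8, 5.5, 2.9, 6.2, 5.1]       # arbitrary per-epoch powers
print(reward_events(beta, threshold=5.0))   # [2, 4, 5]
```

In practice the threshold is adapted per individual so that feedback occurs often enough to shape behaviour but remains contingent on genuine beta activity.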
Outcomes
Possibility of improvement in capability, rather than simply adjustment and coping
strategies
Some improvement is generally seen within ten sessions and is permanent in most cases.
Children show no resistance to biofeedback.
Between 40 and 60 sessions, the individual becomes able to produce more SMR at will
Improvements in behaviour (control of temper tantrums, violence, cruelty)
Recovery of "affect", or emotional responsiveness (depression)
No known adverse side effects of the training, provided that it is conducted under
professional guidance
Increased openness to change and responsiveness to psychotherapy
Limitations
Cost of treatment (typically $3000 and up), although many medical and psychological
insurance plans cover biofeedback for various conditions
Performed only by a qualified practitioner in a clinical setting
There is only a small number of EEG normative reference databases adequate to meet the
minimal standards necessary for responsible and ethical uses of a NDB in the field of
EEG Biofeedback. Improvements are expected in the future.
Not as quick acting as medications
UNIT 3
ELECTROMYOGRAPHIC BIOFEEDBACK
Purpose
To measure, process, and feedback biophysical information
Biofeedback does not monitor the actual response itself
It monitors conditions associated with the response
Types of Biofeedback Units
Electromyographic
Measures electrical activity in skeletal muscle
Peripheral Temperature
Measures temperature changes in distal extremities
Increased temperature indicates a relaxed state
Decreased temperature indicates stress, fear, or anxiety
Photoplethysmography
Measures the amount of light reflected by subcutaneous tissue based on the
amount of blood flow
Galvanic skin response
Measures electrical resistance in the skin
Moist skin conducts a current better than dry skin
EMG
Detects the amount of electrical activity associated with a muscle contraction
converts it to visual and/or auditory feedback
promotes strength of the muscular contraction or facilitates relaxation
Can be used to create a game-like, competitive atmosphere to motivate rehabilitation
Biophysical Processes and Electrical Integration
EMG biofeedback measurements vary between brands
Electrical activity within the muscle increases as more motor units are recruited
These signals are picked up by electrodes, amplified, and converted into visual or
auditory signals
Electrode placement, superficial vs. deep muscles, electromagnetic noise, and tissue
variability cause variability in the signals produced
The Process
1. Identify signal
Get the EMG signal from the body.
2. Amplify signal
Filter out background noise. Similar to a volume control on a radio, enhance the
strength of the signal to a meaningful level.
3. Rectify signal
Make all the values positive.
4. Integrate signal
Group the data into meaningful clusters.
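The amplify, rectify, and integrate steps can be sketched in a few lines; the gain, window length, and toy input below are arbitrary illustrative values.

```python
# The amplify -> rectify -> integrate chain, as a sketch.
# Gain, window length, and the toy input are arbitrary values.

def process_emg(raw, gain=1000.0, window=3):
    amplified = [gain * v for v in raw]        # step 2: amplify
    rectified = [abs(v) for v in amplified]    # step 3: rectify
    integrated = []                            # step 4: moving average
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1):i + 1]
        integrated.append(sum(chunk) / len(chunk))
    return integrated

raw_emg = [0.001, -0.002, 0.004, -0.001]   # toy raw signal (volts)
envelope = process_emg(raw_emg)
print([round(v, 3) for v in envelope])   # [1.0, 1.5, 2.333, 2.333]
```

The resulting envelope is the smoothed quantity that is mapped to the visual or auditory feedback the patient sees and hears.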
Indications
Facilitate muscle contractions
Regain neuromuscular control
Decrease muscle spasm
Promote relaxation
Neuromuscular Effects
The cognitive process attempts to inhibit motor pathways to promote relaxation
The goal is to decrease the number of motor impulses being relayed to the muscle in
spasm
Pain Reduction
Purpose: restore normal function of the body part
Facilitating reduction of muscle spasm reduces the amount of mechanical pressure placed
on nociceptors
Contraindications
General Rule: If the patient is prohibited from moving the joint or performing isometric
contractions, then EMG biofeedback should NOT be used
Unhealed tendon grafts
Avulsed tendons
Third degree tears of muscle fibers
Unstable fracture
Injury to joint structure, ligaments, capsule, or articulating surface
Clinical Application
Biofeedback units vary greatly
Consult the user’s manual for specific instruction
FATIGUE CHARACTERISTIC
INTRODUCTION
Sleep debt accumulates until it is paid off with adequate sleep.
Fatigue largely results from an inadequate quantity or quality of sleep. The quality of sleep is
also important to maintain your normal alertness and performance.
SLEEP DEBT
If you don’t get enough sleep (quality or quantity) over a series of nights, you’ll build up a sleep
debt. Losing an hour or two of sleep a day for several days can leave you as fatigued as missing
an entire night’s sleep. Many people sleep an extra hour or two on their day off – they’re paying
off their accumulated sleep debt.
A sleep debt can only be repaid with adequate recovery sleep – the sleep your body normally
needs to function.
Feelings of fatigue can also be brought on or made worse by conditions in your workplace, such
as:
• high-pressure demands,
• long shifts,
• stress,
Body rhythms
Your body clock – also called your circadian rhythms – programs you to sleep at night and be
awake during the day. It can be difficult to get good quality sleep during the day when your body
wants to be awake.
Work schedule
When you work and how much time you have between shifts affect how much opportunity you
have to sleep. Working through the night, long shifts, many shifts in a row, and short turnarounds
reduce the time you have for sleep and increase the likelihood you’ll become fatigued.
Type of task
Some tasks are more fatiguing than others – complex, demanding tasks and boring, mundane
tasks increase feelings of fatigue.
Work environment
Loud noise, poor lighting, heat or cold, vibration, or humidity increase feelings of fatigue.
Family and social life
Balancing shiftwork with family and social life can be stressful and make it hard to get adequate
sleep. Family demands (e.g., illness) or personal problems (e.g., divorce) increase stress and the
likelihood of becoming fatigued.
That’s one reason your body finds it difficult to adjust to night or evening shifts – you’re
working when your body clock is trying to send you to sleep. At night, your body also prepares
for sleep by lowering your core temperature, which makes you sleepy.
When you work at night, you’re also fighting against other body rhythms, such
as digestion. Your body’s digestive system slows down when you’re normally sleeping, so eating
at night forces your body to digest food it’s not ready for.
This is why shiftworkers are more likely to experience fatigue and gastrointestinal problems.
The figure shows core body temperature across a 24-hour period (from 6 a.m.
to 6 a.m.).
Alertness follows a similar curve – as body temperature rises, you become more alert and
performance improves. As your temperature falls in the evening, you feel sleepier.
The lowest point of the temperature curve occurs between 3 a.m. and 5 a.m., which is a
particularly difficult time to stay awake.
Feeling sleepy after lunch – known as the post-lunch dip – is also part of the body’s normal
rhythms. It has nothing to do with whether you had a big lunch.
SLEEP CYCLE
Stage 1 is the transition between consciousness and sleep. You can generally hear and respond to
someone.
Stage 2 is a light sleep. You are easily awakened but you’re not aware of your surroundings.
Stages 3 and 4 are deep sleep; it is difficult to wake someone from these stages.
Stage 5 is known as REM or rapid eye movement sleep, and it’s the stage of sleep where you
dream. Researchers believe your eyes move at this stage of sleep because you’re scanning the
images in your dreams. It’s thought to be important for learning and consolidation of memory.
A typical sleep will move through the cycle several times, but each cycle will vary in length.
Whenever you’re sleep deprived, your body will try first to catch up on deep sleep (Stages 3 and
4) and REM sleep.
SAFETY HAZARD
Fatigue and falling asleep have been identified as significant contributors to incidents and
accidents.
It has been estimated that between 10 and 40% of all road accidents involve fatigue.
There are particular times of the day when the risks associated with fatigue are higher:
• midnight to 6 a.m. (and especially 3 a.m. to 5 a.m.) – the low point in the
body’s circadian rhythm that governs alertness and performance
• the beginning and end of shift when handover occurs – fatigue levels can
affect communication
• when you work without a break for a number of hours – the longer you’re
on the job, the likelier you are to have accumulated fatigue
In general, we are poor judges of our own fatigue. It’s difficult to tell when your fatigue levels
have reached a point where it’s no longer safe to work.
NERVE STIMULATORS
INTRODUCTION
Electrical nerve stimulation is widely used for nerve localization during peripheral nerve
blockade. An accurate constant current stimulator is necessary for reliable results. Electrical
impulses excite nerves by inducing a flow of ions through the neuronal cell membrane, with
subsequent action potential generation. The nerve membrane depolarization results in either
muscle contraction or paresthesia, depending on the type of stimulated nerve fiber (motor or
sensory), which is consistent with the nerve’s distribution. The characteristics of the electrical
impulse will determine its ability to stimulate a nerve, and the quality of stimulation will be
affected by the polarity and type of electrode, the needle–nerve distance, and by potential
interactions at the tissue–needle interface.
Theoretically, a painless motor response can be produced using a low current with a short pulse
width, as motor nerves are the main effectors. Conversely, the higher the current, the less
preferential the stimulation is for motor nerves. Recent literature suggests other factors may also
contribute to pain during peripheral nerve block, including withdrawal and repositioning of
needles and the strength of muscle contraction.
• With applications of square current pulses, the total charge (Q) applied to a nerve equals the
product of the current intensity (I) and the duration (t): Q = I × t
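As a quick sanity check of this relation, the delivered charge can be computed directly; the pulse values below are chosen for illustration, matching the typical figures quoted later in this section:

```python
def pulse_charge(current_a: float, duration_s: float) -> float:
    """Charge delivered by a square stimulus pulse: Q = I x t (coulombs)."""
    return current_a * duration_s

# A 0.5 mA pulse lasting 100 microseconds delivers 50 nanocoulombs.
q = pulse_charge(0.5e-3, 100e-6)
print(q)  # 5e-08 C, i.e. 50 nC
```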
Stimulation curve plotting current intensity and pulse duration. Preferential cathodal
stimulation.
• Regardless of stimulus intensity, a rate of current change that is too low will reduce nerve
excitability.
• Long subthreshold intensity or slowly increasing rates will inactivate sodium conductance and
prevent depolarization; this is termed accommodation.
• Direct electrical current flowing through two electrodes on a given nerve will stimulate the
nerve at the cathode (negative electrode) and resist excitation at the anode (positive electrode).
• Negative current from the cathode reduces voltage outside the neuronal cell membrane, causing
depolarization and an action potential; the anode injects positive current outside the membrane,
leading to hyperpolarization.
• Preferential cathodal stimulation (Figure 2.2) refers to the significantly reduced (one third to
one quarter) current that is required to elicit a motor response when the cathode is used as the
stimulating electrode.
Distance–Current Relationship
Generally, as the distance increases between the nerve and the stimulating electrode, a higher
stimulus current is required. Because the current varies with the inverse of the square of the
distance, a much larger stimulating current will be required as one moves away from the nerve. A
shorter pulse width requires more current to stimulate the nerves at greater distances, but is a
better discriminator of nerve–needle distance.
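The distance–current relationship can be sketched numerically. The quadratic form and the constants below are illustrative assumptions for demonstration only, not values from any particular stimulator:

```python
def threshold_current_ma(distance_mm: float, contact_threshold_ma: float = 0.3,
                         k: float = 0.1) -> float:
    """Illustrative distance-current model: the required stimulus current grows
    roughly with the square of the needle-nerve distance. Both constants here
    are hypothetical, chosen only to show the shape of the curve."""
    return contact_threshold_ma + k * distance_mm ** 2

# Moving from 1 mm to 4 mm away raises the required current substantially.
print(threshold_current_ma(1.0), threshold_current_ma(4.0))  # roughly 0.4 vs 1.9 mA
```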
Distance–current curve.
Current intensity at different pulse widths.
At the needle tip, the conductive area for current flow will modify the current density and
response threshold. Small conductive areas will condense the current and reduce the threshold
current for motor responses. The needle/catheter–tissue interface can affect the density as the
area of conductance can change with changing injectates (e.g., ion conductance variation) or
tissue composition. The needle is an extension of the stimulating electrode.
Conducting and nonconducting solutions vary significantly in their effect on the current at the
needle/catheter tip.
Electrodes
• Types of electrodes include insulated and noninsulated needles and stimulating catheters.
• Insulated needles have nonconducting shafts (e.g., Teflon) that direct the current density to a
sphere around the uncoated needle tip (i.e., small conducting area allowing low threshold current
stimulation).
The threshold current is minimal when the needle tip contacts the nerve and is approximately 0.5
to 0.7 mA with a pulse width of 100 μs when nerves are 2 to 5 mm away.
• Stimulating catheters are similar to insulated needles, except for the requirement of a much
higher threshold current with the use of saline for determining correct placement and/or dilating
the perineural space.
• Noninsulated needles are bare metal and transmit current throughout their entire shaft; the
current density at the tip is therefore much lower than with insulated needles.
Often more than 1 mA is required for nerve stimulation with noninsulated needles.
Injectates
• During nerve stimulation, the traditional test (Raj test) used for nerve localization includes a
test injection of local anesthetic or normal saline, which abolishes the muscle twitch response.
• This effect was previously thought to result from the force of the fluid causing nerve
displacement away from the needle tip. It is now known to be due to the conduction properties of
these solutions.
There are many different makes and models of nerve stimulators on the market and
anesthesiologists should familiarize themselves with the equipment available in their own
institution.
• Most modern nerve stimulators are now produced to utilize constant current rather than the
traditional voltage systems; this allows the current to remain the same regardless of resistance
variation.
Most machines can be adjustable in frequency, pulse width, and current strength (milliamperes).
• Clear digital displays (monitors) show the current delivered to the patient and the target current
setting.
Some stimulators have low (<6 mA) and high (<80 mA) output ranges for increased accuracy
during localization of peripheral nerves and monitoring neuromuscular blockade, respectively.
• Note that the amplitude of the current required for epidural stimulation is much higher (1–17
mA) than the low output range of some peripheral nerve stimulators; therefore, most stimulators
used solely for peripheral nerve blockade will not be suitable for this neuraxial application.
• Pulse width (i.e., duration of pulse) determines the amount of charge delivered and enables
selective stimulation of different nerve fibers (Figure 2.1).
For instance, sensory fibers are more effectively stimulated with longer pulse widths (400 μs)
than motor nerves (50–150 μs).
Some devices allow width ranges from 50 μs to 1 ms for high variation and selectivity depending
on the specific nerve block location.
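The trade-off between pulse width and threshold current is commonly modeled by the Lapicque strength–duration relation, I = I_rh × (1 + chronaxie/t). This is a standard textbook model rather than something specified in these notes, and the rheobase and chronaxie values below are purely illustrative:

```python
def lapicque_threshold(pulse_width_us: float, rheobase_ma: float = 0.2,
                       chronaxie_us: float = 150.0) -> float:
    """Lapicque strength-duration relation: I = Irh * (1 + chronaxie / t).
    The rheobase and chronaxie values used here are illustrative, not measured."""
    return rheobase_ma * (1.0 + chronaxie_us / pulse_width_us)

# Shorter pulses need a higher current to reach threshold:
print(round(lapicque_threshold(50.0), 3))   # 0.8 mA at 50 us
print(round(lapicque_threshold(400.0), 3))  # 0.275 mA at 400 us
```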
A recent study (Urmey and Grossi, 2006) suggests that utilizing pulse width variation (rather
than constant width as commonly practiced) through sequential electric nerve stimuli (SENS)
can increase sensitivity without compromising specificity of nerve location.
• Indicators displaying the status of battery power as well as those warning of incomplete
circuitry or pulse delivery failure are essential components of the machinery.
Other Accessories
• Probes (commercially available) for the performance of surface nerve mapping during
percutaneous electrode guidance procedures
Introduction
A nerve conduction velocity test (NCV) is an electrical test that is used to determine the
adequacy of the conduction of the nerve impulse as it courses down a nerve. This test is used to
detect signs of nerve injury. In this test, the nerve is electrically stimulated, and the impulse is
measured. This is usually done with surface electrodes that are placed on the skin over the nerve
at various locations. The distance between electrodes and the time it takes for electrical impulses
to travel between electrodes are used to calculate the speed (velocity) of impulse transmission. A
decreased speed of transmission indicates nerve disease.
Symptoms that might prompt a health care professional to order an NCV test include numbness,
tingling, and/or burning sensations. The NCV test can be used to detect true nerve disorders
(such as diabetic neuropathy) or conditions whereby nerves are affected by mechanical
compression injury (such as carpal tunnel syndrome and diseases of the spine).
Follow-up
Often NCVs are used to follow the progression of neurological disease. For example, with
Diabetic Peripheral Neuropathy (DPN), the recommendation of the American Diabetes
Association is for patients to be screened annually, at minimum, to follow the progression of the
diseased nerves and the effectiveness of treatment.
Conduction Velocity
• It is calculated by dividing the change in distance (between proximal stimulation site &
distal stimulation site in mm) by the change in time (proximal latency in ms minus distal
latency in ms)
• Normal values are > 50 meters/sec in the upper limbs and > 40 meters/sec in the lower
limbs.
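The calculation above can be expressed directly in code. The study values below are made up for illustration (200 mm between stimulation sites, latencies of 7.5 ms and 4.0 ms); note that mm/ms is numerically identical to m/s:

```python
def conduction_velocity(distance_mm: float, proximal_latency_ms: float,
                        distal_latency_ms: float) -> float:
    """Nerve conduction velocity = distance between stimulation sites (mm)
    divided by the latency difference (ms); mm/ms equals m/s numerically."""
    return distance_mm / (proximal_latency_ms - distal_latency_ms)

# Hypothetical upper-limb study: 200 mm between sites,
# proximal latency 7.5 ms, distal latency 4.0 ms.
cv = conduction_velocity(200.0, 7.5, 4.0)
print(cv)  # ~57 m/s, within the normal upper-limb range (> 50 m/s)
```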
Sensory, motor or mixed nerves can be studied. Pairs of electrodes are used – one to initiate the
impulse and the other to record the response further along the path of the nerve (distally within
the innervated muscle for motor nerves, or proximally along sensory nerves). For motor nerves, a
depolarizing square wave current is applied to the peripheral nerve to produce a compound
muscle action potential (CMAP) due to summation of the activated muscle fibers. In sensory
nerves, a propagated sensory nerve action potential (SNAP) is created in a similar manner.
During your NCV test, the electrical impulses may feel like little electric shocks. The good news
is that these sensations only last as long as the impulses themselves. Once the test is over, there
will not be any lasting discomfort.
Receptor             Group     Fiber     Velocity       Diameter   Stimulus
Golgi tendon organ   Ib        Aα        70–120 m/sec   12–20 µm   Muscle tension
Joint: Pacinian      II        Aβ        30–70 m/sec    6–12 µm    Joint movement
Joint: Ruffini       II        Aβ        30–70 m/sec    6–12 µm    Joint angle
Pacinian corpuscle   II        Aβ        30–70 m/sec    6–12 µm    Vibration
Ruffini corpuscle    II        Aβ        30–70 m/sec    6–12 µm    Skin stretch
Hair follicle        II & III  Aβ & Aδ   10–70 m/sec    2–12 µm    Touch movement
Merkel complex       II        Aβ        30–70 m/sec    6–12 µm    Fine touch
NCV Testing
Because the NCV test uses electrodes on the skin, you don’t need to do much to prepare for it.
You should wait to apply any lotions or creams to the area being tested until after the procedure.
Also, if you have a pacemaker or cardiac defibrillator, make sure to tell your practitioner so they
can take the necessary precautions before the NCV test begins.
Values may vary from one individual to another and from one nerve to another.
• Our muscles contract when the myosin heads bind to actin (cross-bridge formation),
causing the filaments to slide past one another.
• The lengths of the thin and the thick filaments do not change.
• Rather, the myofilaments slide over top of each other.
• Hence the name “Sliding Filament Theory”.
Two Problems:
o The binding sites on actin are blocked by tropomyosin!!
o Myosin needs energy to bind and move the actin.
How do we “Un-block” Actin?
• Calcium Ions (Ca2+) bind to Troponin causing tropomyosin to move and reveal the binding
sites.
o Calcium is a regulatory molecule for muscular contraction.
How does a muscle know when to release calcium?
• Calcium is released when the cell becomes depolarized.
o A resting muscle cell is “polarized”
o When an action potential from the motor neuron arrives, the cell becomes depolarized (due to
acetylcholine).
o This wave of depolarization is transported to the interior of the muscle fibre via the transverse
tubules (T-Tubules).
Where do the calcium ions come from?
• Calcium ions (Ca2+) come from the Sarcoplasmic Reticulum when the cell becomes
depolarized.
What Happens When We Relax Our Muscles?
• When the muscles relax:
o Ca2+ ions return to the sarcoplasmic reticulum.
o Tropomyosin slides back over the binding sites on Actin breaking the crossbridges
o The Actin filaments slide back to their original position.
Sliding Filament Theory: Overview
1) Brain releases a Nerve Impulse to initiate a movement
2) Nerve Impulse travels down the neuron to the neuromuscular junction (Axon Terminal)
3) The axon terminal releases the neurotransmitter acetylcholine
4) Acetylcholine crosses the synaptic cleft and binds to the receptors
on the sarcolemma
5) The sarcolemma becomes depolarized
6) The action potential is transported to the interior of the muscle via
the transverse tubules
7) The sarcoplasmic reticulum releases calcium ions
8) Calcium binds to troponin
9) Tropomyosin slides revealing myosin binding sites on Actin
10) ATP attaches to the head of myosin
PEDOBAROGRAPHY
Pedobarography is the study of pressure fields acting between the plantar surface of the foot and
a supporting surface. Used most often for biomechanical analysis of gait and posture,
pedobarography is employed in a wide range of applications including sports
biomechanics and gait biometrics . The term 'pedobarography' is derived from the Latin: pedes,
referring to the foot (as in: pedometer, pedestrian, etc.), and the Greek: baros meaning 'weight'
and also 'pressure' (as in: barometer, barograph).
HISTORY
The first documented pedobarographic study was published in 1882 and used rubber and ink to
record foot pressures.[1] Numerous studies using similar apparatus were conducted in the early-
and mid-twentieth century,[1][2] but it was not until the advent of the personal computer that
electronic apparatus were developed and that pedobarography became practical for routine
clinical use.[3] It is now used widely to assess and correct a variety of biomechanical and
neuropathic disorders.[4][5]
Example floor-based foot pressure measurement device.
Hardware
Devices fall into two main categories: (i) floor-based, and (ii) in-shoe. The underlying
technology is diverse, ranging from piezoelectric sensor arrays to light refraction,[2][4][6][7][8]
but the ultimate form of the data generated by all modern technologies is either a 2D image or a
2D image time series of the pressures acting under the plantar surface of the foot. From these
data, other variables may be calculated (see Data Analysis).
There are a few differences between the types of information you will receive from these two
systems, so depending on the application one system might be a better fit. For example, a floor-
based system will provide spatiotemporal information, such as stride length, that an in-shoe
system cannot provide. Platform (floor-based) systems also allow testing of patients who use
walking aids or other assistive devices. However, there is some controversy about evaluating
natural gait with a platform system, because patients may target the platform when walking. This
is where an in-shoe system provides an advantage, as it reduces the risk of targeting. Users
should carefully evaluate the differences between the systems, the patients they will be
evaluating, and the type of data they are interested in when selecting a system.[9]
The spatial and temporal resolutions of the images generated by commercial pedobarographic
systems range from approximately 3 to 10 mm and 25 to 500 Hz, respectively. Finer resolution is
limited by sensor technology. Such resolutions yield a contact area of approximately 500 sensors
(for a typical adult human foot with surface area of approximately 100 cm2).[10] For a stance
phase duration of approximately 0.6 seconds during normal walking, [11] approximately 150,000
pressure values, depending on the hardware specifications, are recorded for each step.
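The ~150,000 figure follows directly from the numbers above — roughly 500 loaded sensors sampled for 0.6 s at the upper-end 500 Hz rate:

```python
# Reproducing the back-of-envelope estimate in the text:
sensors = 500          # contact-area sensors for a typical adult foot
sample_rate_hz = 500   # upper end of commercial temporal resolution
stance_s = 0.6         # stance-phase duration in normal walking

values_per_step = sensors * int(sample_rate_hz * stance_s)
print(values_per_step)  # 150000 pressure values per step
```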
Data analysis
To deal with the large volume of data contained in each pedobarographic record, traditional
analyses reduce the data to a more manageable size in three stages: (1) Produce anatomical or
regional masks, (2) Extract regional data, and (3) Run statistical tests. Results are typically
reported in tabular or bar graph formats. There are also a number of alternative analysis
techniques derived from digital image processing methodology.[12][13][14] These techniques have
also been found to be clinically and biomechanically useful, but traditional regional analyses are
most common.
The most commonly analyzed pedobarographic variable is 'peak pressure', or the maximum
pressure experienced at each sensor (or pixel, if the sensors fall on a regular square grid) over the
duration of the step. Other variables, such as contact duration, pressure–time integral, and
center-of-pressure trajectory, are also relevant to the biomechanical function of the foot.
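Given a pressure-image time series, peak pressure and the other variables just mentioned reduce to per-pixel operations over the time axis. A minimal NumPy sketch — the grid size, frame count, and synthetic data here are assumptions for illustration, not a real record:

```python
import numpy as np

# Synthetic pressure record: 300 frames of a 30 x 15 sensor grid (values in kPa).
rng = np.random.default_rng(0)
frames = rng.random((300, 30, 15)) * 400.0

# Peak pressure: the maximum pressure seen at each sensor over the whole step.
peak_pressure = frames.max(axis=0)                   # shape (30, 15)

# Contact duration: number of frames each sensor was loaded (here, above 10 kPa).
contact_frames = (frames > 10.0).sum(axis=0)

# Pressure-time integral: pressure summed over time x frame period (500 Hz).
pressure_time_integral = frames.sum(axis=0) / 500.0  # kPa*s

print(peak_pressure.shape)  # (30, 15)
```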
Clinical use
The most widely researched clinical application of pedobarography is diabetic
foot ulceration,[15] a condition which can lead to amputation in extreme cases[16] but for which
even mild-to-moderate cases are associated with substantial health
care expenditure.[17] Pedobarography is also used in a variety of other clinical situations
including: post-surgery biomechanical assessment,[18] intra-operative assessment,[19] orthotics
design[20] and assessment of drop-foot surgery.[5] In addition to clinical applications,
pedobarography continues to be used in the laboratory to understand the mechanisms governing
human gait and posture.[3][7]
The use of pedobarographs in clinical settings is supported by researchers. According to Bowen,
et al., "Pediobarograph measurements can be used to monitor and quantitatively assess the
progressive changes of foot deformity over time. Pedobarograph is a reliable measurement that
shows little variability between measurements at the same occasion and between measurements
on different days."[21]
UNIT 4
WHOLE-BODY PLETHYSMOGRAPHY
Introduction
There are various ways of measuring absolute lung volume. These range from
measurements derived from chest radiographs to the more common laboratory measurements
employing either gas dilution or plethysmography. The principal difference between the latter
two methods is that gas dilution techniques measure gas that is in free communication with the
airway opening, whilst plethysmography measures all intrathoracic gas. Once the absolute lung
volume is known, the other lung volumes can be measured from the change in volume during
specific respiratory maneuvers.
For the purpose of this review, functional residual capacity (FRC) is defined as the
absolute volume of gas in the lung at the end of a normal expiration. Thoracic gas volume (TGV)
is defined as the volume of intrathoracic gas at the time the airway is occluded for the
plethysmographic measurement; while this is usually at FRC, in special circumstances it may not
be [1, 2]. Irrespective of where in the volume cycle TGV is measured, it should be adjusted to
the FRC derived from plethysmography (FRCpleth) by subtracting or adding the appropriate volume
correction. In normal children and adults, there should be no difference in FRC between gas
dilution techniques and plethysmography. However, in patients with lung disease associated with
gas-trapping, and in normal infants [3], FRCpleth generally exceeds FRC measured by gas dilution.
Types of plethysmography
Volume displacement plethysmograph
The modern volume-displacement plethysmograph is a rigid chamber, 300–600 L in volume
(fig. 2). Part of the chamber opens directly into the base of a spirometer with low inertia, usually
a Krogh-type instrument connected to a rotational or linear displacement transducer. With the
subject breathing from outside the chamber, the spirometer will measure large changes in lung
volume, forced vital capacity (FVC) maneuvers, etc. When the airway is occluded and the
subject pants, small changes in volume due to thoracic gas compression are also accurately
measured. With proper balancing (such as the use of springs to support the bellows of the
spirometer and a low-resistance pivoting device), the weight may be compensated and the
resistance of the system may be negligible; however, the mass of the spirometer is significant, so
inertia becomes a major determining factor in the frequency response. During rapid changes in
volume, the spirometer is unable to follow the volume changes, which leads to compression of
the air in the plethysmograph as the subject breathes in and rarefaction as the subject breathes
out. During the transition between the two, the inertia of the bellows leads to oscillations or
overshoot of the volume signal. Placing felt padding between the chamber and the spirometer
dampens the signal, reducing overshoot and oscillations. The degree of damping should be tested
by the introduction of a step function (rapid injection into the plethysmograph of a known
quantity of air using a syringe). In an underdamped system (i.e., no felt padding) there will be an
overshoot of the volume signal followed by oscillations of diminishing magnitude until a stable
volume is reached. In an overdamped system (i.e., too much felt padding) there will be a slow
rise of the volume signal to its final value.
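The underdamped and overdamped step responses described above can be illustrated with a generic mass–spring–damper model of the bellows. All parameters here are arbitrary illustrative values, not measurements of a real spirometer:

```python
def step_response_peak(damping: float, stiffness: float = 1.0, mass: float = 1.0,
                       dt: float = 0.001, t_end: float = 20.0) -> float:
    """Peak of the volume signal after a unit step input, for a generic
    mass-spring-damper model of the spirometer bellows (illustrative only).
    Uses semi-implicit Euler integration; the steady-state value is 1.0."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (1.0 - damping * v - stiffness * x) / mass
        x += dt * v
        peak = max(peak, x)
    return peak

print(step_response_peak(damping=0.2))  # underdamped: overshoots past 1.0
print(step_response_peak(damping=4.0))  # overdamped: creeps up, stays below 1.0
```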
Flow plethysmographs
In theory, the flow plethysmograph should be an ideal compromise between the variable
pressure and volume displacement plethysmographs (fig. 4). Absolute rigidity of the walls is not
necessary, problems with thermal time constants are minimized, and the frequency response,
after pressure compensation, should be close to that of a variable pressure plethysmograph.
Changes in volume of the lungs are measured by integrating the gas flow in and out of the
chamber as measured by the differential pressure across either a capillary-type
pneumotachograph or a wire mesh screen (25 μm mesh) mounted on the wall of the
plethysmograph. The latter is almost a pure resistance; the former has both resistance and
inertance. The sensitivity of the screen-type pneumotachograph to low flows can be increased by
adding several layers of low resistance screen but this also increases resistance and, hence, the
time constant, thereby reducing the frequency response. A 25 μm mesh screen with a 14 cm
diameter can be expected to have a linear range up to 15 L·s-1 and a resistance in the order of
0.003 kPa·L-1·s per layer of screen. This allows maximal flow to be measured during forced
expiration. For measurements of TGV and airway resistance, an 8 cm diameter screen with a
resistance in the order of 0.01 kPa·L-1·s can be used. Measuring volume at the plethysmograph
in this way would eliminate errors in flow-volume curves associated with the measurement of
volume at the airway opening rather than the actual lung volume during the maneuver.
SPIROMETRY (Unit-4)
Definition
Spirometry is a method of assessing lung function by measuring the volume of air a
patient can expel from the lungs after maximal inspiration
A spirometer is an apparatus for measuring the volume of air inspired and expired by the lungs.
A spirometer measures ventilation, the movement of air into and out of the lungs. The spirogram
will identify two different types of abnormal ventilation patterns, obstructive and restrictive.
There are various types of spirometers which use a number of different methods for
measurement (pressure transducers, ultrasonic, water gauge).
Diagnose certain types of lung disease (such as asthma, bronchitis, and emphysema)
Find the cause of shortness of breath
Measure whether exposure to chemicals at work affects lung function
Check lung function before someone has surgery
Assess the effect of medication
Measure progress in disease treatment
Electronic spirometers have been developed that compute airflow rates in a channel without the
need for fine meshes or moving parts. They operate by measuring the speed of the airflow with
techniques such as ultrasonic transducers, or by measuring pressure difference in the channel.
These spirometers have greater accuracy by eliminating the momentum and resistance errors
associated with moving parts such as windmills or flow valves for flow measurement. They also
allow improved hygiene between patients by allowing fully disposable air flow channels.
Incentive spirometer
This spirometer is specially designed to improve the functioning of one's lungs.
Peak flow meter
This device is useful for measuring a person's ability to breathe out air.
Windmill-type spirometer
Used especially for measuring forced vital capacity without using water; it has a broad
measurement range from 1000 ml to 7000 ml. It is more portable and lighter compared to the
traditional water-tank type spirometer. This spirometer should be held horizontally while taking
measurements because of the presence of a rotating disc.
Tilt-compensated spirometer
The tilt-compensated spirometer is also known as the AME Spirometer EVOLVE. This newer
spirometer can be held horizontally while taking measurements, but should the patient lean too
far forward or backward, the spirometer's 3D tilt sensing compensates and indicates the patient's
position.
WHY WE DO IT!
Diagnosis confirmation
• COPD classification
• Disease progression
• Response to treatment
• Health Promotion (Smoking Cessation)
• Targets
When not to perform spirometry
• Inadequate training
• Inadequate equipment
• Contra-indications
• Unstable cardiac status
• Aneurysm
• Recent eye surgery
• Recent thoracic or abdominal surgery
• Acute disorders: D&V, Exacerbations
How we do it!
• Equipment / spirometers /syringes
• Cleaning
• Temperature
• Calibration/Verification checks
Patient preparation
• Pre-test information
• Documentation
• Patient comfort
• Explanation/demonstration
FVC
The maximum volume of air exhaled as rapidly, forcefully and completely as possible from a
maximal inspiration
Relaxed Vital Capacity
The maximum volume of air expelled during a relaxed exhalation from a maximal
inspiration
• Obstructed
• Restricted
• Combined/Mixed
Restrictive Spirometry
Restrictive: due to conditions in which the lung volume is reduced, e.g., fibrosing
alveolitis, scoliosis. The FVC and FEV1 are reduced proportionately.
Reporting Spirometry
• Results should be the greatest values achieved from 3 technically acceptable blows. (FEV1
within 5%)
• Poorly performed spirometry is worse than no spirometry!
Terms
        Normal   Obstructive   Restrictive   Combined
FVC     >80%     Normal        Reduced       Reduced
FEV1    >80%     Reduced       Reduced       Reduced
Tidal Volume (TV): the quantity of air moved into and out of the lungs during a normal
breath; about 500 ml.
Minute Respiratory Volume (MRV): the quantity of air moved into and out of the lungs in
one minute; MRV = TV × respiratory rate, about 6000 ml/min.
Total Lung Capacity (TLC): the maximum quantity of air the lungs can hold.
Vital capacity (VC) = 4.6 L; VC = IRV + TV + ERV. The amount of air that can be forced out of
the lungs after a maximal inspiration, with emphasis on completeness of expiration; the
maximum volume of air that can be voluntarily moved in and out of the respiratory system.[2]
Forced vital capacity (FVC) = 4.8 L; measured. The amount of air that can be maximally forced
out of the lungs after a maximal inspiration, with emphasis on speed.[3][4]
Expiratory reserve volume (ERV) = 1.2 L; measured. The additional air that can be breathed out
after the end-expiratory level of normal breathing. (At the end of a normal breath, the lungs
contain the residual volume plus the expiratory reserve volume, or around 2.4 litres; if one then
exhales as much as possible, only the residual volume of 1.2 litres remains.)
Inspiratory reserve volume (IRV) = 3.6 L; IRV = VC − (TV + ERV). The additional air that can
be inhaled after a normal tidal breath in; the maximum volume of air that can be inspired in
addition to the tidal volume.
Functional residual capacity (FRC) = 2.4 L; FRC = ERV + RV. The amount of air left in the
lungs after a tidal breath out; the amount of air that stays in the lungs during normal breathing.
Inspiratory capacity (IC) = 4.1 L; IC = TV + IRV. The volume that can be inhaled after a tidal
breath out.
Anatomical dead space = 150 mL; measured. The volume of the conducting airways, measured
with the Fowler method.[5]
Physiologic dead volume = 155 mL. The anatomic dead space plus the alveolar dead space.
PNEUMOTACHOMETER:
Lung volumes and capacities are anatomic measurements that vary with age, weight, height and
sex of an individual. When affected by disease or trauma, the lung volumes and capacities are
altered to a certain degree, depending upon the severity of the disorder. Pulmonary tests can
show the effects of disease on function, but they cannot be used to give a diagnosis. However,
these tests do give valuable quantitative data, allowing the progress of a disease to be followed,
or the response to a treatment examined.
Lung volumes that are not affected by the rate of air movement in and out of the lungs are
termed static lung volumes. The following five static lung volumes can be measured: VT (tidal
volume), IRV (inspiratory reserve volume), ERV (expiratory reserve volume), IC
(inspiratory capacity) and VC (vital capacity).
Lung volumes that depend upon the rate at which air flows out of the lungs are termed dynamic
lung volumes. There are various dynamic tests, including the Forced Vital Capacity test and the
Maximum Voluntary Ventilation test.
The Forced Vital Capacity (FVC) is the volume of gas that can be exhaled as forcefully and
rapidly as possible after a maximal inspiration. Normally FVC = VC, however in certain
pulmonary diseases (characterized by increased airway resistance), FVC is reduced.
From the FVC test, we can also determine the Forced Expiratory Volume in 1 sec (FEV1), which
is the maximum volume of air that can be exhaled in a 1 sec time period. Normally the
percentage of the FVC that can be exhaled during 1 sec is around 80% (i.e. FEV1/FVC=80%).
Maximum Voluntary Ventilation (MVV) is the largest volume of air that can be breathed in
and out of the lungs in 1 minute. It will be reduced in pulmonary diseases due to increases in
airway resistance or changes in compliance.
In a restrictive lung disease, the compliance of the lung is reduced, which increases the
stiffness of the lung and limits expansion. In these cases, a greater pressure (ΔP) than normal is
required to give the same increase in volume (ΔV). Common causes of decreased lung
compliance are pulmonary fibrosis, pneumonia and pulmonary edema.
The FVC test allows one to clearly distinguish between the two disease types. Notice in the
obstructed lung how the FVC is smaller than normal, but also that the
FEV1 is much smaller than normal. This is because it is very difficult for a person with an
obstructive disease (e.g. asthma) to exhale quickly, due to the increase in airway resistance. As a
result, the FEV1/FVC ratio will be much lower than normal, for example 40% as opposed to
80%.
In the restricted lung, the FVC is again smaller than normal, but the FEV1 is relatively large in
comparison, i.e. the FEV1/FVC ratio can be higher than normal, for example 90% as opposed to
80%. This is because it is easy for a person with a restricted lung (e.g. fibrosis) to breathe out
quickly, because of the high elastic recoil of the stiff lungs.
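The FEV1/FVC reasoning above can be sketched in code. This is a minimal illustration only: the 0.70 ratio cut-off and the 80%-of-predicted FVC limit below are assumed for the example and are not clinical diagnostic criteria.

```python
def classify_spirometry(fev1_l, fvc_l, predicted_fvc_l):
    """Crude pattern screen from FVC-test results (volumes in litres).
    Thresholds are illustrative assumptions, not diagnostic criteria."""
    ratio = fev1_l / fvc_l
    if ratio < 0.70:
        return "obstructive pattern"   # low FEV1/FVC, e.g. asthma
    if fvc_l < 0.80 * predicted_fvc_l:
        return "restrictive pattern"   # normal/high ratio, small FVC
    return "normal pattern"

print(classify_spirometry(1.0, 2.5, 5.0))  # obstructive pattern (ratio 40%)
print(classify_spirometry(2.7, 3.0, 5.0))  # restrictive pattern (ratio 90%)
```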
Many important aspects of lung function can be determined by measuring airflow and the
corresponding changes in lung volume. Airflow can be measured directly with a
pneumotachometer and a transducer.
PNEUMOTACHOMETERS
Pneumotachometers are devices that measure the instantaneous rate of volume flow of respired
gases. Basically, there are two types of pneumotachometers:
(i) Differential manometer: It has a small resistance, which allows flow but causes a
pressure drop. This change is measured by a differential pressure transducer, which
outputs a signal proportional to the flow according to the Poiseuille law, assuming
that the flow is laminar. The unit is heated to maintain it at 37°C to prevent
condensation of water vapour from the expired breath.
(ii) Hot-wire anemometer: It uses a small heated element in the pathway of the gas
flow. The current needed to maintain the element at a constant temperature is
measured, and it increases proportionally to the gas flow that cools the element.
The pneumotachometer is commonly used to measure parameters pertaining to pulmonary
function such as forced expiratory volume (FEV), maximum mid-expiratory flow, peak
flow, and to generate flow-volume loops. Although these devices directly measure only
volume flow, they can be employed to derive absolute volume changes of the lung
(spirometry) by electronically integrating the flow signal. Conventional mechanical
spirometers, though more accurate than pneumotachometers, have limitations due to their
mechanical inertia, hysteresis and CO2 buildup. Pneumotachometers, on the other hand, are
relatively non-obstructive to the patient and this makes them suitable for long-term
monitoring of patients with respiratory difficulties. A basic requirement of
pneumotachometers (PTM) is that they should present a minimum resistance to breathing. An
acceptable resistance would be between 0.5 and 1.0 cm H2O s/l. The pressure drop across the
flow head at peak flow is also indicative of PTM resistance. Fleisch PTMs normally have a
peak flow pressure drop of around 1.5 cm H2O. Normal respiratory phenomena have
significant frequency components up to only 10 Hz, and devices with a frequency response
extending that far should be quite suitable for most applications. More often, it is not the frequency response but the
response time, which is generally specified. The response time of a typical ultrasonic
spirometer is 25 ms. The dead space volume of the flow head should be as small as possible.
A bias flow into the flow head is sometimes introduced to prevent rebreathing of expired air.
A good zero stability is a prerequisite of PTMs to prevent false integration during volume
measurements.
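Deriving volume by electronically integrating the flow signal, as mentioned above, can be sketched numerically. The half-sine breath waveform below is an assumed test signal, not measured data:

```python
import math

def integrate_flow(flow_lps, dt_s):
    """Trapezoidal integration of a sampled flow signal (litres/s)
    into cumulative volume (litres); dt_s is the sampling interval."""
    volume = 0.0
    volumes = [0.0]
    for a, b in zip(flow_lps, flow_lps[1:]):
        volume += 0.5 * (a + b) * dt_s
        volumes.append(volume)
    return volumes

# A half-sine "inspiration": 1 s long, 0.5 L/s peak, sampled at 100 Hz
dt = 0.01
flow = [0.5 * math.sin(math.pi * n * dt) for n in range(101)]
vol = integrate_flow(flow, dt)
print(round(vol[-1], 3))  # inhaled volume, about 0.318 L (= 1/pi)
```

Good zero stability of the flow signal matters here: any constant offset in the flow samples integrates into a steadily growing volume error, which is why the text lists it as a prerequisite.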
Fleisch-type pneumotachometers
pressure. To convert this pressure into an electrical signal, a second transducer is required. A
capacitance type pressure transducer is used in such applications. They are more stable and less
vibration-sensitive than resistive or inductive type transducers. At high flow rates, turbulence
develops in the hose leading to the pneumotach and its response tends to become non-linear. This
limits the usable range of the transducer.
The relationship between pressure drop and flow is given by ΔP = AV + BV², where the term
BV² introduces the non-linearity effect. This non-linearity is generally corrected electronically.
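The electronic linearization can be illustrated by inverting the quadratic pressure-flow relation for the flow. The calibration constants A and B below are assumed values for illustration:

```python
import math

def flow_from_pressure(dp, a, b):
    """Invert dp = a*v + b*v**2 for the non-negative flow v.
    a and b are calibration constants of the flow head; the b*v**2
    term models the turbulence non-linearity."""
    if b == 0:
        return dp / a            # purely linear (laminar) regime
    # positive root of b*v^2 + a*v - dp = 0
    return (-a + math.sqrt(a * a + 4.0 * b * dp)) / (2.0 * b)

# With a = 1.0 (cmH2O per L/s) and b = 0.1, a 1.1 cmH2O drop -> 1 L/s
print(round(flow_from_pressure(1.1, 1.0, 0.1), 3))  # 1.0
```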
According to Poiseuille’s law, the pressure developed across a pneumotach by laminar gas flow
is directly proportional to the gas viscosity. The viscosity (η) of a mixture of gases is
approximated by the equation

η = X1η1 + X2η2 + ... + Xnηn

where X1 is the fraction of gas having the viscosity η1. This necessitates the application of an
automatic correction factor to the flow rate for changes in viscosity. Hobbes (1967) studied the
effect of temperature on the performance of a Fleisch head and found that the output increased
by 1% for each degree C rise. He also noted that the effect of saturating air at 37°C with water
vapour was to reduce the output from the head by 1.2% as compared with dry air at the same
temperature. The calibration of a pneumotachograph head in terms of volume flow rate can be
done by passing known gas flows through it. The flow can be produced by a compressor and
measured with a rotameter type gas flowmeter. Most respiratory parameters are reported in
BTPS conditions (body temperature, ambient pressure, saturated with water vapour). This is the
condition of air in the lungs and the mouth. To prevent condensation and maintain the gas under
these conditions, the temperature of the pneumotach is maintained at 37°C. The heater that
warms the pneumotach is electrically isolated from the metal case for patient safety, and is
encapsulated so that the entire unit may be immersed in liquid for sterilization. The thermistor
that senses the temperature and controls the heater through a proportional controller, is buried in
the metal case.
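A minimal sketch of the viscosity and temperature corrections described above, assuming the linear mixture rule and Hobbes' figure of a 1% output rise per °C. The O2/N2 viscosity values used are illustrative reference numbers, not from the text:

```python
def mixture_viscosity(fractions, viscosities):
    """Linear approximation for the viscosity of a gas mixture:
    eta = X1*eta1 + X2*eta2 + ...; fractions should sum to 1."""
    return sum(x * eta for x, eta in zip(fractions, viscosities))

def temperature_corrected_flow(indicated_flow, t_cal_c, t_meas_c):
    """Undo the ~1% per degree C output drift reported by Hobbes by
    dividing the indicated flow by the drift factor."""
    return indicated_flow / (1.0 + 0.01 * (t_meas_c - t_cal_c))

# Air as roughly 21% O2 / 79% N2; viscosities in uPa*s near 20 degrees C
print(round(mixture_viscosity([0.21, 0.79], [20.4, 17.6]), 2))  # 18.19
# Head calibrated at 32 degrees C but running at 37 degrees C
print(round(temperature_corrected_flow(1.05, 32.0, 37.0), 3))   # 1.0
```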
Venturi type:
This type works similarly to the Fleisch pneumotachometer, but has a venturi throat in place of
the linear resistance element. The resulting pressure drop is proportional to the square of the
volume flow. Venturi types have an open geometry and are therefore less prone to problems of
liquid collection. Their main disadvantages are the non-linearity of calibration and the
requirement for laminar flow.
Turbine type:
In this design, air flowing through the transducer rotates a very low mass (0.02 g) turbine blade
mounted on jewel bearings. Rotation of the turbine blade interrupts the light beam of a
light-emitting diode (LED). The interrupted light beam falls on a phototransistor, which produces
a train of pulses, which are processed and accumulated to correspond to an accumulated volume
in litres.
A special feature of this transducer is a bias air flow, applied to the turbine blades from a
pump. This flow keeps the blades in constant motion even without the sample flow through it.
This allows measurement of sample air flow in the range of 3 to 600 l/min in the most linear
range of the volume transducer, by overcoming much of the rotational inertia of the turbine. The
‘ZERO’ control of the volume transducer adjusts the bias air flow to produce a train of clock
pulses of exactly the same frequency as those generated by the crystal oscillator.
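The pulse accumulation described above can be sketched as follows. The bias pulse rate and the litres-per-pulse calibration constant are assumed values for illustration:

```python
def volume_from_pulses(pulse_count, bias_pulse_rate_hz, duration_s,
                       litres_per_pulse):
    """Accumulated volume from a turbine transducer's pulse train.
    The bias air flow contributes a constant pulse rate that must be
    subtracted before scaling by the calibration constant."""
    sample_pulses = pulse_count - bias_pulse_rate_hz * duration_s
    return sample_pulses * litres_per_pulse

# 10 s of breathing: 1200 total pulses, 20 Hz bias rate, 2 mL per pulse
print(volume_from_pulses(1200, 20, 10, 0.002))  # 2.0 litres
```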
Conditions affecting pneumotachometry
NEBULIZER (UNIT-4)
Definition
A nebulizer is a device that is used for breathing treatments. It changes liquid medicine
into a mist. The mist goes into your lungs when you inhale (breathe in).
How does a nebulizer work?
A nebulizer consists of a machine to power the nebulizer and a tube that connects the
machine to the medicine container. The medicine is changed into a mist in the medicine
container. The machine or container has a valve that can increase or decrease the amount of mist
you receive. You breathe in the mist through a mask or mouthpiece.
Prepare the medicine: If your medicine is premixed, open it and place it in the nebulizer
medicine container. If you have to mix medicines, place the right amounts into the
container using a dropper or syringe.
Add saline if needed: You may need to add saline (saltwater) to your medicine
container. Buy sterile normal saline at a drugstore. Do not use homemade saline solution
in a nebulizer.
Connect the container: Connect the medicine container to the machine.
Attach the mask or mouthpiece to the container:
o Adults and older children: Place the mouthpiece in your mouth. Breathe in and
out slowly through your mouth until all the medicine is gone.
o Infants and younger children: Place the mask on your child's face. You may
need to distract your child during the treatment to keep him from removing the
mask.
Start the treatment: Turn on the machine. Keep the medicine container in an upright
position. You may need to tap the sides of the container toward the end of the treatment.
This will help the last of the medicine become mist. The whole treatment may take 8 to
10 minutes. The treatment is over when all the medicine is gone or there is no more mist
coming out. The machine may also make a sputtering noise when treatment is done.
What are the advantages and disadvantages of a nebulizer?
Nebulizers can be used by anyone of any age. You can mix more than 1 medicine, and
they can all be given at the same time. High doses of medicines can be used. The
medicine is delivered as you breathe normally. No special breathing techniques are
needed to use a nebulizer.
The machine is noisy and needs an electrical power source for it to function. Compared to
other inhalation devices, it is larger, less portable, and has a longer treatment time.
LUNG INTRA-ALVEOLAR PRESSURE (UNIT-4)
Pulmonary plethysmographs are commonly used to measure the functional residual
capacity (FRC) of the lungs—the volume in the lungs when the muscles of respiration are
relaxed—and total lung capacity.
In a traditional plethysmograph, the test subject is placed inside a sealed chamber the size of a
small telephone booth with a single mouthpiece. At the end of normal expiration, the mouthpiece
is closed. The patient is then asked to make an inspiratory effort. As the patient tries to inhale (a
maneuver which looks and feels like panting), the lungs expand, decreasing pressure within the
lungs and increasing lung volume. This, in turn, increases the pressure within the box since it is a
closed system and the volume of the box compartment has decreased to accommodate the new
volume of the subject.
Boyle's Law is used to calculate the unknown volume within the lungs. First, the change in
volume of the chest is computed: the product of the initial pressure and volume of the box is set
equal to the known pressure after expansion times the unknown new volume. Once the new
volume is found, the original volume minus the new volume is the change in volume of the box,
and also the change in volume of the chest. With this information, Boyle's Law is used again to
determine the original volume of gas in the chest: the initial volume (unknown) times the initial
pressure is equal to the final volume times the final pressure.
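The two applications of Boyle's law can be worked through numerically. This is a simplified sketch (isothermal compression, rigid box, absolute pressures in cmH2O), and the numbers in the example are illustrative, not measured data:

```python
def frc_from_plethysmograph(p_box_initial, v_box_initial, p_box_final,
                            p_lung_initial, p_lung_final):
    """Two applications of Boyle's law, following the text."""
    # Step 1: box gas obeys P1*V1 = P2*V2, giving the new box volume;
    # the box volume lost equals the chest (lung) volume gained.
    v_box_final = p_box_initial * v_box_initial / p_box_final
    delta_v = v_box_initial - v_box_final
    # Step 2: lung gas obeys P1*V1 = P2*(V1 + delta_v); solve for V1,
    # the original gas volume in the chest (the FRC).
    return p_lung_final * delta_v / (p_lung_initial - p_lung_final)

# Box of 600 L whose pressure rises 1030.00 -> 1030.05 cmH2O during the
# effort, while lung pressure falls 1030 -> 1020 cmH2O.
print(round(frc_from_plethysmograph(1030.0, 600.0, 1030.05,
                                    1030.0, 1020.0), 2))  # about 2.97 L
```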
The difference between full and empty lungs can be used to assess diseases and airway passage
restrictions. An obstructive disease will show increased FRC because some airways do not empty
normally, while a restrictive disease will show decreased FRC. Body plethysmography is
particularly appropriate for patients who have air spaces which do not communicate with the
bronchial tree; in such patients helium dilution would give an incorrectly low reading.
Another important parameter, which can be calculated with a body plethysmograph is the airway
resistance. During inhalation the chest expands, which increases the pressure within the box.
While observing the so-called resistance loop (cabin pressure plotted against flow), diseases can
easily be recognized. If the resistance loop becomes flattened, this indicates poor compliance of
the lung. COPD, for instance, can easily be identified because of the unique shape of the
corresponding resistance loop.
Whole-body plethysmography is used to measure respiratory parameters in conscious
unrestrained subjects, including quantification of bronchoconstriction.
The standard plethysmograph sizes are for the study of mice, rats and guinea pigs. On request,
larger plethysmographs can also be manufactured for other animals, such as rabbits, dogs, pigs,
or primates.
The plethysmograph has two chambers, each fitted with a pneumotachograph. The subject is
placed in one of them (subject chamber) and the other remains empty (reference chamber).
The pressure change is measured by a differential pressure transducer with one port exposed to
the subject chamber and the other to the reference chamber.
HUMIDIFIERS (Unit-4)
Introduction
Humidifiers can ease problems caused by dry air. But they need regular maintenance.
Here are tips to ensure your humidifier doesn't become a household health hazard. Dry sinuses,
bloody noses and cracked lips — humidifiers can help soothe these familiar problems caused by
dry indoor air. Humidifiers can also help ease symptoms of a cold or another respiratory
condition. But be cautious: Although useful, humidifiers can actually make you sick if they aren't
maintained properly or if humidity levels stay too high. If you use humidifiers, be sure to
monitor humidity levels and keep your humidifier clean. Dirty humidifiers can breed mold or
bacteria. If you have allergies or asthma, talk to your doctor before using a humidifier.
Description
Humidifiers are devices that emit water vapor or steam to increase moisture levels in the air
(humidity). There are several types:
Central humidifiers are built into home heating and air conditioning systems and are
designed to humidify the whole house.
Ultrasonic humidifiers produce a cool mist with ultrasonic vibration.
Impeller humidifiers produce a cool mist with a rotating disk.
Evaporators use a fan to blow air through a wet wick, filter or belt.
Steam vaporizers use electricity to create steam that cools before leaving the machine.
Avoid this type of humidifier if you have children; hot water inside this type of
humidifier may cause burns if spilled.
Ideal humidity levels
Humidity is the amount of water vapor in the air. The amount of humidity varies depending on
the season, weather and where you live. Generally, humidity levels are higher in the summer and
lower during winter months. Ideally, humidity in your home should be between 30 and 50
percent. Humidity that's too low or too high can cause problems.
Low humidity can cause dry skin, irritate your nasal passages and throat, and make your
eyes itchy.
High humidity can make your home feel stuffy and can cause condensation on walls,
floors and other surfaces that triggers the growth of harmful bacteria, dust mites and
molds. These allergens can cause respiratory problems and trigger allergy and asthma
flare-ups.
Measurement of humidity
The best way to test humidity levels in your house is with a hygrometer. This device, which
looks like a thermometer, measures the amount of moisture in the air. Hygrometers can be
purchased at hardware stores and department stores. When buying a humidifier, consider
purchasing one with a built-in hygrometer (humidistat) that maintains humidity within a healthy
range.
RESPIRATORY MEASUREMENT SYSTEM
APNOEA MONITOR:
(APNOEA DETECTORS)
Apnoea is the cessation of breathing which may precede the arrest of the heart and
circulation in several clinical situations such as head injury, drug overdose,
anaesthetic complications and obstructive respiratory diseases.
Apnoea may also occur in premature babies during the first weeks of life because of
their immature nervous system.
If apnoea persists for a prolonged period, brain function can be severely damaged.
Therefore, apnoeic patients require close and constant observation of their respiratory
activity.
Apnoea monitors are particularly useful for monitoring the respiratory activity of
premature infants.
Several contactless methods are available for monitoring the respiration of infants.
The most successful apnoea monitors to-date have been the mattress monitors.
These instruments rely for their operation on the fact that the process of breathing
redistributes an infant’s weight and this is detected by some form of a pressure sensitive
pad or mattress on which the infant is nursed.
The mattress, in its simplest form, is a multi-compartment air bed, and in this case the
weight redistribution forces air to flow from one compartment to another.
The air flow is detected by the cooling effect it produces on a heated thermistor bead.
Though the technique is simple, the main disadvantage with the air mattress is the
short-term sensitivity variation and the double peaking effect when inspiration or
expiration produce separate cooling of the thermistor.
Alternatively, a capacitance type pressure sensor in the form of a thin square pad is
usually placed under or slightly above the infant’s head.
Respiratory movements produce regular pressure changes on the pad and these alter
the capacitance between the electrode plates incorporated in the pad.
This capacitance change is measured by applying a 200 kHz signal across the
electrodes and by detecting the current flow with a phase-sensitive amplifier.
Two types of electrodes can be used: (i) 70 mm plates, 350 mm apart in a plastic tube
which is placed alongside the body; (ii) 250 mm long, 60 mm diameter cylinders placed
one on either side of the body.
This system is much too sensitive to people moving nearby and thus an electrically
screened incubator is essential for the infant.
Apnoea monitors are generally designed to give audio-visual signals under apnoeic
conditions when no respiration occurs within a selectable period of 10, 20 or 30 s.
The apnoea monitors are basically motion detectors and are thus subject to other
motion artefacts also which could give false readings.
The instruments must, therefore, provide means of elimination of these error sources.
Fig. 1.5 shows a block diagram of an apnoea monitor.
Fig. 1.5 Block Diagram of Apnoea Monitor
The input circuit consists of a high input impedance amplifier which couples the
input signal from the sensor pad to the logic circuits.
The output of the amplifier is adjusted to zero volts with offset adjustment provided in
the amplifier.
The amplified signal goes to motion and respiration channels connected in parallel.
In the case of motion signals, high level signals above a fixed threshold are detected
from the sensor.
Low frequency signals below 1.5 Hz (respiration) cause the output of the Schmitt
trigger circuit to pulse at the respiration rate.
Higher frequency signals, above 1.5 Hz (motion), cause the output of the trigger to go
positive.
Absence of the signal (apnoea) causes the output of the Schmitt trigger to go
negative.
The outputs of the motion and the respiration signals are combined in a comparator
circuit, which compares the polarities of the motion and respiration channel signals to
indicate respiration.
The output of the discrimination detector also goes to an apnoea period selector
circuit, a low frequency alarm oscillator and driver, a tone oscillator and audio
amplifier connected to a speaker.
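The selectable apnoea period logic can be sketched as a simple timeout check. The breath-timestamp input is an assumed abstraction of the respiration channel's pulse output:

```python
def apnoea_alarm(breath_times_s, now_s, apnoea_period_s=20):
    """Return True when no respiration pulse has been detected within
    the selectable apnoea period (10, 20 or 30 s in the text).
    breath_times_s: ascending timestamps of detected breaths."""
    if not breath_times_s:
        return True                      # no breath seen at all
    return (now_s - breath_times_s[-1]) > apnoea_period_s

breaths = [0.0, 2.1, 4.0, 6.2]           # regular breathing events
print(apnoea_alarm(breaths, 8.0))        # False: last breath 1.8 s ago
print(apnoea_alarm(breaths, 30.0))       # True: 23.8 s without a breath
```

In the real instrument this check runs only on signals the comparator has already classified as respiration rather than motion, so that motion artefacts do not reset the timer falsely.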
VENTILATOR (Unit-4)
Introduction
Medical ventilators are sometimes colloquially called "respirators," a term which stems
from commonly used devices in the 1950s (particularly the "Bird Respirator"). However, in
modern hospital and medical terminology, these machines are never referred to as respirators,
and use of "respirator" in this context is now a deprecated anachronism which signals technical
unfamiliarity.
Function
In its simplest form, a modern positive pressure ventilator consists of a compressible air
reservoir or turbine, air and oxygen supplies, a set of valves and tubes, and a disposable or
reusable "patient circuit". The air reservoir is pneumatically compressed several times a minute
to deliver room-air, or in most cases, an air/oxygen mixture to the patient. If a turbine is used, the
turbine pushes air through the ventilator, with a flow valve adjusting pressure to meet patient-
specific parameters. When overpressure is released, the patient will exhale passively due to the
lungs' elasticity, the exhaled air being released usually through a one-way valve within the
patient circuit called the patient manifold. The oxygen content of the inspired gas can be set from
21 percent (ambient air) to 100 percent (pure oxygen). Pressure and flow characteristics can be
set mechanically or electronically.
Ventilators may also be equipped with monitoring and alarm systems for patient-related
parameters (e.g. pressure, volume, and flow) and ventilator function (e.g. air leakage, power
failure, and mechanical failure), backup batteries, oxygen tanks, and remote control. The
pneumatic system is nowadays often replaced by a computer-controlled turbo pump.
The patient circuit usually consists of a set of three durable, yet lightweight plastic tubes,
separated by function (e.g. inhaled air, patient pressure, exhaled air). Determined by the type of
ventilation needed, the patient-end of the circuit may be either noninvasive or invasive.
Noninvasive methods, which are adequate for patients who require a ventilator only
while sleeping and resting, mainly employ a nasal mask. Invasive methods require intubation,
which for long-term ventilator dependence will normally be a tracheotomy cannula, as this is
much more comfortable and practical for long-term care than is larynx or nasal intubation.
Life-critical system
Because the failure of a mechanical ventilation system may result in death, it is classed as a life-
critical system, and precautions must be taken to ensure that mechanical ventilation systems are
highly reliable. This includes their power-supply provision.
Mechanical ventilators are therefore carefully designed so that no single point of failure can
endanger the patient. They may have manual backup mechanisms to enable hand-driven
respiration in the absence of power (such as the mechanical ventilator integrated into an
anaesthetic machine). They may also have safety valves, which open to atmosphere in the
absence of power to act as an anti-suffocation valve for the spontaneously breathing patient.
Some systems are also equipped with compressed-gas tanks, air compressors, and/or backup
batteries to provide ventilation in case of power failure or defective gas supplies, and methods to
operate or call for help if their mechanisms or software fail.
Biphasic cuirass ventilation
Biphasic cuirass ventilation (BCV) is a method of ventilation which requires the patient to
wear an upper body shell or cuirass, so named after the body armor worn by medieval soldiers.
The ventilation is biphasic because the cuirass is attached to a pump which actively controls both
the inspiratory and expiratory phases of the respiratory cycle. This method has also been
described as 'negative pressure ventilation' (NPV), 'external chest wall oscillation' (ECWO),
'external chest wall compression' (ECWC) and 'external high frequency oscillation' (EHFO).
BCV may be considered a refinement of the iron lung ventilator. Biphasic cuirass ventilation was
developed by Dr Zamir Hayek, a pioneer in the field of assisted ventilation. Some of Dr Hayek's
previous inventions include the Hayek Oscillator, an early form of the technology.
As a part of intensive care, the patients often require assistance with breathing.
When artificial ventilation is required for a long time, a ventilator is used to provide
oxygen enriched, medicated air to a patient at a controlled temperature.
Ventilators operate in three modes:
(i) Controlled
(ii) Assisted
(iii) Assist-control
In the controlled mode, the ventilator provides inspirations and expirations at fixed rates,
except during the rest period for the patient.
In the case of assisted breathing, the patient’s own spontaneous attempt to breathe in
causes the ventilator to cycle on during inspiration.
Thus it is used for the patient who has difficult breathing due to high air way
resistance.
There are servo controlled ventilators which can switch automatically to any mode
depending upon the condition of the patient.
In this mode, the patient controls his own breathing as long as he can, but if he should
fail to do so, the control mode is able to take over for him.
(a) Adequate ventilation by which enough oxygen is supplied and the right amount of
carbon dioxide is eliminated. Thus hyperventilation which creates respiratory
alkalosis and hypoventilation which creates respiratory acidosis are avoided.
(c) Increased intra thoracic pressure which prevents atelectasis that is collapse of portions
of the lung and counteracts edema of the lung.
Every ventilator operates cyclically. During insufflation or inspiration air or some
other gaseous mixture is pumped into the lungs.
The regulation is obtained by pressure limited, volume limited and servo controlled
systems.
Pressure limited ventilators are based on the principle that insufflation is
terminated when the pressure of the gaseous mixture pumped into the patient’s lungs
reaches a pre-set value.
Pressure-limited ventilators are driven by the compressed gaseous mixture used for
ventilation.
The Fig. 1.7 shows the functional diagram of a positive pressure ventilator.
Fig. 1.7 Functional Diagram of a Positive Pressure Ventilator
Volume limited ventilators are based on the principle that for each breath, a constant
volume of air is delivered.
During insufflation, the constant volume of air is sent into the lungs by applying
pressure to a chamber containing constant volume.
The volume limited ventilators do not give the desired ventilation in cases where the
pre-set maximum pressure cannot completely empty the chamber.
Servo controlled ventilators are based on the usage of modern electronic control
techniques such that the flow to and from the patient is controlled by feedback circuits.
The electronic unit controls the amplifiers and logic circuits that control the
ventilation.
1. During patient inspiration, the compressor draws room air through an air filter and
passes it to the main solenoid.
2. Main solenoid forces the bottom inlet valve of the internal bellows chamber to open
and the lower outlet valve to close.
5. When the medicated air is forced into lungs through the valve number 1, the
spirometer is in closed condition. When the inspiration is complete, the main solenoid
switches the directions of the pneumatic air to do the expiration cycle.
6. After the end of patient expiration, the system electronics trip the main solenoid,
thereby initiating the patient inspiration part of the cycle. Nowadays the
microprocessor based control circuits are used in the ventilator system to improve the
system’s reliability and accuracy.
Fig. 1.8 shows the microprocessor based automatic feedback control of a mechanical
ventilator.
The input signals to the microprocessor are obtained from a CO2 analyser, a lung
machine, gas analyser, oxygen consumption monitor and the servo ventilator.
The proper controlling signals are delivered to the servo ventilator so as to get correct
ventilation adjustment in response to patient’s metabolism.
Fig. 1.8 Microprocessor Based Ventilator
It limits the expiratory phase time if the patient does not initiate the inspiratory phase
and is common to ventilators used for assisted ventilation.
Cycling control of a ventilator is the device which determines the change from the
inspiratory phase to the expiratory phase and vice versa.
The cycling of a ventilator may be based upon different factors such as pressure,
volume, time and the inspiratory effort made by the patient.
Volume Cycled: A ventilator which starts the expiratory phase after a preset tidal
volume has been delivered into the patient circuit. This device normally has a pressure
over-ride valve so that if, while the machine is in the process of administering the set
volume, the pressure exceeds a predetermined maximal value, the ventilator will
cycle whether or not the appropriate volume has been administered.
Pressure Cycled: A ventilator which begins the expiratory phase after a preset
pressure has been attained.
Time Cycled: A ventilator which initiates the expiratory phase after a preset time
period for the inspiratory phase has passed.
Pressure Cycled: A ventilator which begins the inspiratory phase after a pre-set end
expiratory pressure has been attained.
Time Cycled: A ventilator which initiates the inspiratory phase after a preset time
period for the expiratory phase has passed.
Patient Inspiratory Effort Cycled: A ventilator which starts the inspiratory phase in
response to the inspiratory effort.
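The volume-cycled rule with its pressure over-ride can be sketched as a simple decision function. The numeric limits below are illustrative, not clinical settings:

```python
def inspiration_should_end(delivered_ml, airway_pressure_cmh2o,
                           set_volume_ml=500, max_pressure_cmh2o=40):
    """Cycling rule for a volume-cycled ventilator with a pressure
    over-ride valve: the expiratory phase begins when the preset tidal
    volume has been delivered OR the pressure limit is exceeded,
    whichever comes first."""
    if airway_pressure_cmh2o >= max_pressure_cmh2o:
        return True    # pressure over-ride trips before full volume
    return delivered_ml >= set_volume_ml

print(inspiration_should_end(500, 25))  # True: tidal volume reached
print(inspiration_should_end(300, 45))  # True: pressure limit exceeded
print(inspiration_should_end(300, 25))  # False: keep delivering
```

A pressure-cycled machine would test only the pressure condition, and a time-cycled machine only the elapsed inspiratory time, mirroring the definitions above.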
HUMIDIFIERS:
Apart from ventilation, humidification of the breathing gas plays a leading role in the
intensive care of patients.
The main task of a humidifier is to replace humidity in the upper air passages which
has been lost by intubation.
The humidity should be as close to 100% as possible, or speaking in terms of water, the
absolute content per litre breathing gas should be more than 30 mg, regardless of
environmental conditions.
Therefore, in order to prevent damage to the patient’s lungs, the air or oxygen applied
during respiratory therapy must be humidified.
Thus, all ventilators include arrangements to humidify the air, either by heat
vapourization (steam) or by bubbling an air stream through a jar of water.
NEBULIZERS:
When water or some type of medication suspended in the inspired air as an aerosol is
to be administered to the patient, a device called a nebulizer is used.
In this device, the water or medication is picked up by a high velocity jet of air/oxygen
and made to impact against one or more baffles to break the substance into controlled-
sized droplets which are then applied to the patient via a respirator.
More effective and efficient nebulizers are based on the use of high intensity
ultrasound energy which vibrates the substance (water or medication) to produce a
high volume of minute particles.
Ultrasonic nebulizers do not depend upon breathing gas for operation and thus
therapeutic agents can be conveniently administered during ventilation procedure.
ASPIRATORS:
Aspirators are often included as part of a ventilator to remove mucus and other fluids
from the airways.
Alternatively, a separate suction device may be utilized to achieve the same purpose.
INHALATORS:
The term inhalator generally indicates a device used to supply oxygen or some other
therapeutic gases to a patient who is able to breathe spontaneously without assistance.
As a rule, inhalators are used when a concentration of oxygen higher than that of the
air is required.
The inhalator consists of a source of the therapeutic gas, and a device for
administering the gas.
Devices for administering oxygen to patients include nasal cannulae and catheters,
face masks that cover the nose and mouth, and, in certain settings, such as paediatrics,
oxygen tents.
The oxygen concentration presented to the patient is controlled by adjusting the flow of
gas into the mask.
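The relationship between gas flow and delivered oxygen concentration can be illustrated with a common clinical rule of thumb (not stated in these notes, so treat it as an illustrative assumption): for a nasal cannula, each litre per minute of oxygen flow raises the inspired oxygen fraction by roughly 4% above room air's 21%. A minimal sketch:

```python
def estimated_fio2(flow_lpm: float) -> float:
    """Approximate inspired oxygen fraction (FiO2) for a nasal cannula.

    Uses the common clinical rule of thumb (~4% FiO2 per L/min above
    room air's 21%), valid only up to roughly 6 L/min; the actual FiO2
    also depends on the patient's breathing pattern, so this is an
    estimate, not a measurement.
    """
    if not 0 <= flow_lpm <= 6:
        raise ValueError("rule of thumb only applies up to ~6 L/min")
    return 0.21 + 0.04 * flow_lpm
```

For example, a 2 L/min flow gives an estimated FiO2 of about 29%.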
UNIT V
SLIT LAMP
The slit lamp is an instrument consisting of a high-intensity light source that can be focused to
shine a thin sheet of light into the eye. It is used in conjunction with a biomicroscope. The lamp
facilitates an examination of the anterior segment, or frontal structures and posterior segment, of
the human eye, which includes the eyelid, sclera, conjunctiva, iris, natural crystalline lens, and
cornea. The binocular slit-lamp examination provides a stereoscopic magnified view of the eye
structures in detail, enabling anatomical diagnoses to be made for a variety of eye conditions. A
second, hand-held lens is used to examine the retina.
History
To fully understand the development of the slit lamp, one must consider that the invention
and its improvements were accompanied by the introduction of new examination
techniques. Two conflicting trends emerged in the development of the slit lamp. One trend
originated from clinical research and aimed at an increase in functions and the introduction and
application of the increasingly complex and advanced technology of the time.[1] The second
trend originated from ophthalmologic practice and aimed at technical perfection and a restriction
to useful methods and applications of the instrument. The first man credited with
developments in this field was Hermann Von Helmholtz (1850) when he invented the
ophthalmoscope.[2]
In ophthalmology and optometry, the instrument is most commonly referred to simply as the
“slit lamp”, although “slit lamp instrument” would be more correct.[3] Today’s instrument
is a combination of two separate lines of development: the corneal microscope and the slit lamp
itself. Though the modern device combines these two developments, the first concept of the slit
lamp dates back to 1911 and is credited to Alvar Gullstrand and his “large reflection-free
ophthalmoscope”.
Procedure
While a patient is seated in the examination chair, they rest their chin and forehead on a support
to steady the head. Using the biomicroscope, the ophthalmologist or optometrist then proceeds to
examine the patient's eye. A fine strip of paper, stained with fluorescein, a fluorescent dye, may
be touched to the side of the eye; this stains the tear film on the surface of the eye to aid
examination. The dye is naturally rinsed out of the eye by tears.
A subsequent test may involve placing drops in the eye in order to dilate the pupils. The drops
take about 15 to 20 minutes to work, after which the examination is repeated, allowing the back
of the eye to be examined. Patients will experience some light sensitivity for a few hours after
this exam, and the dilating drops may also cause increased pressure in the eye, leading to nausea
and pain. Patients who experience serious symptoms are advised to seek medical attention
immediately.
Adults need no special preparation for the test; however children may need some preparation,
depending on age, previous experiences, and level of trust.
Variations in methods
Observation with an optical section or direct focal illumination is the most frequently applied
method of examination with the slit lamp. With this method, the axes of the illuminating and
viewing paths intersect in the area of the anterior eye media to be examined, for example, the
individual corneal layers.[10]
If media, especially that of the cornea, are opaque, optical section images are often impossible
depending on severity. In these cases, direct diffuse illumination may be used to advantage. For
this, the slit is opened very wide and a diffuse, attenuated survey illumination is produced by
inserting a ground glass screen or diffuser in the illuminating path.[11] "Wide beam"
illumination is the only type that has the light source set wide open. Its main purpose is to
illuminate as much of the eye and its adnexa as possible at once for general observation.[12]
Indirect illumination
With this method, light enters the eye through a narrow to medium slit (2 to 4 mm) to one side of
the area to be examined. The axes of the illuminating and viewing paths do not intersect at the
point of image focus; to achieve this, the illuminating prism is decentered by rotating it about its
vertical axis off the normal position. In this way, reflected, indirect light illuminates the area of
the anterior chamber or cornea to be examined. The observed corneal area then lies between the
incident light section through the cornea and the irradiated area of the iris. Observation is thus
against a comparatively dark background.[13]
Retro-illumination
In certain cases, illumination by optical section does not yield sufficient information or is
impossible. This is the case, for example, when larger, extensive zones or spaces of the ocular
media are opaque. In these cases the scattered light, which is normally not very bright, is absorbed. A similar
situation arises when areas behind the crystalline lens are to be observed. In this case the
observation beam must pass a number of interfaces that may reflect and attenuate the light.[13]
With this type of illumination, a wide light beam is directed onto the limbal region of the cornea
at an extremely low angle of incidence and with a laterally de-centered illuminating prism.
Adjustment must allow the light beam to transmit through the corneal parenchymal layers
according to the principle of total reflection allowing the interface with the cornea to be brightly
illuminated. The magnification should be selected so that the entire cornea can be seen at a
glance.[14]
Fundus (eye) observation is normally the domain of the ophthalmoscope and the fundus camera. With the
slit lamp, however, direct observation of the fundus is impossible due to the refractive power of
the ocular media. In other words, the far point of the eye (punctum remotum) lies so far in front
of the eye (myopia) or behind it (hyperopia) that the microscope cannot be focused. The use of auxiliary
optics - generally as a lens – makes it possible however to bring the far point within the focusing
range of the microscope. For this various auxiliary lenses are in use that range in optical
properties and practical application.[15]
Interpretation
The slit lamp exam may detect many diseases of the eye, including:
• Cataract
• Conjunctivitis
• Diabetic retinopathy
• Fuchs' dystrophy
• Macular degeneration
• Presbyopia
• Retinal detachment
• Retinitis pigmentosa
• Sjögren's syndrome
• Toxoplasmosis
• Uveitis
One sign that may be seen in slit lamp examination is a "flare", which is when the slit-lamp beam
is seen in the anterior chamber. This occurs when there is breakdown of the blood-aqueous
barrier with resultant exudation of protein.
A tonometer is a tool used to check the pressure exerted by the fluid inside a person's eyes in
terms of millimeters of mercury (mmHg). This is done to make sure the eyes and optic nerves are
healthy. There are several different types, including those that touch the eyeball directly, those
that only touch the eyelid, and those that don't touch the eye at all. Though most are usually very
accurate, some things can cause inaccurate readings.
Purpose
The eyes are filled with fluid, which exerts pressure on the optic nerves and the outside of the
eyeball. This is called intraocular pressure (IOP), and measures between 10 mmHg and 21
mmHg in most healthy humans. An abnormally high IOP is a very common sign of glaucoma, but
it can also be a symptom of an inflamed iris or retinal detachment. An abnormally low IOP can be a sign
that fluid is leaking from the eye or that the eye is not producing enough fluid to keep up with
normal drainage. This can increase a person's risk for cataracts and retinal detachment, and often
leads to a decrease in vision. Optometrists often use tonometers to screen for these conditions
and monitor those with known eye problems, particularly glaucoma.
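The 10-21 mmHg screening range described above can be sketched as a simple classifier. The function name is illustrative, and this is a screening illustration only, not a diagnostic rule (corneal thickness and other factors can bias a reading, as the notes discuss later):

```python
def classify_iop(iop_mmhg: float) -> str:
    """Screen an intraocular pressure reading against the typical
    healthy range of 10-21 mmHg quoted in the notes.

    A 'high' result may suggest glaucoma risk; a 'low' result may
    indicate fluid leakage or underproduction. Screening only, not
    a diagnosis.
    """
    if iop_mmhg < 10:
        return "low"
    if iop_mmhg <= 21:
        return "normal"
    return "high"
```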
Main Types
Many tonometers measure IOP by pressing or bouncing a device against the cornea, which is the
front part of the eyeball that covers the iris, the pupil, and a small chamber containing fluid.
Though these are very commonly used, some people don't like them because they usually require
the use of numbing drops in the eyes. Common types of corneal contact tonometers include the
following:
Goldmann: This is considered the industry standard for tonometry, and works by
touching the end of the device to the cornea to measure IOP. This process is called
applanation. Perkins and Maklakov tonometers can also be used to do this.
PASCAL Dynamic Contour Tonometer (DCT): The device works by placing a small,
pressure-sensitive concave tip onto the cornea.
Tono-Pen/Accu-Pen: This type comes in a pen shape and works by means of electronic
indentation tonometry, measuring IOP with an electronic transducer.
Icare: This measures IOP by bouncing a small probe against the cornea. The recoil
creates an induction current, which can be used to measure IOP. This method is called
rebound tonometry.
Schiötz: A device that works by means of impression tonometry, a process in which the
optometrist measures the depth of the impression a small plunger makes on the cornea.
There are also devices that measure IOP through the eyelid, as opposed to actually touching the
cornea. The most common type is the Diaton tonometer, which works by bouncing a rod off of
the eyelid, then measuring the resulting rebound. Some people prefer this method because it
usually doesn't involve anesthetic drops.
Some tools work without touching the eye at all. This is known as non-contact or "air puff"
tonometry, since most non-contact versions work by shooting a small puff of air at the cornea,
and then measuring the force needed to flatten it. Unlike most corneal contact tools, air puff
devices do not usually require eye drops, and the results are available within seconds. Another
type is an Ocular Response Analyzer (ORA), which uses two puffs of air to measure the
difference between the pressure on the cornea as it's going inward and then as it returns to its
normal shape.
Mitigating Factors
The accuracy of a tonometer reading can be affected by several factors. People tend to have
slight differences in the thickness and hardness of their corneas, so a person with a particularly
hard cornea might have an abnormally high IOP reading but still be healthy. Other factors, like
illness, eye inflammation, caffeine consumption, or exercise can also influence a person's IOP.
Eye doctors may have a hard time getting a measurement if the person moves around during the
procedure, which is why air puff, Icare, or Diaton models are usually used for children, people
who are uncomfortable with items touching the eye, and those who prefer not to use eye drops.
REFRACTOMETER
Refractometry
Standard refractometers measure the extent of light refraction (as part of a refractive index) of
transparent substances in either a liquid or solid state; this is then used in order to identify a
liquid sample, analyse the sample's purity and determine the amount or concentration of
dissolved substances within the sample. As light passes from air into the liquid it slows
down, creating a 'bending' effect; the severity of the 'bend' depends on the amount
of substance dissolved in the liquid, for example the amount of sugar in a glass of water.
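The 'bending' described above is quantified by Snell's law, n1·sin(θ1) = n2·sin(θ2). A small sketch (the function name is illustrative; the refractive indices used in the example are standard textbook values, not from these notes):

```python
import math

def refraction_angle(theta_incident_deg: float, n1: float, n2: float) -> float:
    """Angle of the refracted ray (degrees) from Snell's law.

    Illustrates the 'bending' described above: a higher sample index n2
    (e.g. more dissolved sugar in water) bends the ray further toward
    the normal, i.e. gives a smaller refraction angle.
    """
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))
```

For light entering water (n ≈ 1.333) from air at 30°, the refracted ray is bent to about 22°; sugar water with a higher index bends it slightly more.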
Types of refractometers
There are four main types of refractometers: traditional handheld refractometers, digital handheld
refractometers, laboratory or Abbe refractometers (named for the instrument's inventor and based
on Ernst Abbe's original design of the 'critical angle') and inline process refractometers.[2] There
is also the Rayleigh Refractometer used (typically) for measuring the refractive indices of gases.
In laboratory medicine, a refractometer is used to measure the total plasma protein in a blood
sample and urine specific gravity in a urine sample.
In drug diagnostics, a refractometer is used to measure the specific gravity of human urine.
In gemology, the gemstone refractometer is one of the fundamental pieces of equipment used in
a gemological laboratory. Gemstones are transparent minerals and can therefore be examined
using optical methods. Refractive index is a material constant, dependent on the chemical
composition of a substance. The refractometer is used to help identify gem materials by
measuring their refractive index, one of the principal properties used in determining the type of a
gemstone. Due to the dependence of the refractive index on the wavelength of the light used (i.e.
dispersion), the measurement is normally taken at the wavelength of the sodium line D-line
(NaD) of ~589 nm. This is either filtered out from daylight or generated with a monochromatic
light-emitting diode (LED). Certain stones such as rubies, sapphires, tourmalines and topaz are
optically anisotropic. They demonstrate birefringence based on the polarisation plane of the light.
The two different refractive indexes are classified using a polarisation filter. Gemstone
refractometers are available both as classic optical instruments and as electronic measurement
devices with a digital display.[3]
In marine aquarium keeping, a refractometer is used to measure the salinity and specific gravity
of the water. In the automobile industry, a refractometer is used to measure the coolant
concentration. In the machine industry, a refractometer is used to measure the amount of coolant
concentrate that has been added to the water-based coolant for the machining process. In
homebrewing, a brewing refractometer is used to measure the specific gravity before
fermentation to determine the amount of fermentable sugars which will potentially be converted
to alcohol.
Brix refractometers are often used by hobbyists for making preserves including jams,
marmalades and honey. In beekeeping, a brix refractometer is used to measure the amount of
water in honey.
Automatic refractometer
Automatic refractometers automatically measure the refractive index of a sample. The automatic
measurement of the refractive index of the sample is based on the determination of the critical
angle of total reflection. A light source, usually a long-life LED, is focused onto a prism surface
via a lens system. An interference filter guarantees the specified wavelength. Due to focusing
light to a spot at the prism surface, a wide range of different angles is covered. As shown in the
figure "Schematic setup of an automatic refractometer" the measured sample is in direct contact
with the measuring prism. Depending on its refractive index, the incoming light below the
critical angle of total reflection is partly transmitted into the sample, whereas for higher angles of
incidence the light is totally reflected. This dependence of the reflected light intensity on the
incident angle is measured with a high-resolution sensor array. From the video signal taken with
the CCD sensor the refractive index of the sample can be calculated. This method of detecting
the angle of total reflection is independent of the sample properties. It is even possible to
measure the refractive index of optically dense, strongly absorbing samples or samples containing
air bubbles or solid particles. Furthermore, only a few microliters are required and the sample
can be recovered. This determination of the refraction angle is independent of vibrations and
other environmental disturbances.
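The measurement just described ultimately evaluates the critical-angle relation n_sample = n_prism · sin(θc). A minimal illustration, assuming the prism index is known from the instrument's calibration (the function name and example values are illustrative, not from the notes):

```python
import math

def sample_index_from_critical_angle(theta_c_deg: float, n_prism: float) -> float:
    """Refractive index of the sample from the measured critical angle
    of total reflection at the prism/sample interface:

        n_sample = n_prism * sin(theta_c)

    In an automatic refractometer theta_c is found from the position of
    the light/dark boundary on the sensor array; n_prism comes from
    factory calibration.
    """
    return n_prism * math.sin(math.radians(theta_c_deg))
```

With a prism of index 1.7, a critical angle of about 51.6° corresponds to a water-like sample index of about 1.333.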
Influence of wavelength
The refractive index of a given sample varies with wavelength for all materials. This dispersion
relation is nonlinear and is characteristic for every material. In the visible range, a decrease of the
refractive index comes with increasing wavelength. In glass prisms very little absorption is
observable. In the infrared wavelength range several absorption maxima and fluctuations in the
refractive index appear. To guarantee a high quality measurement with an accuracy of up to
0.00002 in the refractive index the wavelength has to be determined correctly. Therefore, in
modern refractometers the wavelength is tuned to a bandwidth of +/-0.2 nm to ensure correct
results for samples with different dispersions.
Influence of temperature
Temperature has a very important influence on the refractive index measurement. Therefore, the
temperature of the prism and the temperature of the sample have to be controlled with high
precision. There are several subtly-different designs for controlling the temperature; but there are
some key factors common to all, such as high-precision temperature sensors and Peltier devices
to control the temperature of the sample and the prism. The temperature control of these devices
should be designed so that the variation in sample temperature is small enough that it will not
cause a detectable refractive-index change.
Automatic refractometers are microprocessor-controlled electronic devices. This means they can
have a high degree of automation and can also be combined with other measuring devices.
Flow cells
There are different types of sample cells available, ranging from a flow cell for a few microliters
to sample cells with a filling funnel for fast sample exchange without cleaning the measuring
prism in between. The sample cells can also be used for the measurement of poisonous and toxic
samples with minimum exposure to the sample. Micro cells require only a few microliters
volume, assure good recovery of expensive samples and prevent evaporation of volatile samples
or solvents. They can also be used in automated systems for automatic filling of the sample onto
the refractometer prism. For convenient filling of the sample through a funnel, flow cells with a
filling funnel are available. These are used for fast sample exchange in quality control
applications.
Figure: Automatic refractometer with sample changer for automatic measurement of a large
number of samples.
Once an automatic refractometer is equipped with a flow cell, the sample can either be filled by
means of a syringe or by using a peristaltic pump. Modern refractometers have the option of a
built-in peristaltic pump. This is controlled via the instrument‘s software menu. A peristaltic
pump opens the way to monitor batch processes in the laboratory or perform multiple
measurements on one sample without any user interaction. This eliminates human error and
assures a high sample throughput.
SPEECH AUDIOMETRY (Unit-5)
Introduction
more than 12 dB = conductive hearing loss; if it is less than 12 dB, the loss is sensory;
and if lower than that, the hearing loss is neural.
To serve as a reference point for deciding the appropriate level at which to
administer supra-threshold speech recognition tests.
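The 12 dB criterion quoted above can be sketched as a simple air-bone-gap classifier. This is a hedged illustration: clinical practice commonly uses a 10 dB criterion, and separating sensory from neural loss requires further tests (such as the speech audiometry this section describes), so the sketch only distinguishes conductive from sensorineural loss:

```python
def classify_by_air_bone_gap(air_db: float, bone_db: float) -> str:
    """Classify hearing loss from the air-bone gap, using the 12 dB
    cut-off quoted in these notes.

    A large gap (air-conduction thresholds much worse than bone)
    points to a conductive loss; a small gap with elevated thresholds
    points to a sensorineural loss.
    """
    gap = air_db - bone_db
    return "conductive" if gap > 12 else "sensorineural"
```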
Each sound consists of a particular frequency, and a certain intensity is required to perceive
that frequency. Each sound is different at each place. Noise differs in terms of manner of
articulation rather than place of articulation. Each sound has three parameters:
• Frequency
• Intensity
• Duration of sound
These three parameters are different for each speech sound. They differ from individual to
individual and from time to time, and also vary depending upon the preceding and following
sounds. When this is plotted on the audiogram it is called the 'Articulation Index'. The speech
spectrum level of a normal speaker at a distance of 1 meter is 65 dB SPL. The speech spectrum
is also called the 'Speech Banana'.
Need for speech audiometry
The PTA does not describe the communication ability of the person, and the communication
ability of two persons is not the same. Thus speech audiometry helps in different aspects such as:
• To cross-check the pure tone threshold.
• To find out the type of hearing loss
• To find out the degree of hearing loss
• Help in hearing aid selection
• Help in identifying functional hearing loss
• Helps in identifying the site of lesion.
Closed vs Open set (a closed set uses headphones; an open set is a free-field situation. In
an open set the patient responds through the better ear, so a closed set gives a better
response than an open set.)
Carrier Phrase (the instruction or phrase which precedes the stimulus words during
speech audiometry; designed to prepare the patient for the test)
Types of hearing loss (Conductive performs better than Sensory. Sensory performs better
than Neural)
Introduction
A pure tone threshold audiogram is a graph of the lowest signal intensity, as a function of
signal frequency, that the person under test can hear under certain measurement conditions.
The level is usually drawn as the amount of deviation from the standard threshold value (for air-
conducted signals presented via earphones the thresholds are published in the ISO 389-8:2004 and
ISO 389-5:2006 standards), so it is easily visible whether the person's hearing is in the normal
range. The audiometer is the calibrated clinical device used for audiogram measurement.
Audiogram measurement
In the method of constant stimuli, a series of tones of certain frequency and level is
presented to the listener, whose task is to respond if he perceives the signal. The level at
which the number of responses equals half of the signal presentations is set as the threshold.
This method is the most accurate but also the most time consuming.
In the method of adjustment, the listener can control the level of the signal and is instructed to
set it to the just-barely heard value. If it were set to a lower level, the listener would not be able
to hear it at all. This just-barely heard value is taken as the threshold. This method is the most
inaccurate and least time consuming.
In the method of limits, the signal of certain frequency and level is presented to the
listener and his response is recorded. The signal level is changed and the response is again
recorded. The threshold is the lowest level at which the response occurred in at least 50% of the
presentations. Since the accuracy of this method is somewhere in the middle between that of the
two other methods and it is less time consuming than the method of constant stimuli, it is
suitable for manual audiometry.
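The method-of-limits rule above (threshold = lowest level with responses in at least 50% of presentations) can be sketched as follows; the data structure is an illustrative assumption, not part of any audiometer's actual interface:

```python
def threshold_method_of_limits(trials):
    """Estimate a hearing threshold with the method-of-limits rule:
    the threshold is the lowest presentation level at which the
    listener responded in at least 50% of presentations.

    `trials` maps level (dB HL) -> list of booleans (heard / not).
    Returns None if no level reaches 50% responses.
    """
    candidates = [
        level for level, responses in trials.items()
        if responses and sum(responses) / len(responses) >= 0.5
    ]
    return min(candidates) if candidates else None
```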
Preparation of test subjects
The tester shall adopt an effective communication strategy with the subject throughout.
This must take account of the subject’s age, hearing, language skills and any other possible
communication difficulties. Any significant communication problems shall be recorded as these
may affect the subject’s performance. Audiometry shall be preceded by otoscopic examination
(see BSA, 2010) and the findings recorded, including the presence of wax. Occluding wax may
be removed prior to audiometry but if wax is removed the procedure shall only be undertaken by
someone who is qualified and competent to do so.
Test time
Care should be taken not to fatigue the subject as this can affect the reliability of the test
results. If the test time exceeds 20 minutes, subjects may benefit from a short break.
Subject’s response
The subject’s response to the test tone should clearly indicate when the test tone is heard
and when it is no longer heard. The response system should be inaudible, with a response button
connected to a signal light being the preferred method. When testing younger children, adults with
learning difficulties or subjects with attention difficulties a more engaging response method may
be required, and if so this shall be recorded.
Earphones
There are three main types of transducers that can be used for air-conduction audiometry:
supra-aural, circum-aural and insert earphones. Supra-aural earphones (e.g. Telephonics TDH39
and TDH49) rest on the ear and have traditionally been used for air-conduction audiometry.
Circum-aural earphones (e.g. Sennheiser HDA200) surround and cover the entire ear. However,
both supra- and circum-aural earphones can be cumbersome, particularly when used for masking
bone conduction thresholds, and may cause the ear canal to collapse. Insert earphones (e.g.
Etymotic Research ER3 and ER5) use a disposable foam tip to direct the sound straight into the
ear canal and therefore prevent the ear canal from collapsing. Insert earphones are also
associated with less transcranial transmission.
Test order
Start with the better-hearing ear (according to the subject’s account) and at
1000 Hz. Next, test 2000 Hz, 4000 Hz, 8000 Hz, 500 Hz and 250 Hz in that order. Then, for the
first ear only, retest at 1000 Hz. If the retest value is no more than 5 dB different from the
original value take the more sensitive threshold as the final value, but if the retest value differs
from the original value by more than 5 dB then the reason for the variation shall be investigated.
The subject may need to be re-instructed and the full test repeated for that ear. Unusually
variable results shall be noted on the audiogram. Where needed and practicable, test also at
intermediate frequencies 750 Hz, 1500 Hz, 3000 Hz and 6000 Hz (3000 Hz and 6000 Hz may be
required in cases of high-frequency hearing loss). Test the opposite ear in the same order. The
retest at 1000 Hz is normally not required in the second ear unless tests in the first ear revealed
significant variation.
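The 1000 Hz retest rule in the procedure above can be expressed as a small helper (names are illustrative; the 5 dB criterion and the "take the more sensitive threshold" rule are taken directly from the text):

```python
def check_1khz_retest(original_db: float, retest_db: float):
    """Apply the 1000 Hz retest rule from the test order: if the
    retest is within 5 dB of the original, keep the more sensitive
    (lower) threshold; otherwise flag the ear for re-instruction and
    a full repeat.

    Returns (final_threshold_or_None, repeat_needed).
    """
    if abs(retest_db - original_db) <= 5:
        return min(original_db, retest_db), False
    return None, True
```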
PSYCHO-PHYSIOLOGICAL MEASUREMENTS FOR TESTING AND SENSORY
RESPONSES (UNIT-5)
Introduction
Advances in computer technologies have improved people’s multi-tasking performance.
However, human attention is a finite resource and the benefit of being able to process multiple
streams of information comes with a cost. Cognitive demands and limitations will ebb and flow
in situations of divided attention, due to an interruption of a primary task, or engaging in dual-
(or multi)-tasking, making the prediction of when information can be attended to particularly
hard.
For example, in the context of an interruption, attention switches from one task to
another, whether the interruption is relevant or a distraction. Consider, for example, a navigation
display that is deemed useful, annoying, or even dangerous as it continually delivers information
to the driver, or attending to an information stream on a mobile device while walking, driving, or
listening to a lecture. The ubicomp community can benefit greatly from learning the most salient
human measures of cognitive load. Such an understanding can help designers and developers
gauge when and how to best communicate information, particularly with the focus in ubicomp
on proactively and seamlessly providing the right information at the right time. Presenting
information at the wrong time can drastically increase one’s cognitive demands, can have
negative impacts on task performance and emotional state, and in extreme cases can even be
life threatening.
Cognitive load
Cognitive load is defined as a multidimensional construct representing the load that a
particular task imposes on the performer. This also refers to the level of perceived effort for
learning, thinking and reasoning as an indicator of pressure on working memory during task
execution. This measure of mental workload represents the interaction between task processing
demands and human capabilities or resources.
Subjective rating-based methods (self-reporting)
Both subjective and objective methods have been used to assess a user’s cognitive load. We
first discuss the subjective approaches. A number
of studies have found that post-hoc self-reports of cognitive load are a relatively reliable method
for assessing mental effort [34]. In fact, the most commonly used assessment for cognitive load
is the subjective NASA task load index (TLX) tool [17]. Despite widespread use of the NASA
TLX, other studies do not consider the self-reports to be reliable indicators of cognitive load
[e.g., 28]. The subjective, post-hoc nature of this assessment approach can make it difficult to
apply in ubicomp systems where automated and immediate assessment is often crucial.
Task Performance-based methods
Combinational methods
A few researchers have attempted to integrate behavioral models into a performance
model. This integration can help predict the performance effect of, for example, different phone
dialing interfaces, and driving steering tasks. While these approaches are very promising, they
require the creation of a sophisticated task model using, for example, ACT-R or GOMS that is
specific to the task being studied. Instead, we are interested in a more generalized method for
assessing cognitive load.
In this study, we focus on ‘visual perception’ and ‘cognitive speed’ among the human cognitive
abilities addressed in the literature. These abilities highly engage spatial orientation or spatial
attention, which
are highly leveraged in today’s world of location-based services, situations of divided attention,
and ubicomp applications where you may be attending to one activity (e.g., crossing the street)
and are either interrupted by incoming information (e.g., text from a friend or ad from a nearby
store) or seeking information (e.g., search for information on a car that just drove past). Based on
a review of a number of cognitive factors to assess the elementary cognitive abilities, we
identified the major discriminable first-order factors in the areas of visual perception or ‘major
spatial factors’ and cognitive speed. These factors are ‘flexibility of closure’ (CF), ‘speed of
closure’ (CS) and ‘perceptual speed’ (PS).
Test bed
A Java-based application was implemented for presenting the six ECTs to subjects. We
counterbalanced both the order of the ECT question types and the difficulty of the question sets
for each type using a Balanced Latin Square design. For each question set, the subject was given
3 minutes to review the question slides and answer the questions. If this set time was exceeded,
the subject was automatically directed to a task difficulty rating slide and the test continued with
the next set of questions. Before each question set, the subject was asked to close his/her eyes for
a brief period of mental relaxation. The application logs the subjects’ answers and ratings along
with a time stamp and current question set information (type and difficulty level), so that the
performance (task completion time and number of correct answers) can be analyzed.
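The Balanced Latin Square counterbalancing mentioned above can be generated with a standard construction for an even number of conditions. This is a generic sketch of the technique, not the study's actual code; it guarantees every condition appears once in each position and immediately follows every other condition exactly once:

```python
def balanced_latin_square(n):
    """Balanced Latin square of condition orderings for an even n.

    Row i is the canonical sequence 0, 1, n-1, 2, n-2, ... shifted
    by i modulo n, the standard counterbalancing construction used
    to control order effects across participants.
    """
    if n % 2:
        raise ValueError("this simple construction requires an even n")
    # Build the canonical first-row sequence 0, 1, n-1, 2, n-2, ...
    order, lo, hi, take_lo = [0], 1, n - 1, True
    while len(order) < n:
        order.append(lo if take_lo else hi)
        lo, hi = (lo + 1, hi) if take_lo else (lo, hi - 1)
        take_lo = not take_lo
    # Each subsequent row is the same sequence shifted by one.
    return [[(x + i) % n for x in order] for i in range(n)]
```

For the six ECTs, `balanced_latin_square(6)` yields six orderings, one per participant group.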
Six ECTs
As stated earlier, we selected six ECTs that mapped onto the 3 contextual factors (speed
of closure, flexibility of closure and perceptual speed) identified earlier. The ECT contents and
scoring methods used, originated from conventional ECTs based in psychology and cognitive
science and were adapted to allow manipulation for task difficulty. We now describe the ECTs
we presented to our subjects.
ECT1 - GC (Gestalt Completion) test: this test measures the ‘speed of closure (CS)’ factor [51].
The subject was asked to look at an incomplete line drawing and try to identify it. For each level
of difficulty, 5 unique images were presented, with the complexity of the images higher in the
high level of difficulty than in the low level.
ECT2 - HP (Hidden Pattern) test: this test measures the ‘flexibility of closure (CF)’ factor.
Figure: Average Time-on-Task (sec) vs. Task Difficulty (Low/High). The number of participants
who did not finish a task within the time limit is in parentheses.
Psycho-Physiological sensors
In this study, we used four sensor devices – a contactless eye tracker, Body Media
armband, wireless EEG headset, and a wireless heart rate monitor – to measure the
psychophysiological signals from our participants during task execution. Three computers (main
tester, eye tracking system, headset reader) were used to collect data and had their clocks
synchronized to allow for data integration.
Contactless eye tracker
Earlier work has shown the value of tracking eye movements and changes in pupil size as
measures of cognitive load. We used a Smart Eye 5.5.2 eye tracking system
(http://www.smarteye.se) to detect and record the pupillometry (change in pupil size) of
participants. The system is comprised of two cameras (Sony XC-HR50 with 12 mm lenses) and
two Infrared (IR) flashes. The eye tracking system was calibrated for each participant, through a
standard eye profiling task.
Wireless HR monitor
Finally, HR and HRV were shown to have value in assessing cognitive load [10, 29, 52],
so we used a Polar RS800CX HR monitor (http://www.polar.fi/en) to collect interbeat interval
(IBI) information with an accuracy of 1 ms. The device is comprised of a wireless transmitter
attached to an elastic strap worn around the chest of the participant and a wrist worn training
computer that stores the collected data.
Data analysis
We now discuss our approach for analyzing the psychophysiological data for creating
models of cognitive load.
Data
We recorded six psycho-physiological signals with the four devices. These were the
interbeat interval signal measured with the HR monitor, galvanic skin response mean (32 Hz),
heat flux mean (1 Hz) and ECG MAD information (32 Hz) measured with the armband, pupil
diameter (60 Hz) measured by the eye tracker and EEG (128 Hz) measured with the headset. In
addition, the headset gave eight power values (1 Hz) and two mental state outputs (1 Hz) derived
from the raw signal.
Preprocessing
Before analysis, the heart rate IBI data was preprocessed by removing outliers falling
outside the range of 35-155 bpm (387-1714 ms). The GSR values were observed to have an
increasing trend at the beginning of each measurement caused by the properties of the
measurement device. This trend was removed and the lowest and highest 0.1 percent of values
from each participant were excluded as outliers.
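The preprocessing steps above can be sketched in Python. The IBI cut-offs (387-1714 ms) and the 0.1 % trimming are the values stated in the text; the detrending method is not specified, so a simple linear fit is assumed here, and the function names are illustrative.

```python
import numpy as np

def clean_ibi(ibi_ms):
    """Keep only interbeat intervals within 387-1714 ms (i.e., 35-155 bpm)."""
    ibi = np.asarray(ibi_ms, dtype=float)
    return ibi[(ibi >= 387) & (ibi <= 1714)]

def clean_gsr(gsr):
    """Remove the measurement-device trend (a linear fit is assumed here)
    and trim the lowest and highest 0.1 % of values as outliers."""
    gsr = np.asarray(gsr, dtype=float)
    t = np.arange(len(gsr))
    slope, intercept = np.polyfit(t, gsr, 1)
    detrended = gsr - (slope * t + intercept)
    lo, hi = np.percentile(detrended, [0.1, 99.9])
    return detrended[(detrended >= lo) & (detrended <= hi)]
```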
Features
The level of cognitive load (low vs. high) was modeled using features derived from non-
overlapping segments of psycho-physiological sensor data corresponding to the different
questions in the ECT tests. Because the question sets in the Pursuit test were comprised of only
one question each, the data corresponding to these segments was divided into two parts to
increase the number of samples available for the modeling. Altogether 51 statistical features
were calculated from the psycho-physiological signals measured with the four devices. The
mean, variance and median of pupil diameter, GSR, heat flux, ECG MAD, 8 EEG power values
and two mental state outputs (attention and meditation) were calculated. Spectral power was also
calculated from the raw EEG signal on five bands (delta 0-4 Hz, theta 4-7 Hz, alpha 8-12 Hz, beta 12-30 Hz and gamma over 30 Hz) to compare to the values calculated by the EEG
headset. Average powers for each of these were used as features. Two HRV features (standard
deviation of IBIs (SDNN) and the root mean square of the difference of successive IBIs
(RMSSD)) and the mean and variance of HR were derived from the HR data.
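A handful of the 51 features can be sketched as follows. The function names and the FFT-based band-power estimate are illustrative assumptions (the text does not specify the spectral estimator), and SDNN is computed here as the sample standard deviation.

```python
import numpy as np

def band_power(eeg, fs, f_lo, f_hi):
    """Mean FFT power of the raw EEG signal on one band (e.g. alpha 8-12 Hz)."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].mean()

def segment_features(pupil, ibi_ms):
    """A few of the per-segment statistical features described above."""
    pupil = np.asarray(pupil, dtype=float)
    ibi = np.asarray(ibi_ms, dtype=float)
    return {
        "pupil_mean": pupil.mean(),
        "pupil_var": pupil.var(),
        "pupil_median": float(np.median(pupil)),
        "SDNN": ibi.std(ddof=1),                       # std of the IBIs
        "RMSSD": np.sqrt(np.mean(np.diff(ibi) ** 2)),  # rms of successive IBI differences
        "HR_mean": np.mean(60000.0 / ibi),             # IBI in ms -> beats per minute
    }
```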
Modeling
We then evaluated the performance of each of the features in assessing cognitive load.
Because of individual differences in the levels of psycho-physiological responses to cognitive
load, each participant was modeled individually. For each question type, the data from the
separate questions were classified into one of two classes representing the two difficulty levels.
Classification was performed based on one feature alone, using a Naïve Bayes classifier. We
used a leave-one-out validation approach between the questions in each question type to
calculate the average classification accuracy for the question type. Data from all but one of the
questions was used to train the classifier and the data from the remaining question was used to
evaluate the classification accuracy. This was repeated for all the questions in turn and the
accuracy for the question type was defined as the average of these accuracies (i.e., if a question
type presented 5 questions to the user, we averaged over the 5 leave-one-out results).
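The per-feature leave-one-question-out scheme can be sketched with a one-dimensional Naïve Bayes classifier. A Gaussian class-conditional likelihood is assumed here (the text does not state which variant was used), and the function names are illustrative.

```python
import numpy as np

def gaussian_likelihood(x, mu, var):
    """Gaussian density, guarded against zero variance."""
    var = max(var, 1e-9)
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def loo_accuracy(values, labels):
    """Leave-one-question-out accuracy of a single-feature Gaussian
    Naive Bayes classifier; labels are 0 (low load) or 1 (high load)."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        tr_x, tr_y = values[mask], labels[mask]
        # Fit a Gaussian per class on the training questions,
        # then classify the held-out question.
        scores = []
        for c in (0, 1):
            xs = tr_x[tr_y == c]
            scores.append(gaussian_likelihood(values[i], xs.mean(), xs.var()))
        pred = int(scores[1] > scores[0])
        correct += int(pred == labels[i])
    return correct / len(values)
```

For well-separated features the held-out questions are all classified correctly, giving an accuracy of 1.0 for that question type.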
Introduction
GSR is a change in the electrical resistance of the skin: a physiological response to emotional arousal, which increases sympathetic nervous system activity. The onset of the 21st century is an exciting time in pain research; recent basic pain research has revealed numerous novel targets. Even though pain is an alert system that the body uses to defend itself from the destructive processes that occur from time to time, it poses challenges to both physician and sufferer. For the sufferer, it is a hurt, and obtaining relief from pain is a major challenge. For the physician, curing the sufferer's pain is equally challenging. To treat pain, a physician must know the intensity of the sufferer's pain exactly; otherwise, the physician may prescribe the wrong dosage of painkillers, which may lead to side effects.
Therefore, it is essential to measure pain objectively, and we need a quantitative indicator of it. Though there are many quantitative indicators of pain in the literature, this work aims only to check whether Galvanic Skin Response (GSR) can be used as a valid pain indicator. GSR measures the level of autonomic nervous system activity via the electrical resistance of the tissue path between two electrodes applied to the skin. This technique has been used extensively in animal and human research on pain.
At present there are no convincing studies of the neurological changes during hypnosis. Since GSR is a good measure of changes in autonomic activity, it has been used in this work to validate the hypnotic analgesic condition of subjects. The term 'pain' in this work is limited to mechanically stimulated pain, not real clinical pain, and the experiments were done in a laboratory setup, not a clinical one.
Methodology
To analyze the relationship between GSR and pain, we performed 'Pain-GSR' (PG) experiments in which GSR was measured while controlled pain stimuli were applied (Fig. 3). Physiology predicts a change in skin resistance due to pain. In the majority of cases, GSR is recorded as the change in the electrical resistance of the skin to a direct current; this change in skin resistance is due to perspiration. For design purposes, the skin resistance was taken as 250 kΩ. In this work, a circuit was developed to deliver a constant current of 5 μA; the current was kept at 5 μA to minimize polarization at the electrodes. A separate power supply circuit, which supplies a constant 5 V, was also designed using a voltage regulator.
The data were acquired using the PMD-1208FS data acquisition card (DAQ) from Measurement Computing Corporation (MCC). The signal was acquired at a sampling rate of 256 Hz. The PMD DAQ card does not perform simultaneous A/D conversion on each channel; instead, all channels are multiplexed to a single A/D converter internally. Hence, there is a time lag between channels in the (pseudo-)simultaneous sampling. The voltage range of the A/D converter is ±5 V.
Circuit Description
The circuit developed for GSR measurement is a simple constant current source built around an LM741 op-amp and 1 MΩ resistors. The op-amp is operated in non-inverting mode. The constant 5 V supply is connected to the non-inverting terminal of the op-amp through a 1 MΩ resistor, and the inverting terminal is grounded through 1 MΩ. Pin 6 (the output) of the op-amp is connected to both the inverting and non-inverting terminals through 1 MΩ resistors. The current output is taken from the non-inverting terminal and fed to the body at a specific location.
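The design values can be sanity-checked with Ohm's law. This is a back-of-the-envelope calculation under the idealization that the op-amp forces the source current to V/R regardless of the load within its compliance range; the variable names are, of course, just for the example.

```python
# Design values taken from the text above.
V_REF = 5.0      # volts, from the voltage regulator
R = 1e6          # ohms, the 1 MOhm source resistor
R_SKIN = 250e3   # ohms, the assumed design value for skin resistance

# An ideal constant current source delivers I = V / R into any load
# within compliance; the voltage across the skin is then I * R_skin.
i_source = V_REF / R         # 5e-06 A, i.e. the stated 5 uA
v_skin = i_source * R_SKIN   # 1.25 V, well inside the +/-5 V ADC range

print(i_source, v_skin)
```

The 1.25 V developed across the nominal 250 kΩ skin path sits comfortably inside the ±5 V input range of the A/D converter mentioned earlier.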
Results
The consolidated data from the PG experiments, the average conductivity values for each subject, are tabulated. From the table, 5 of the 7 subjects showed an increase in conductivity in the 'During Pain' stage; in the other two stages of the experiment, the conductivity was lower than in the 'During Pain' stage. This is shown graphically in fig. 6. Even though the experiment was conducted under the same ambient conditions, the other two subjects did not show an increase in conductivity. This may be because of the individuals' pathological conditions at the time of the experiment. These results show that arousal of any type increases the conductivity of the skin (GSR) [8]. From these results, we can conclude that the relationship between GSR and pain is worth considering and that GSR could be an indicator of pain. In the HPG experiment, we consider the output of stages (i), (ii) and (iv); by considering only these three stages, we can easily check the effect of hypnotic analgesia. The consolidated results and statistical analysis are given in the table. From the results, it is observed that GSR decreases in the 'During Hypnosis' stage, which shows that during hypnosis the subjects relax.
Conclusion
The specific, longer-term goal of this work is to design and prototype a device that quantitatively measures pain and controls the administration of analgesics, in order to reduce the side effects of painkillers. This work also aims to analyze the changes in GSR during a hypnotic analgesic condition. In the PG experiments, the conductivity of 5 of the 7 subjects increased in the 'During Pain' stage, and the conductivity was lower in the other two stages of the same experiment. This shows a clear relation between pain and GSR, and there is scope for measuring pain objectively using GSR. However, the relation between pain and other physiological parameters has to be studied before accepting GSR as an indicator of pain. In the HPG experiment, all three subjects showed lower conductivity in the 'Pre Pain without Hypnosis' and 'During Pain with Hypnosis' stages, but in the 'Pre Hypnosis with Pain' stage of the same experiment the conductivity was higher than in the other two stages. This suggests that the subjects do not feel pain during hypnotic analgesia, and that GSR is a true indicator of pain. In this work, both the PG and HPG experiments were done with induced rather than real pain, and induced pain may not be representative of all types of pain. Future work includes experiments with more subjects in a clinical setup with actual pain, rather than in a laboratory setup with induced pain; this could help us design a device that measures pain objectively, so as to alleviate pain effectively with just the required amount of painkillers.