International Statistical Review (2009), 77, 2, 300–328 doi:10.1111/j.1751-5823.2009.00085.

Short Book Reviews


Editor: Simo Puntanen

Time Series Analysis With Applications in R, Second Edition


Jonathan D. Cryer, Kung-Sik Chan
Springer, 2008, xiv + 491 pages, € 69.95 / £ 55.99 / US$ 84.95, hardcover
ISBN: 978-0-387-75958-6

Table of contents
1. Introduction
2. Fundamental concepts
3. Trends
4. Models for stationary time series
5. Models for nonstationary time series
6. Model specification
7. Parameter estimation
8. Model diagnostics
9. Forecasting
10. Seasonal models
11. Time series regression models
12. Time series models of heteroscedasticity
13. Introduction to spectral analysis
14. Estimating the spectrum
15. Threshold models
16. Appendix: An introduction to R

Readership: Later year undergraduates, beginning graduate students, and researchers and
graduate students in any discipline needing to explore and analyze time series data.
Chapters 1–2 give examples of time series, and introduce the mathematics needed for working
with the variance-covariance structure of time series models. Chapter 3 compares stochastic
with deterministic trends, introduces regression methods, and discusses residual analysis.
Chapters 4–11 expound ARIMA processes, starting with autoregressive and moving average
processes in Chapter 4, and successively adding further refinements and technicalities in later
chapters. An appendix to Chapter 9, on Forecasting, has a very brief discussion of state space
models.
Chapter 12 describes the ARCH and GARCH models for heteroscedasticity that are popular
for use with financial time series.
Chapters 13 and 14 describe the use of spectral analysis. Chapter 15 describes threshold
models.
Chapters 11–15 are new to this second edition. (The first edition appeared in 1986.)
An appendix lists R commands that can be used to reproduce the analyses and simulations.
These, and the data sets, are also available from the book’s web site. Users of the R system will
find it easiest to obtain the data sets from the R package TSA. This package has a number of
additional functions that are tailored to make it easy to reproduce the results in the text.
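By way of illustration, here is a minimal sketch of the kind of analysis the text supports (my own toy example, not code reproduced from the book):

    ## Simulate an AR(1) series with phi = 0.7 and re-estimate the model (base R).
    set.seed(42)
    y <- arima.sim(model = list(ar = 0.7), n = 200)
    fit <- arima(y, order = c(1, 0, 0))   # ARIMA(1,0,0), i.e. an AR(1) model
    fit
    ## After install.packages("TSA") and library(TSA), the book's data sets
    ## are loaded with data(), e.g. data(larain) for the rainfall series
    ## used early in the text.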
This is a careful and staged introduction both to the mathematical theory and to practical
time series analysis. Simulation, and associated plots, are used throughout as an aid to intuition.
There is extensive detailed comment on practical issues that arise in the application of the
methodologies. Exercises at the end of each chapter mix theory, simulation and data analysis,
with a bias towards data analysis.
John H. Maindonald: john.maindonald@anu.edu.au
Centre for Mathematics & Its Applications
Australian National University
Canberra ACT 0200, Australia

Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition


Jeff Gill
Chapman & Hall/CRC, 2007, 752 pages, £ 46.99 / US$ 73.95, hardcover
ISBN: 978-1-58488-562-7

Table of contents

1. Background and introduction
2. Specifying Bayesian models
3. The normal and Student’s t models
4. The Bayesian linear model
5. The Bayesian prior
6. Assessing model quality
7. Bayesian hypothesis testing and the Bayes’ factor
8. Monte Carlo methods
9. Basics of Markov chain Monte Carlo
10. Bayesian hierarchical models
11. Some Markov chain Monte Carlo theory
12. Utilitarian Markov chain Monte Carlo
13. Advanced Markov chain Monte Carlo
Appendix A: Generalized linear model review
Appendix B: Common probability distributions
Appendix C: Introduction to the BUGS language

Readership: The book will be very suitable for students of social science, for example political
science, who wish to analyze data using modern Bayesian methods.
One is tempted to think of this book as essentially consisting of two parts, Chapters 1 through 7
on basics of Bayesian Analysis, and Chapters 8 through 13 on MCMC and Hierarchical Bayesian
Analysis. However, this simplistic view is not wholly correct, since MCMC calculations are used
throughout the first six chapters at least by way of illustration. But this conceptual division into
two parts would help a reader as well as a teacher using it as a text.
Chapter 1 is a beautifully written introduction to the whole subject of Bayesian Analysis.
Along with some more material from the other chapters, it could serve as a crash course on
Bayesian Analysis. I have a small caveat about Section 1.7, where the scientific aspects of Social
Science are discussed. There are three more issues that could have been discussed here: (1)
the difficulties of replicability, (2) the hazards of trying to model human behavior (collective or
individual), and (3) the near impossibility of controlled experiments, where some factors leading
to variation are controlled at fixed levels.
Chapter 2 provides the first of several interesting examples, mostly from Political Science.
This one concerns the duration of cabinets of some chosen European countries. The posterior
analysis is simple but interesting.
Examples in the later chapters include French Labor Strikes, Ancient Chinese Conflicts, and
the ongoing Afghan war, where the analysis identifies the point of time when the Taliban began
to wrest the initiative from the Afghan Government and its Allies.
Chapter 7 on Bayesian Testing is somewhat weak compared with other chapters, due in part
to the lack of consensus among Bayesians about sharp nulls. Gill uses Bayes Factors but not
for sharp nulls. He tries to avoid testing sharp nulls by reporting a credibility interval for the
parameter, but then notes whether or not the interval contains zero, which is a form of testing advocated
by some Bayesians. He wavers between dismissing sharp nulls as completely useless and helplessly
using them in bread-and-butter problems of variable selection in linear and generalized linear
models.
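To make the interval-based procedure concrete, here is a minimal sketch (my own illustration, not Gill’s code): draw from a posterior and report whether the 95% credible interval excludes zero.

    set.seed(2)
    post <- rnorm(1e5, mean = 0.6, sd = 0.25)   # stand-in posterior draws
    ci <- quantile(post, c(0.025, 0.975))       # central 95% credible interval
    ci
    ci[1] > 0 || ci[2] < 0                      # TRUE here: zero is excluded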
The second part has a lot of material on MCMC, some of which seems a bit too technical
and not strictly relevant for potential users of the book. But it could serve a useful purpose if
some readers are willing to supplement this material with more standard texts on MCMC, such
as Robert and Casella. Though the importance of Hierarchical Bayesian Analysis and MCMC is
explained at different places, the chapter on Hierarchical Bayes is good but surprisingly short.
Hopefully, it will grow in future editions.
The book has a few typos or what appear to be typos. For example, the Jeffreys prior is
defined without a mention of a determinant, and it is not easy to realize we are looking at a
determinant, not a matrix. The reference list is carefully compiled; it will be very useful for
a well-motivated reader. Altogether it is a very readable book, based on solid scholarship and
written with conviction, gusto, and a sense of fun.

Jayanta K. Ghosh: ghosh@stat.purdue.edu


Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

Morphometrics with R
Julien Claude with contributions from Michel Baylac, Emmanuel Paradis, and Tristan Stayton
Springer, 2008, xviii + 316 pages, € 44.95 / £ 35.99 / US$ 59.95, softcover
ISBN: 978-0-387-77789-4

Table of contents
1. Introduction
2. Acquiring and manipulating morphometric data
3. Traditional statistics for morphometrics
4. Modern morphometrics based on configurations of landmarks
5. Statistical analysis of outlines
6. Statistical analysis of shape using modern morphometrics
7. Going further with R
Appendix A: Functions developed in this text
Appendix B: Packages used in this text

Readership: Beginning graduate students, and researchers and graduate students in any discipline
needing to explore and analyze morphometric data.
Chapter 1 begins with a short introduction to geometric morphometrics. A key idea is that of a
landmark: a position that is comparable across different organisms or other objects of study. For
biological organisms, it should reflect homology arising, usually, from a similar developmental origin. The
methodologies that are described here build on the pioneering work of D’Arcy Thompson. Shapes
are mapped to a grid, with changes between objects described by mathematical transformations.
Simple changes may for example include one or more of distending, flattening and shearing. An
introduction to the R system occupies the major part of the chapter.
Chapter 2 begins with a brief section on collecting and organizing morphometric data, then
moves on to data acquisition and manipulation with the R system. The final two sections discuss
missing values and measurement error issues.
Chapter 3 is a very brief introduction to the exploratory analysis of multivariate data such as
are used in morphometrics. It ends with brief overviews of principal components analysis, linear
discriminant analysis, MANOVA, clustering, and related techniques.
Chapter 4 is an account of methods for describing and modeling shape changes, based on
the use of homologous landmarks. It starts with the truss network approach of Strauss and
Bookstein. These are somewhat akin to the truss networks that may be used in the construction
of, for example, bridges. Remaining sections describe Procrustes and other superimposition
methods, the use of thin-plate splines, distance matrix analysis, and angle-based methods.
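As a flavour of the computations involved (a minimal base-R sketch of my own, not code from the book): the centroid size of a landmark configuration is the square root of the summed squared distances of the landmarks from their centroid.

    lmk <- matrix(c(0, 0, 2, 0, 1, 2), ncol = 2, byrow = TRUE)  # 3 (x, y) landmarks
    ctr <- colMeans(lmk)                                        # centroid
    csize <- sqrt(sum(sweep(lmk, 2, ctr)^2))                    # centroid size
    csize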
Chapter 5 has sections on splines and other curve-fitting methods, Fourier analysis, and
eigenshape analysis.
Chapter 6 describes methods for visualizing shape change. Largely, this is a continuation
of Chapter 3. An interesting challenge is to combine morphometric data with phylogenetic
information that makes evolutionary connections.
Chapter 7 extends discussion of the use of R; there are sections on simulation, various
programming issues, and interfaces to other systems.
This is a highly useful guide to the literature and to the range of methodologies. A valuable
feature is the inclusion of R code that shows how to implement the methods that are described.
The quality of the writing and overview is, in places, uneven.

John H. Maindonald: john.maindonald@anu.edu.au


Centre for Mathematics & Its Applications
Australian National University
Canberra ACT 0200, Australia

Modern Regression Methods, Second Edition


Thomas P. Ryan
Wiley, 2009, xix + 642 pages, £ 83.50 / € 104.20 / US$ 125.00, hardcover
ISBN: 978-0-470-08186-0

Table of contents
1. Introduction
2. Diagnostics and remedial measures
3. Regression with matrix algebra
4. Introduction to multiple linear regression
5. Plots in multiple regression
6. Transformations in multiple regression
7. Selection of regressors
8. Polynomial and trigonometric terms
9. Logistic regression
10. Nonparametric regression
11. Robust regression
12. Ridge regression
13. Nonlinear regression
14. Experimental designs for regression
15. Miscellaneous topics in regression
16. Analysis of real data sets

Readership: Regression practitioners.


This update includes a new Chapter 15 with brief paragraphs on various alternative regression
methods, e.g., piecewise, semiparametric, quantile, Poisson, negative binomial, Cox, probit,
censored and truncated, Tobit, constrained, interval, random coefficient, partial least squares,
errors in variables, life data, survey sampling, Bayesian, instrumental variables, shrinkage, meta,
CART and multivariate. More exercises have been added “especially at the end of Chapter 1”.
The exercises are interesting and thought-provoking throughout. Macros now available on a
website have been updated to MINITAB 15. If you liked the first edition, you will be pleased
with this revision also.

Norman R. Draper: draper@stat.wisc.edu


Department of Statistics, University of Wisconsin – Madison
1300 University Avenue, Madison, WI 53706–1532, USA

Probability Models for DNA Sequence Evolution, Second Edition


Richard Durrett
Springer, 2008, xii+431 pages, € 69.95 / £ 62.99 / US$ 84.95, hardcover
ISBN: 978-0-387-78168-6

Table of contents
1. Basic models
2. Estimation and hypothesis testing
3. Recombination
4. Population complications
5. Stepping stone model
6. Natural selection
7. Diffusion processes
8. Multidimensional diffusions
9. Genome rearrangement

Readership: All readers interested in Population Genetics in a broad sense.


This is a beautifully written book by a distinguished probabilist on what used to be called
Population Genetics, and now, more appropriately, is called DNA Sequence Evolution. This is a
second edition, but it has so much new material that it is almost a new book.
The preface provides a good summary of what the book does in respect of DNA Sequence
Evolution. Chapter 1 deals with the classical Fisher-Wright model, based on statistical genetics
rather than DNA sequence, and Kimura’s hypothesis of neutral mutations. This is like the null
hypothesis of Population Genetics and Chapter 2 shows how this can be tested. Chapters 3
through 6 provide alternative models including various forms of natural selection via mutation.
Chapters 7 and 8 replace the discrete time models by one dimensional and multidimensional
(continuous time) diffusion models. Chapter 9 deals with “evolution of whole genomes”, which
is of very recent vintage, but this chapter is also “the least changed” from the first edition.
The book is very clearly and elegantly written. I enjoyed reading what I sampled, namely,
Chapter 1 and parts of Chapters 2, 7, 8, and 9. While the book is very well written and accessible
to readers with a minimal background in Biology and some knowledge of basic probability
theory and Markov processes, the material is tough for anyone who wants to read all the
proofs carefully. Applications of probability theory include local times, Markov processes with
boundaries, Green’s function, and of course many innovative, interesting calculations.
It is strongly recommended to all readers interested in Population Genetics in a broad sense.
Moreover parts of it can be used for other courses. For example Chapter 1 can provide additional
related material to a Bayesian course that covers Dirichlet processes, and Chapter 7 could be an
interesting supplement to a course on diffusion.
No matter how good a book is, a reader will have a couple of unfulfilled wishes. Mine are
these: It would be nice to have a non-technical last chapter that summarizes those aspects of
the subject that have been modeled, tested, and confirmed, and those that remain in doubt. The
original goal of Fisher, and probably Wright, was to confirm Darwin’s theory of evolution and
natural selection using Mendelian statistical genetics. How is that confirmation viewed today?
Secondly, since the book abounds with beautiful theorems and proofs, a short non-technical
guide to the more important ones, analogous to the preface’s guide to the biological content,
would be good.
Jayanta K. Ghosh: ghosh@stat.purdue.edu
Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

Cluster Randomised Trials


Richard J. Hayes, Lawrence H. Moulton
Chapman & Hall/CRC, 2009, xxii + 315 pages, £ 49.49 / US$ 80.96, hardcover
ISBN: 978-1-58488-816-1

Table of contents
Part A: Basic Concepts
1. Introduction
2. Variability between clusters
3. Choosing whether to randomise by cluster
Part B: Design Issues
4. Choice of clusters
5. Matching and stratification
6. Randomisation procedures
7. Sample size
8. Alternative study designs
Part C: Analytical Methods
9. Basic principles of analysis
10. Analysis based on cluster-level summaries
11. Regression analysis based on individual-level data
12. Analysis of trials with more complex designs
Part D: Miscellaneous Topics
13. Ethical considerations
14. Data monitoring
15. Reporting and interpretation

Readership: Workers in Medical Statistics, Epidemiologists.


A randomised controlled trial is the traditional gold-standard design in what has come to be
known as evidence-based medicine. The participants or units are randomly allocated to the
different conditions or treatments under study. In a cluster randomised trial (CRT) the units are
so allocated in groups. These groups, or clusters, can be geographical (look for the ‘fried egg
design’), institutional (schools, organisations, etc.), or even individual people (the units of the
cluster being teeth, for example).
The authors point out that the CRT is relatively new and that, although the topic is covered here
pretty comprehensively, it is still an active research area. It’s difficult to think of any important
issue or aspect that is not discussed here, and at length and in depth.
The book is divided into four parts: Part A discusses when and why one should use a CRT;
Part B shows how to do it; Part C describes data analyses; Part D rounds up some related issues.
In particular, Part C shows how the clustering gives rise to specialised statistics. The ‘problem’
is correlation, which results from within- and between-cluster variation: in fact, the so-called
intraclass correlation is just based on a ratio of these.
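In the standard notation (a textbook formula, not quoted from the book), with between-cluster variance \sigma_b^2 and within-cluster variance \sigma_w^2, the intraclass correlation coefficient is

    \rho = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_w^2}.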
There is no heavy mathematics so the material is accessible to a wide range of readers. The
emphasis is on practical applications, reflecting in some degree the authors’ own work. Particular
attention is given to rates and proportions, such data frequently arising in epidemiology. The
statistical analyses are performed using the ‘Stata’ package, and data are available on the
publisher’s web site.
Martin Crowder: m.crowder@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Stochastic Approximation: A Dynamical Systems Viewpoint


Vivek S. Borkar
Cambridge University Press, 2008, x + 164 pages, £ 35.00 / US$ 70.00, hardcover
ISBN: 978-0-521-51592-4

Table of contents
1. Introduction
2. Basic convergence analysis
3. Stability criteria
4. Lock-in probability
5. Stochastic recursive inclusions
6. Multiple timescales
7. Asynchronous schemes
8. A limit theorem for fluctuations
9. Constant stepsize algorithms
10. Applications

Target readership areas: Computational statistics, adaptive signal processing, adaptive control
engineering, communication networks, neural networks, reinforcement learning.
The author notes in his Preface that Stochastic Approximation was born in the Statistics literature
but has grown up in Electrical Engineering. However, it is by no means fully grown: its
development continues apace in a wide variety of settings – see the readership areas listed
above. The author describes his book as ‘a compact account of the highlights’ of the subject
for ‘an interested, mathematically-literate reader’. Here, mathematically-literate entails a good
working knowledge of real analysis and ordinary differential equations, together with familiarity
with probability theory up to martingales.
There are two broad approaches to the subject: statisticians (more precisely, probabilists) come
from a background in martingale convergence behaviour, whereas (mathematically-inclined)
engineers travel a route through ordinary differential equations associated with dynamical
systems. As suggested by the subtitle of the book, the latter approach is predominant here.
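A minimal sketch of my own (not an example from the book) may make the ODE viewpoint concrete: the Robbins–Monro iterates below track the ODE ẋ = h(x) with h(x) = 2 − x, whose stable equilibrium x* = 2 is the root being sought.

    ## Robbins-Monro: x[n+1] = x[n] + a_n * (h(x[n]) + noise), with a_n = 1/n.
    set.seed(3)
    x <- 0
    for (n in 1:5000) x <- x + (1 / n) * (2 - x + rnorm(1))
    x   # close to the root x* = 2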
The first chapter gives a gentle, very readable, introduction to the essence of the subject. But
then, in chapters 2 to 9, one has to get down to work: the material is quite demanding, much
of it comprising theorems and proofs. Chapter 10 describes some applications and Chapter 11
contains some appendices on mathematical prerequisites.
There is no hiding the fact that this is a tough subject, not one for the lazy-minded. But, it’s
clearly a vital core subject, central to a variety of important applications. So, ‘an interested,
mathematically-literate reader’ looking for a rewarding area of work could do a lot worse than
study ‘this little book’, as the author modestly refers to it.
Martin Crowder: m.crowder@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Semi-Markov Chains and Hidden Semi-Markov Models Toward Applications


Vlad Stefan Barbu, Nikolaos Limnios
Springer, 2008, xiv + 226 pages, € 46.95 / £ 42.99 / US$ 59.95, softcover
ISBN: 978-0-387-73171-1

Table of contents
1. Introduction
2. Discrete-time renewal processes
3. Semi-Markov chains
4. Nonparametric estimation for semi-Markov chains
5. Reliability theory for discrete-time semi-Markov systems
6. Hidden semi-Markov model and estimation
A. Lemmas for semi-Markov chains
B. Lemmas for hidden semi-Markov chains
C. Some proofs
D. Markov chains
E. Miscellaneous

Readership: Applied probabilists, theoretically-oriented statisticians, research workers and students.
Semi-Markov Processes (or Markov Renewal Processes), combining Markov Processes and
Renewal Theory, were introduced in the mid-1950s. There followed much research, mainly in a
continuous-time framework. This book focuses on the less well-developed discrete-time version,
a hybrid of Markov Chains and Recurrent Events (as discrete-time renewal is called in Feller
Volume I). In the last chapter a further layer is added in which the semi-Markov chain is hidden
(unobserved) but gives rise to a process, via a probability law, that is observed.
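A minimal simulation sketch (my own, not from the book) shows the structure: an embedded Markov chain chooses successive states, and each state has its own sojourn-time distribution.

    set.seed(5)
    P <- matrix(c(0, 1, 1, 0), 2)        # embedded chain: always switch state
    path <- integer(0); s <- 1
    while (length(path) < 50) {
      stay <- rpois(1, c(2, 4)[s]) + 1   # state-dependent sojourn time (>= 1)
      path <- c(path, rep(s, stay))
      s <- sample(1:2, 1, prob = P[s, ])
    }
    head(path, 20)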
There are six chapters: in the first the scene is set (notation, definitions, etc.) and an overview
of the subsequent chapters is given; there follows discrete-time renewal theory (Chapter 2) and
semi-Markov chains (Chapter 3); Chapter 4 addresses non-parametric estimation, essentially
estimates based on proportions of observed events; Chapter 5 covers reliability aspects of
such systems, and Chapter 6 introduces the hidden version and its non-parametric maximum
likelihood estimation. There are also some appendices covering certain technical results.
Overall, this is a stimulating book. As the authors note, the discrete-time framework has been
underused in the past, given that much real data in practice (I would even say most) is recorded
in discrete time. Also, the applications listed, particularly the DNA sequencing, are important
and timely.
Martin Crowder: m.crowder@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Bayesian Evaluation of Informative Hypotheses


Herbert Hoijtink, Irene Klugkist, Paul A. Boelen (Editors)
Springer, 2008, xii + 361 pages, € 59.95 / £ 53.99 / US$ 79.95, hardcover
ISBN: 978-0-387-09611-7

Table of contents

1. An introduction to Bayesian evaluation of informative hypotheses (Herbert Hoijtink, Irene Klugkist, Paul A. Boelen)
Part I. Bayesian Evaluation of Informative Hypotheses
2. Illustrative psychological data and hypotheses for Bayesian inequality constrained analysis of variance (Paul A. Boelen, Herbert Hoijtink)
3. Bayesian estimation for inequality constrained analysis of variance (Irene Klugkist, Joris Mulder)
4. Encompassing prior based model selection for inequality constrained analysis of variance (Irene Klugkist)
5. An evaluation of Bayesian inequality constrained analysis of variance (Herbert Hoijtink, Rafaele Huntjens, Albert Reijntjes, Rebecca Kuiper, Paul A. Boelen)
Part II. A Further Study of Prior Distributions and the Bayes Factor
6. Bayes factors based on test statistics under order restrictions (David Rossell, Veerabhadran Baladandayuthapani, Valen E. Johnson)
7. Objective Bayes factors for informative hypotheses: “Completing” the informative hypothesis and “splitting” the Bayes factors (Luis Raúl Pericchi Guerra, Guimei Liu, David Torres Núñez)
8. The Bayes factor versus other model selection criteria for the selection of constrained models (Ming-Hui Chen, Sungduk Kim)
9. Bayesian versus frequentist inference (Eric-Jan Wagenmakers, Michael Lee, Tom Lodewyckx, Geoffrey J. Iverson)
Part III. Beyond Analysis of Variance
10. Inequality constrained analysis of covariance (Irene Klugkist, Floryt van Wesel, Sonja van Well, Annemarie Kolk)
11. Inequality constrained latent class models (Herbert Hoijtink, Jan Boom)
12. Inequality constrained contingency table analysis (Olav Laudy)
13. Inequality constrained multilevel models (Bernet Sekasanvu Kato, Carl F.W. Peeters)
Part IV. Evaluations
14. A psychologist’s view on Bayesian evaluation of informative hypotheses (Marleen Rijkeboer, Marcel van der Hout)
15. A statistician’s view on Bayesian evaluation of informative hypotheses (Jay I. Myung, George Karabatsos, Geoffrey J. Iverson)
16. A philosopher’s view on Bayesian evaluation of informative hypotheses (Jan-Willem Romeijn, Rens van de Schoot)

Readership: Psychologists and Bayesian statisticians.


Informative hypotheses are best illustrated by typical examples, e.g.,
H1: μ1 > μ2 > μ3
or
H2: {μ1 ≈ μ2} > μ3.
The first statement is clear; the second asserts that μ1 and μ2 are nearly equal and both are greater
than μ3 . In applications μ’s are typically means but could be other parameters of interest. These
hypotheses are being used by psychologists to have more meaningful hypotheses than the usual
versions based on a somewhat artificial null hypothesis and its negation. In that approach each
of the above two hypotheses would lead to more than one null and alternative.
Along with this new formulation, the authors of articles in this volume rely on the Bayesian
paradigm both because of its logical and philosophical attractions and the ease with which such
inequality constrained “informative” hypotheses can be tested in this paradigm.
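For concreteness, a minimal sketch of my own (toy numbers, not an analysis from the book): under the encompassing-prior approach used in several chapters, the Bayes factor of H1: μ1 > μ2 > μ3 against the unconstrained (encompassing) model is the posterior probability of the constraint divided by its prior probability.

    set.seed(1)
    n <- 20; ybar <- c(5.2, 4.1, 3.3); sigma <- 1        # toy group summaries
    post  <- sapply(ybar, function(m) rnorm(1e5, m, sigma / sqrt(n)))
    prior <- matrix(rnorm(3e5, 0, 100), ncol = 3)        # vague encompassing prior
    c_post  <- mean(post[, 1] > post[, 2] & post[, 2] > post[, 3])
    c_prior <- mean(prior[, 1] > prior[, 2] & prior[, 2] > prior[, 3])  # about 1/6
    c_post / c_prior                                     # Bayes factor of H1 vs encompassing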
This book has been very imaginatively planned and anchored around analysis of challenging
problems of (Clinical) Psychology. Not only the analyses but the design of experiments and the
variables measured are very interesting.
The first problem, commonly known as Multiple Personality Disorder, is that of a person who
claims to have two or more separate identities, each of which forgets what the other does. Is
this real or simulated? In addition to the patients, there are three control groups, namely, true
controls, simulators of amnesia, and true amnesiacs. The hypothesis of interest is H1: μcon >
{μamn = μpat} > μsim, where μ = true mean of a memory score. H1 implies the disorder is
genuine. The other hypothesis H2 is that μcon > μamn > {μpat = μsim}, i.e., the disorder is
simulated by the patients.
The second problem concerns the effect of “peer evaluation on mood in children high and
low in depression.” The third is about “coping with loss: the influence of gender, kinship and
the time from loss.”
The problems of inequality constrained hypotheses are studied through Bayes factors. There
is also a study of robustness of the conclusions with respect to choice of priors. Bayesian readers
will recognize at least two well known Bayesian analysts among the authors.
A final part, namely Part IV of the book, sums up and evaluates these analyses by psychologists
and statisticians. There are three evaluations in this part, one by a psychologist, one by a
statistician and one by a philosopher.
This is an excellent book for psychologists and Bayesian statisticians. Strongly recommended
for both categories of readers.

Jayanta K. Ghosh: ghosh@stat.purdue.edu


Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

Who Gave You the Epsilon? & Other Tales of Mathematical History
Marlow Anderson, Victor Katz, Robin Wilson (Editors)
The Mathematical Association of America, 2009, x + 431 pages, £ 45.00 / US$ 65.50, hardcover
ISBN: 978-0-88385-569-0

Table of contents
Analysis (10 articles)
Geometry, Topology and Foundations (11 articles)
Algebra and Number Theory (16 articles)
Surveys (4 articles)

Readership: All interested in the broad development and history of mathematics.


This wonderful book, containing 41 articles, is a sequel to Sherlock Holmes in Babylon. Both
books reprint older, high quality articles on the history of mathematics; this volume has topics
from the 19th and 20th centuries. It is an absolutely fascinating volume. Do you want to sing
a song about the history of group theory? Of course you do! See pages 269–270 for words
and music! How good was Ramanujan, the Indian intuitive mathematician? Let G. H. Hardy
tell you on pages 337–348. I quote a famous story from p. 344: “I remember going to see
him [Ramanujan] once when he was lying ill in Putney. I had ridden in taxi cab No. 1729 and
remarked that the number seemed to me rather a dull one and I hoped it was not an unfavorable
omen. ‘No,’ he replied, ‘it is a very interesting number; it is the smallest number expressible
as a sum of two cubes in two different ways’ (1729 = 12³ + 1³ = 10³ + 9³).” Did you know
that epsilon comes from erreur (error)? Read “Cauchy and the origins of rigorous calculus” on
pages 5–13. Did you know (p. 58) that the late well-known statistician I. J. (Jack) Good was Alan
Turing’s chief statistical assistant at the U.K. code-breakers’ Bletchley Park in 1942? Well-known
chess masters C. H. O’D Alexander, P. S. Milner-Barry and Harry Golombek were also there
(see p. 58). I could go on, but you get the idea! This would be a great gift for any statistician or
mathematician.
Each of the four main sections has a brief foreword and afterword written by the editors.

Norman R. Draper: draper@stat.wisc.edu


Department of Statistics, University of Wisconsin – Madison
1300 University Avenue, Madison, WI 53706–1532, USA

An Introduction to Multilevel Modeling Techniques, Second Edition


Ronald H. Heck, Scott L. Thomas
Routledge, 2008, xi + 268 pages, £ 25.95 / US$ 49.95, softcover (also available in hardcover)
ISBN: 978-1-84169-756-7

Table of contents

Introduction
1. Investigating organizational structures, processes, and outcomes
2. Development of multilevel modeling techniques
3. Multilevel regression models
4. Defining multilevel latent variables
5. Multilevel structural equation models
6. Multilevel longitudinal analysis
7. Multilevel models with categorical variables

Readership: Graduate or advanced undergraduate students as well as researchers, for example in the organizational, educational, behavioral, and social sciences.
This book presents an applied approach to the use of multilevel modeling in exploring various
types of hierarchical data structures. A basic knowledge of data analysis and univariate statistics
is assumed.
Since the first edition (2000) of the book, there have been lots of advances both in the theory
and the software. The book covers the newest and most important achievements, but at the same
time gives a nice summary of the historical development of multilevel models. It also addresses
some issues for further consideration.
The first four chapters challenge the reader to think about the complexity of multilevel data
structures compared to the usual or traditional single-level data structures and their analysis.
Two software programs, HLM and Mplus, are briefly introduced.
The last four chapters present more complex multilevel modeling schemes, including structural
equation models using latent variables, analyzing change processes via longitudinal analysis
and utilizing categorical variables e.g. by mixture models. All these approaches are shown to
have much in common, although their paths of development have varied considerably. In this
book, the approaches are brought under a unified framework for exploring hierarchical data
structures.
The examples, worked through with real data using the software mentioned above, are essential
and help the reader to understand the points made in the text. The text itself is very readable and
the level of mathematical detail is not high. Unfortunately the software programs seem a
bit clumsy and old-fashioned (both input and output). This may be understood as reflecting the
current state of scientific development, and also historical tradition, but it would be important to
implement these prominent models and methods in general statistical software packages, such
as R, in order to make them more accessible and more widely used in the future.
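Basic multilevel regression, at least, is already well supported in R; a minimal sketch of my own using the nlme package (simulated pupils-in-schools data, not an example from the book):

    library(nlme)
    set.seed(9)
    school <- gl(20, 25)                         # 20 schools, 25 pupils each
    u <- rnorm(20, 0, 2)[school]                 # school-level random intercepts
    x <- rnorm(500)
    y <- 1 + 0.5 * x + u + rnorm(500)
    d <- data.frame(y, x, school)
    fit <- lme(y ~ x, random = ~ 1 | school, data = d)  # random-intercept model
    summary(fit)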
Kimmo Vehkalahti: kimmo.vehkalahti@helsinki.fi
Department of Mathematics and Statistics
FI-00014 University of Helsinki, Finland

Generalized, Linear, and Mixed Models, Second Edition


Charles E. McCulloch, Shayle R. Searle, John M. Neuhaus
Wiley, 2008, xxv + 384 pages, £ 95.95 / € 119.60 / US$ 143.50, hardcover
ISBN: 978-0-470-07371-1

Table of contents
1. Introduction
2. One-way classifications
3. Single-predictor regression
4. Linear models (LMs)
5. Generalized linear models (GLMs)
6. Linear mixed models (LMMs)
7. Generalized linear mixed models
8. Models for longitudinal data
9. Marginal models
10. Multivariate models
11. Nonlinear models
12. Departures from assumptions
13. Prediction
14. Computing
Appendix M: Some matrix results
Appendix S: Some statistical results

Readership: Statistics students at the upper-undergraduate and beginning-graduate levels, applied statisticians, industrial practitioners, and researchers.
This is the Second Edition of the book by Charles E. McCulloch and Shayle R. Searle, published
in 2001. “This text is to be highly recommended as one that provides a modern perspective on
fitting models to data.” – That was the first sentence of Philip Prescott’s review on the First
Edition on these ISR Short Book Reviews in 2001. It is a pleasure to agree with this comment
and welcome the new revised edition, with the third coauthor John M. Neuhaus.
The very readable style, and the peaceful and patient explanation of the details and leading
principles, are nicely present in this book. The Second Edition has three new chapters and
numerous new and updated examples, and in addition the rest of the book has been lightly
revised. Citing the Preface: “As before, the emphasis is on the applications of these models and
the assumptions necessary for valid inference. The focus is not on the details of data analysis nor
the use of statistical software, though we do briefly mention some examples.” All in all, being a
great Shayle Searle book fan, I am happy to keep this book near my desk and recommend it to
my students.
Simo Puntanen: simo.puntanen@uta.fi
Department of Mathematics and Statistics
FI-33014 University of Tampere, Finland

Biostatistics and Microbiology: A Survival Manual


Daryl S. Paulson
Springer, 2009, x + 216 pages, € 54.95 / £ 49.99 / US$ 69.95, softcover
ISBN: 978-0-387-77281-3

Table of contents
1. Biostatistics and microbiology: Introduction
2. One-sample tests
3. Two sample statistical tests, normal distribution
4. Analysis of variance
5. Regression and correlation analysis
6. Qualitative data analysis
7. Nonparametric statistical methods
Appendix: Tables of mathematical values

Readership: Microbiologists who wish to carry out a statistical analysis of their own data.
This is a well-written elementary introduction to common statistical tests, estimates, and
confidence intervals, illustrated with data from microbiology. Also included are testing problems
involving bioequivalence, for example that a new drug is as effective as an old one. This, being a
claim by a drug company or a microbiologist, is treated as the alternative and the null hypothesis
is its negation. The discussion of this difficult problem as well as its one-sided versions is also
both simple and illuminating.
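A minimal sketch of my own (not Paulson’s procedure verbatim) of the two one-sided tests idea for equivalence, in which equivalence is the alternative hypothesis:

    set.seed(11)
    old <- rnorm(30, 10, 2); new <- rnorm(30, 10.3, 2)   # simulated responses
    delta <- 1                                           # equivalence margin
    p1 <- t.test(new, old, mu = -delta, alternative = "greater")$p.value
    p2 <- t.test(new, old, mu =  delta, alternative = "less")$p.value
    max(p1, p2)   # conclude equivalence if this is below alpha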
Each problem and the suggested inference procedure are introduced with a brief but clear
discussion. This is followed by an algorithmic description of the inference procedure. The
strength of the book lies in its simple but clear presentation, its examples from microbiology,
and the insight of the author gained from working with scientists. For example, if a pilot
experiment for determining the optimal sample size leads to an unacceptably large sample size,
Paulson advises the scientist to redo the pilot experiment much more carefully. Most likely, the
pilot experiment was unreliable.

Jayanta K. Ghosh: ghosh@stat.purdue.edu


Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

The Statistical Analysis of Functional MRI Data


Nicole A. Lazar
Springer, 2008, xiv + 299 pages, € 59.95 / £ 53.99 / US$ 84.95, hardcover
ISBN: 978-0-387-78190-7

Table of contents
1. The science of fMRI
2. Design of fMRI experiments
3. Noise and data preprocessing
4. Statistical issues in fMRI data analysis
5. Basic statistical analysis
6. Temporal, spatial, spatiotemporal models
7. Multivariate approaches
8. Basis function approaches
9. Bayesian methods in fMRI
10. Multiple testing in fMRI: The problem of “thresholding”
11. Additional statistical issues
12. Case study: Eye motion data
A. Survey of major fMRI software packages
B. Glossary of fMRI terms

Readership: Statisticians working or wishing to work on statistical problems in the area of
neuroscience, using fMRI data on the brain. Also suitable for cognitive psychologists and
neuroscientists and other readers familiar with graduate level statistics.
This is the first comprehensive book dealing with statistical analysis of fMRI image data on the brain.
This excellent interdisciplinary text begins with the basic physics behind neuroimaging through
magnetic resonance (Chapter 1) and then goes on to discuss the technology involved in collecting
the image data (Chapter 2). The images that we see are obtained by inverse Fourier transforms on
raw data. The complex procedure for generating data makes it necessary to preprocess it before
statistical analysis can begin (Chapter 3). A statistician who wishes to participate in designing
an experiment, not just analyze the data, needs to know some of these basic facts.
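A toy sketch of my own of the reconstruction idea: the scanner records k-space (Fourier-domain) data, and an inverse two-dimensional Fourier transform recovers the image.

    img <- matrix(0, 8, 8); img[3:6, 3:6] <- 1              # toy 8 x 8 "image"
    kspace <- fft(img)                                      # forward 2-D FFT
    recon <- Re(fft(kspace, inverse = TRUE)) / length(img)  # normalized inverse FFT
    all.equal(recon, img)                                   # TRUE up to rounding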
It appears that experiments are basically of two kinds. One may observe a subject or subjects
before and after a stimulus is given, and try to identify which areas of the brain are activated.
Alternatively, one may try to estimate the response function of the brain to an event like flashing
a checkerboard. Chapters 4 through 11 deal with basic issues as well as standard methods
like linear models, spatiotemporal models, wavelets, Bayesian methods, and multiple testing,
including the topological and geometric methods of Keith Worsley, whose tragic death cut short
a brilliant career.
The final chapter discusses data from an interesting experiment on eye motion, which is
relevant in the study of schizophrenia, brain lesions, etc., which seem to lead to loss of inhibition
or poor control of unusual eye movements.
Jayanta K. Ghosh: ghosh@stat.purdue.edu
Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

Encyclopedia of Quantitative Risk Analysis and Assessment


Edward L. Melnick, Brian S. Everitt (Editors-in-Chief)
Wiley, 2008, 4 vols., 2176 pages, £ 725.00 / € 978.80 / US$ 1450.00, hardcover
ISBN: 978-0-470-03549-8

Readership: Anyone who needs an authoritative explanation of aspects of risk.


Risk is defined in different ways by different people. To some it means the probability of an
adverse event. To others the expected cost of such an event. But however it is defined, risk is
universal. An encyclopedia of risk must therefore cover material ranging from technical tools
for risk management and modelling, to material describing risk in different domains, and this
breadth means that almost certainly some topics will have been omitted.
In view of this, it might also be regarded as unfair to pick particular topics which have been
omitted or skipped over – simply on the grounds that in such a broad area there will always
be such topics. Having said that, I have a particular interest in risk modelling in the personal
banking sector, and was disappointed by the limited coverage of risk in this area – especially
since it is so topical at present. At first I thought there was nothing on the topic at all, but
then I found one article on retail credit scoring, under the far-from-obvious heading of Risk in
credit granting and lending decisions: credit scoring. Perhaps the editors could include further
discussion of this area in a future edition.
My nit-picking aside, it is clear that the editors of this four volume work have made a valiant
effort to cover as much as possible, and have produced a valuable reference work. It is also clear
that a massive amount of effort went into it – in addition to the two editors-in-chief, it is the
product of the work of eleven section editors and nearly 360 authors. The publishers, too, are to
be commended: it is a rather beautiful work.

David J. Hand: d.j.hand@imperial.ac.uk


Mathematics Department, Imperial College
London SW7 2AZ, UK

Statistical DNA Forensics: Theory, Methods and Computation


Wing Kam Fung, Yue-Qing Hu
Wiley, 2008, xxii + 241 pages, £ 55.00 / € 66.00 / US$ 110.00, hardcover
ISBN: 978-0-470-06636-2

Table of contents
1. Introduction
2. Probability and statistics
3. Population genetics
4. Parentage testing
5. Testing for kinship
6. Interpreting mixtures
7. Interpreting mixtures in the presence of relatives
8. Other issues
Solutions to problems
Appendix A: The standard normal distribution
Appendix B: Upper 1% and 5% points of χ² distributions

Readership: Statisticians and others who wish to learn the principles of DNA matching; advanced
students, researchers.
This is a volume in the Wiley Statistics in Practice series. It aims to introduce the key ideas of
probability and statistics which are necessary for the evaluation of DNA evidence.
No background is assumed in probability or statistics – an introductory chapter covers these –
although if this was one’s first exposure to such ideas I think the rest of the book would be hard
going. This chapter begins by introducing the basic laws of probability, and works up through
relevant statistical distributions to likelihood ratios, estimation, and testing, all in 18 pages. This
is followed by another introductory chapter on population genetic models, which allow one
to determine the probability of observing given profiles in a specified population. Subsequent
chapters focus on the three main applications of DNA profiling: identity testing, determination
of parentage and kinship, and interpretation of mixed DNA stains.
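The flavour of the calculations (a standard textbook formula, not quoted from the book): for a matching heterozygous profile A_iA_j at a single locus, with population allele frequencies p_i and p_j and Hardy–Weinberg equilibrium assumed, the likelihood ratio for the competing source hypotheses is

    \mathrm{LR} = \frac{\Pr(\text{evidence} \mid \text{suspect is the source})}{\Pr(\text{evidence} \mid \text{an unrelated person is the source})} = \frac{1}{2 p_i p_j}.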
Computer programs are available on an associated website, and the text is illustrated using this
software, including screenshots from it, where appropriate. Mathematical proofs are included
where necessary, but these are relegated to appendices at the end of chapters, so as not to disrupt
the flow of the substantive ideas. Collections of problems are given at the end of each chapter,
and solutions are given at the end of the book.
Approaching this from the perspective of a statistician who would like to learn more about
DNA profiling, I would have appreciated more detail about the background technology: the three
paragraph description in Chapter 1 proved inadequate for me to understand the basic principles,
and I had to resort to the web for a more detailed explanation. Having said that, once I had found
such an explanation, the remainder of the book was very clear. Overall, it provides an excellent
introduction to the statistics of DNA matching. I would certainly recommend it to anyone who

wishes to gain an in-depth understanding of the statistical ideas and tools which underlie the
technology. It would make a great text for a course in this topic.
David J. Hand: d.j.hand@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Forecasting with Exponential Smoothing: The State Space Approach


Rob J. Hyndman, Anne B. Koehler, J. Keith Ord, Ralph D. Snyder
Springer, 2008, xiv + 362 pages, £ 33.99 / € 36.95 / US$ 54.95, softcover
ISBN: 978-3-540-71916-8

Table of contents
Part I. Introduction
1. Basic concepts
2. Getting started
Part II. Essentials
3. Linear innovations state space models
4. Non-linear and heteroscedastic innovations state space models
5. Estimation of innovations state space models
6. Prediction distributions and intervals
7. Selection of models
Part III. Further Topics
8. Normalizing seasonal components
9. Models with regressor variables
10. Some properties of linear models
11. Reduced forms and relationships with ARIMA models
12. Linear innovations state space models with random seed states
13. Conventional state space models
14. Time series with multiple seasonal patterns
15. Non-linear models for positive data
16. Models for count data
17. Vector exponential smoothing
Part IV. Applications
18. Inventory control application
19. Conditional heteroscedasticity and applications in finance
20. Economic applications: the Beveridge-Nelson decomposition

Readership: People wanting to apply exponential smoothing methods in their own area of
interest, as well as for researchers wanting to take the ideas in new directions.
This book seeks to provide a comprehensive discussion of exponential smoothing forecasting
methods from the innovations state-space perspective. The authors integrate the state-space
approach to exponential smoothing forecasting into a coherent whole – and have done an
excellent job. This is certainly a book I would recommend to anyone who wishes to obtain a
sound grasp of the area, or to a PhD student about to begin research in the area.
Exponential forecasting is an intuitively attractive and very widely used approach, with a
long history. The state space framework is a natural structure for developing prediction intervals,
maximum likelihood estimates, and procedures for model selection. The description here is
accessible, and at a mathematical level which is just right to explain the ideas and methods from
a practical perspective, without labouring mathematical niceties.
I particularly appreciated the format of the outline of the book in the preface, which attempts
to cater for a wide readership, and which spells out clearly what parts could be read ‘if you
only want a snack’, or for ‘readers wanting a more substantial meal’, or those who ‘want the
full banquet’. Three concluding chapters then provide ‘after-dinner cocktails’ of applications to
inventory control, economics, and finance. Each chapter concludes with exercises.
There is an associated website containing data sets, computer code, and additional exercises.
Details of public domain R software for the methods described in the book are given.
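As a pointer to that software, a minimal sketch of my own using the ets() function from Hyndman’s forecast package, which fits innovations state space exponential smoothing models by maximum likelihood:

    library(forecast)
    fit <- ets(AirPassengers)     # automatic innovations state space model selection
    fc <- forecast(fit, h = 12)   # 12-month-ahead forecasts with prediction intervals
    plot(fc)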
In summary, this is a perfect introduction to the subject, and an ideal text for an advanced
undergraduate or beginning postgraduate course in the topic.
David J. Hand: d.j.hand@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Observed Confidence Levels: Theory and Application


Alan M. Polansky
Chapman & Hall/CRC, 2008, xvi + 271 pages, £ 53.99 / US$ 83.95, hardcover
ISBN: 978-1-58488-802-4

Table of contents
1. Introduction
2. Single parameter problems
3. Multiple parameter problems
4. Linear models and regression
5. Nonparametric smoothing problems
6. Further applications
7. Connections and comparisons
Appendix: Review of asymptotic statistics

Readership: Practicing statisticians.


Given a set of regions of a parameter space, observed confidence levels tell us how confident we
can be that the parameter lies in each of the regions. For example, the book’s opening illustration
describes how a hypothesis of bioequivalence of treatments can be examined by estimating the
observed confidence that the parameter lies in a specified interval about the value corresponding
to no clinically important difference between treatments.
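A minimal sketch of my own (not the author’s code) of the bootstrap flavour of the idea: the observed confidence level of a region can be estimated by the proportion of bootstrap estimates that fall in it.

    set.seed(7)
    x <- rnorm(50, mean = 1.05, sd = 0.3)                       # sample
    bmeans <- replicate(1e4, mean(sample(x, replace = TRUE)))   # bootstrap means
    mean(bmeans > 0.9 & bmeans < 1.1)   # observed confidence for the region (0.9, 1.1)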
The basic theory of the single parameter case and the multi-parameter case are described in
separate chapters, because of the significant extra complications in the latter. Later chapters
then look at the special cases of linear models and nonparametric smoothing methods. The
book shows how the ideas are related to other statistical concepts, including hypothesis testing,
multiple comparisons, attained confidence levels, and Bayesian approaches. It includes many
examples, from a wide range of different application domains, based on real data. There are
exercises at the end of each chapter, apart from the first, and the R code the author used is
available from his website.
Techniques such as this touch on areas which have stimulated considerable discussion, even
controversy, in the past. The author says ‘I am willing to let the examples presented in this book
stand on their own and let the readers decide for themselves what their final opinion is of these
methods.’ The breadth of real examples that the author provides certainly demonstrates that this
is a class of techniques worth considering.
David J. Hand: d.j.hand@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

A First Course in Statistical Programming with R


W. John Braun, Duncan J. Murdoch
Cambridge University Press, 2008, x + 163 pages, £ 24.99 / US$ 50.00, softcover
ISBN: 978-0-521-69424-7

Table of contents
1. Getting started
2. Introduction to the R language
3. Programming statistical graphics
4. Programming with R
5. Simulation
6. Computational linear algebra
7. Numerical optimization
Appendix: Review of random variables and distributions

Readership: Everyone interested in learning statistical programming.


Programming skills are becoming increasingly important in the repertoire of a statistician,
for example in order to apply new methods not yet available in standard software, or for
simulation studies. This book is a solid introduction to (statistical) programming and provides
the reader with all the programming tools needed to approach typical problems. The programming
language the authors suggest for this purpose is R, and they assume no prior knowledge of it.
The second chapter is hence a general introduction to R which is, however, still worthwhile
reading for readers already familiar with R because, for example, the chapter also explains
what floating point numbers are. The title of the third chapter is a bit misleading because it
actually does not teach how to program code that produces graphs but shows how to make use of
inbuilt R functions to obtain graphs. The actual material about programming is contained in
Chapters 4–7. Many important and relevant concepts of statistical programming, for instance
flow control, random number generation, numerical algebra, and optimization, are explained
there. Especially in Chapter 4 the authors use interesting problems to illustrate the concepts
in question, but in the subsequent chapters such motivation is often missing. In my opinion,
Chapters 6 and 7 especially might gain a lot by placing the concepts discussed there more firmly
into a statistical framework and applying them to concrete statistical problems.
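In that spirit, one concrete statistical motivation (my own toy example, not one of the book’s) combining the flow control of Chapter 4 with the simulation theme of Chapter 5:

    ## Monte Carlo estimate of P(maximum of three dice equals 6).
    set.seed(123)
    hits <- 0; nsim <- 1e4
    for (i in seq_len(nsim)) {
      if (max(sample(1:6, 3, replace = TRUE)) == 6) hits <- hits + 1
    }
    hits / nsim   # exact value: 1 - (5/6)^3, approximately 0.4213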
Overall, the book is a concise and easy-to-read introduction to programming in R; however,
if used in a course on statistical programming, the connection of some of the concepts to
statistical problems might need some more elaboration.

Klaus Nordhausen: klaus.nordhausen@uta.fi


Tampere School of Public Health
FI-33014 University of Tampere, Finland

Introduction to Empirical Processes and Semiparametric Inference


Michael R. Kosorok
Springer, 2008, xiv + 483 pages, € 69.95 / £ 62.99 / US$ 89.95, hardcover
ISBN: 978-0-387-74977-8

Table of contents
Part I. Overview
1. Introduction
2. An overview of empirical processes
3. Overview of semiparametric inference
4. Case studies I
Part II. Empirical Processes
5. Introduction to empirical processes
6. Preliminaries for empirical processes
7. Stochastic convergence
8. Empirical process methods
9. Entropy calculation
10. Bootstrapping empirical processes
11. Additional empirical process results
12. The functional delta method
13. Z-estimators
14. M-estimators
15. Case studies II
Part III. Semiparametric Inference
16. Introduction to semiparametric inference
17. Preliminaries for semiparametric inference
18. Semiparametric models and efficiency
19. Efficient inference for finite-dimensional parameters
20. Efficient inference for infinite-dimensional parameters
21. Semiparametric M-estimation
22. Case studies III

Readership: Researchers who need to develop inferential tools for relatively complicated
mathematical or statistical modeling problems, statisticians and biostatisticians, advanced
students.
The main focus of this book is to introduce empirical processes and semiparametric inference
methods to researchers interested in developing inferential tools for relatively complicated
mathematical or statistical modeling problems. The material is largely self-contained, so
readers with a moderate knowledge of probability and mathematical statistics can become
acquainted with these areas with the aid of the book. The material is divided into three parts.
The first part gives an overview of the topics, postponing the underlying mathematical definitions
and proofs to the later parts. The second part introduces the foundations of empirical
processes, and the third part the foundations of semiparametric inference. Each part is
followed by a set of case studies of statistical or biostatistical modeling. The material is
structured in a sensible way, supporting the learning and understanding of the useful and
challenging techniques of empirical processes and semiparametric inference. The book could well
be very helpful for those studying and applying these techniques.

Tapio Nummi: tapio.nummi@uta.fi


Tampere School of Public Health
FI-33014 University of Tampere, Finland


Medical Biostatistics, Second Edition


Abhaya Indrayan
Chapman & Hall/CRC, 2008, 824 pages, £ 55.99 / US$ 99.95, hardcover
ISBN: 978-1-58488-887-1

Table of contents
1. Medical uncertainties 13. Inference from proportions
2. Basics of medical studies 14. Relative risk and odds ratio
3. Sampling methods 15. Inference from means
4. Designs of observational studies 16. Relationships: Quantitative data
5. Medical experiments 17. Relationships: Qualitative dependent
6. Clinical trials 18. Survival analysis
7. Numerical methods for representing variation 19. Simultaneous consideration of several variables
8. Presentation of variation by figures 20. Quality considerations
9. Some qualitative aspects of medicine 21. Statistical fallacies
10. Clinimetrics and evidence-based medicine Appendix 1: Statistical software
11. Measurement of community health Appendix 2: Some statistical tables
12. Confidence intervals, principles of tests of
significance, and sample size

Readership: Medical and health professionals, advanced and basic level students of health and
medicine, biostatisticians.
The book contains a fairly comprehensive introduction to the basic statistical concepts and
methods used in the medical and health sciences. The basic idea is to give clear verbal
explanations with good examples, avoiding heavy mathematical definitions or formulations. My
opinion is that the author has succeeded very well in this challenging task. With the aid of the
book it should be relatively easy to grasp the main ideas, and if deeper understanding is needed,
a list of further references is provided for each topic. As a marginal weak point I can mention
that R, a very widely adopted free statistical software package, is not mentioned in the list of
general-purpose statistical software. Nevertheless, I can easily recommend this book for persons
working in the area of medical biostatistics.

Tapio Nummi: tapio.nummi@uta.fi


Tampere School of Public Health
FI-33014 University of Tampere, Finland


Clinical Prediction Models: A Practical Approach to Development, Validation, and Updating

Ewout W. Steyerberg
Springer, 2009, xxviii + 497 pages, £ 53.99 / € 59.95 / US$ 89.95, hardcover
ISBN: 978-0-387-77243-1

Table of contents

1. Introduction 14. Estimation with external information


Part I. Prediction Models in Medicine 15. Evaluation of performance
2. Applications of prediction models 16. Clinical usefulness
3. Study design for prediction models 17. Validation of prediction models
4. Statistical models for prediction 18. Presentation formats
5. Overfitting and optimism in prediction models Part III. Generalizability of Prediction Models
6. Choosing between alternative statistical models 19. Patterns of external validity
Part II. Developing Valid Prediction Models 20. Updating for a new setting
7. Dealing with missing values 21. Updating for multiple settings
8. Case study on dealing with missing values Part IV. Applications
9. Coding of categorical and continuous predictors 22. Prediction of a binary outcome: 30-day
10. Restrictions on candidate predictors mortality after acute myocardial infarction
11. Selection of main effects 23. Case study on survival analysis: Prediction
12. Assumptions in regression models: Additivity of secondary cardiovascular events
and linearity 24. Lessons from case studies
13. Modern estimation methods

Readership: This book is suitable for those with a basic knowledge of biostatistics and statistical
modeling. The intended audience includes epidemiologists and applied biostatisticians looking
for a practical guide to developing and testing clinical prediction models, or health care
professionals and health policy makers interested in critically appraising a clinical prediction
model.
Clinical Prediction Models is an excellent practical guide for developing, assessing and updating
clinical models both for disease prognosis and diagnosis. The book’s clinical focus in this era
of evidence-based medicine is refreshing and serves as a much-needed addition to statistical
modelling of clinical data. The book assumes a basic familiarity with modelling using generalized
linear models, focussing instead on the real challenges facing applied biostatisticians and
epidemiologists wanting to create useful models: dealing with a plethora of model choices,
small sample sizes, many candidate predictors and missing data. This is an example-based book
illuminating the vagaries of clinical data and offering sound practical advice on data exploration,
model selection and data presentation. Model selection is at the core of the text with in-depth
discussion of choices of candidate predictors, pre-specified models, models with interactions,
stepwise selection methods in linear models, as well as modelling using generalized additive
models (GAM), fractional polynomials and restricted cubic splines. There are also a few pages
devoted to more modern selection methods such as Bayesian model averaging (BMA). There
is an excellent discussion of estimation bias, over-fitting and optimism in prediction models
motivating the use of methods to correct for overestimation of model coefficients. Uniform
shrinkage methods, penalized maximum likelihood methods, and least absolute shrinkage and
selection operator (LASSO) shrinkage for selection are discussed in some detail.
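As a rough sketch of what LASSO shrinkage looks like in practice (my own illustration on
simulated data, not code from the book, and assuming the glmnet package is installed):

  # LASSO for a binary outcome: the penalty shrinks most coefficients of
  # the 20 candidate predictors exactly to zero.
  library(glmnet)
  set.seed(42)
  n <- 200; p <- 20
  x <- matrix(rnorm(n * p), n, p)        # candidate predictors
  eta <- x[, 1] - 0.5 * x[, 2]           # only two predictors truly matter
  y <- rbinom(n, 1, plogis(eta))         # simulated binary outcome
  cvfit <- cv.glmnet(x, y, family = "binomial")  # cross-validated penalty
  coef(cvfit, s = "lambda.1se")          # shrunken, mostly zero coefficients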
The author considers many interesting examples of clinical data throughout the text, using
data from rich data sources like the GUSTO-1 and the SMART studies. These data sets are made
available on the book’s website (http://www.clinicalpredictionmodels.org) for the purposes of
promoting practical experience with modelling.
The author uses small simulations, built from a few reproducible R commands, to motivate the use
of imputation methods and shrinkage. These simple but illuminating illustrations are among the
highlights of the book and serve as excellent pedagogical tools for encouraging good statistical
thinking.
There is some mention of statistical software available for trying out the newer estimation
methods. The author shows partiality to R, provides some R code in the book, and makes full
programs available on the website. This may be an impediment to some readers wedded to
menu-driven packages.

Teresa Neeman: teresa.neeman@anu.edu.au


Statistical Consulting Unit, Australian National University
Canberra ACT 0200, Australia

Analysis of Messy Data Volume 1: Designed Experiments, Second Edition


George A. Milliken, Dallas E. Johnson
Chapman & Hall/CRC, 2009, xiv + 674 pages, £ 54.99 / US$ 89.95, hardcover
ISBN: 978-1-58488-334-0

Table of contents
1. The simplest case: one-way treatment structure in 14. Using the effects model to analyze two-way
a completely randomized design structure with treatment structures with missing treatment
homogeneous errors combinations
2. One-way treatment structure in a completely 15. Case study: two-way treatment structure with
randomized design structure with heterogeneous missing treatment combinations
errors 16. Analyzing three-way and higher-order treatment
3. Simultaneous inference procedures and multiple structures
comparisons 17. Case study: three-way treatment structure with
4. Basics for designing experiments many missing treatment combinations
5. Multi-level designs: split-plots, strip-plots, 18. Random effects models and variance components
repeated measures, and combinations 19. Methods for estimating variance components
6. Matrix form of the model 20. Methods for making inferences about variance
7. Balanced two-way treatment structures components
8. Case study: complete analyses of balanced 21. Case study: analysis of a random effects model
two-way experiments 22. Analysis of mixed models
9. Using the means model to analyze balanced two- 23. Case studies of a mixed model
way treatment structures with unequal subclass 24. Methods for analyzing split-plot type designs
numbers 25. Methods for analyzing strip-plot type designs
10. Using the effects model to analyze balanced two- 26. Methods for analyzing repeated measures
way treatment structures with unequal subclass experiments
numbers 27. Analysis of repeated measures experiments when
11. Analyzing large balanced two-way experiments the ideal conditions are not satisfied
having unequal subclass numbers 28. Case studies: complex examples having repeated
12. Case study: balanced two-way treatment structure measures
with unequal subclass numbers 29. Analysis of crossover designs
13. Using the means model to analyze two-way 30. Analysis of nested designs
treatment structures with missing treatment
combinations


Readership: Experimenters and statisticians involved with classic plot experiments and their
analyses.
The 1984 first edition had 473 pages (32 chapters) compared with the second edition’s 674 (30),
and many of the chapter headings and many segments of text are the same or similar in both
editions. This might give a first impression that the new revision is perhaps not worth investing
in. However, the exact opposite is true! New to the second edition are “modern suggestions
for multiple comparisons . . . additional examples of split plot and repeated measures designs
. . . and the use of SAS-GLM, . . . , SAS-MIXED and JMP” in various analyses. Every chapter
has been systematically re-written for greater clarity, and added explanatory material has been
inserted throughout. Many new diagrams and redrawn diagrams have been provided; those that
show how to lay out the experimental designs are just superb and extraordinarily clear. The
reference list has increased from 44 to 99. This revision is highly recommended to those who
plan and analyze experiments of the type described.

Norman R. Draper: draper@stat.wisc.edu


Department of Statistics, University of Wisconsin – Madison
1300 University Avenue, Madison, WI 53706-1532, USA

Statistical Misconceptions
Schuyler W. Huck
Psychology Press, 2009, xx + 288 pages, £ 19.95 / US$ 32.95, softcover (also available in
hardcover)
ISBN: 978-0-8058-5904-1

Table of contents
1. Descriptive statistics 9. t-Tests involving one or two means
2. Distributional shape 10. ANOVA and ANCOVA
3. Bivariate correlation 11. Practical significance, power, and effect size
4. Reliability and validity 12. Regression
5. Probability Appendix A: Citations for material referenced in the preface
6. Sampling Appendix B: References for quotations presented in the sections
7. Estimation entitled “Evidence that this misconception exists”
8. Hypothesis testing

Readership: Students and teachers of introductory statistics; statisticians wishing to
communicate statistical theory unambiguously.
This book with its royal purple cover is dedicated (p. v) to “three groups of individuals: those
who have overcome one or more of their statistical misconceptions, those who will . . . follow
in the footsteps of the first group, and those whose life’s work includes helping others to cast
aside false beliefs about statistics.” Those who have overcome statistical misconceptions should
be pleased to see the road they travelled laid out in clear terms. Those who will have to face
these misconceptions in the future should also find this book enlightening. I think the last group
are most strongly advised to buy this book, as it will encourage them to think about what they
say in their classes. Those who rely on a firm rule-based approach to statistics will be the most
disappointed, as most of the rules they have developed will be overturned.


The book is very highly structured, which some readers may find helpful and others may
find irritating after a while. Each chapter consists of three to six misconceptions. For each
misconception, there are five parts. Firstly, the misconception is stated in one or two sentences.
Then, evidence is given that the misconception exists. This is in the form of quotes from journals,
textbooks and websites. All the references are given in an appendix: I hope you do not find your
own work referred to there!
The third part is a section explaining why the misconception is dangerous. These sections are
short yet they manage to explain the dangers in an accessible manner to someone who has studied
an introductory statistics course. Next, the misconception is undone. Sometimes this involves
a graph or picture, sometimes it is simply a page or two of text helping readers
to confront and then dispel the misconception. Finally, an internet assignment is given. This
invariably points to the book’s website, http://www.psypress.com/statistical-misconceptions/.
These worked easily when I tested a handful of them, and some connected to applets produced
by well-known statistics educators such as Alan Rossman and Beth Chance. Some sections also
include a list of recommended reading.
Now for some details of the misconceptions themselves. Some of them are indeed dangerous
misunderstandings of statistical theory, such as “If the 95% confidence interval that’s
constructed for one sample partially overlaps the 95% confidence interval that’s constructed
for a second sample, the two samples are not significantly different from each other at α =
0.05.”
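A small numerical check shows why this is a misconception; in the R sketch below (my own, not
from the book) the two 95% intervals overlap, yet the two-sample test still rejects at α = 0.05:

  # Two groups of size 100 with sample SD 1, so each standard error is 0.1.
  se <- 1 / sqrt(100)
  m1 <- 0.00; m2 <- 0.32
  m1 + c(-1, 1) * 1.96 * se            # CI 1: (-0.196, 0.196)
  m2 + c(-1, 1) * 1.96 * se            # CI 2: ( 0.124, 0.516) -- they overlap
  z <- (m2 - m1) / sqrt(2 * se^2)      # two-sample z statistic: about 2.26
  2 * pnorm(-abs(z))                   # two-sided p-value: about 0.024 < 0.05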
Some of them are statistical paradoxes that have been widely written about, e.g. the birthday
problem and the Monty Hall problem.
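The birthday result, for instance, is easily verified numerically; a one-line R computation (my
own illustration, not from the book) confirms that 23 people already give a better-than-even
chance of a shared birthday:

  # P(at least one shared birthday among 23 people), ignoring leap years
  1 - prod((365:343) / 365)            # about 0.507; compare pbirthday(23)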
But some of them are not so much misconceptions as simplifications that many educators
would use to introduce a topic in an unequivocal manner, so as to leave exceptions and special
cases for later study. For instance, “The null hypothesis is always a statement of ‘no difference’”
would frequently be used by educators to introduce hypothesis testing. Or, “there are three
different measures of central tendency: the mean, the median and the mode”. Of course there
are others, but these three cover a wide variety of situations, and even three may be more than
is needed for an introductory statistics course.
This book has the potential to shake statisticians out of any complacency they have about
conveying the precise meaning of fundamental statistical theory and methods. Since the aim of
statistics is to be accurate and precise, this book could usefully find a place on the shelves of
most statisticians.
Alice Richardson: alice.richardson@canberra.edu.au
Faculty of Information Sciences and Engineering, University of Canberra
Bruce ACT 2601, Australia


Statistical Inference, Econometric Analysis and Matrix Algebra: Festschrift in Honour of Götz Trenkler

Bernhard Schipp, Walter Krämer (Editors)
Physica-Verlag, 2009, xvi + 434 pages, € 119.95 / £ 108.00 / US$ 189.00, hardcover
ISBN: 978-3-7908-2120-8

Table of contents
Part I. Nonparametric Inference 14. Minimum description length model selection
1. Adaptive tests for the c-sample location problem in Gaussian regression under data constraints
(H. Büning) (E.P. Liski, A. Liski)
2. On nonparametric tests for trend detection in 15. Self-exciting extreme value models for stock
seasonal time series (O. Morell, R. Fried) market crashes (R. Herrera, B. Schipp)
3. Nonparametric trend tests for right-censored 16. Consumption and income: a spectral analysis
survival times (S. Leissen, U. Ligges, M. (D.S.G. Pollock)
Neuhäuser, L.A. Hothorn) Part V. Stochastic Processes
4. Penalty specialists among goalkeepers: a 17. Improved estimation strategy in multi-factor
nonparametric Bayesian analysis of 44 years of Vasicek model (S. Ejaz Ahmed, S. Nkurunziza,
German Bundesliga (B. Bornkamp, A. Fritsch, S. Liu)
O. Kuss, K. Ickstadt) 18. Bounds on expected coupling times in a Markov
5. Permutation tests for validating computer chain (J. J. Hunter)
experiments (T. Mühlenstädt, U. Gather) 19. Multiple self-decomposable laws on vector spaces
Part II. Parametric Inference and on groups: the existence of background
6. Exact and generalized confidence intervals in the driving processes (W. Hazod)
common mean problem (J. Hartung, G. Knapp) Part VI. Matrix Algebra and Matrix Computations
7. Locally optimal tests of independence for 20. Further results on Samuelson’s inequality
archimedean copula families (J. Rahnenführer) (R.W. Farebrother)
Part III. Design of Experiments and Analysis of 21. Revisitation of generalized and hypergeneralized
Variance projectors (O.M. Baksalary)
8. Optimal designs for treatment-control 22. On singular periodic matrices (J. Groß)
comparisons in microarray experiments 23. Testing numerical methods solving the linear
(J. Kunert, R.J. Martin, S. Rothe) least squares problem (C. Weihs)
9. Improving Henderson’s method 3 approach when 24. On the computation of the Moore–Penrose
estimating variance components in a two-way inverse of matrices with symbolic elements
mixed linear model (R. al Sarraj, D. von Rosen) (K. Schmidt)
10. Implications of dimensionality on measurement 25. On permutations of matrix products (H.J. Werner,
reliability (K. Vehkalahti, S. Puntanen, I. Olkin)
L. Tarkkonen) Part VII. Special Topics
Part IV. Linear Models and Applied Econometrics 26. Some comments on Fisher’s α index of diversity
11. Robust moment based estimation and inference: and on the Kazwini cosmography (O.M. Baksalary,
the generalized Cressie–Read estimator K.L. Chu, S. Puntanen, G.P.H. Styan)
(R.C. Mittelhammer, G.G. Judge) 27. Ultimatum games and fuzzy information
12. More on the F-test under nonspherical (P. Sander, P. Stahlecker)
disturbances (W. Krämer, C. Hanck) 28. Are Bernstein’s examples on independent
13. Optimal estimation in a linear regression model events paradoxical? (C. Stepniak,
T. Owsiany)
using incomplete prior information 29. A classroom example to demonstrate statistical
(H. Toutenburg, Shalabh, C. Heumann) concepts (D. Trenkler)
Selected Publications of Götz Trenkler

Readership: Researchers interested in statistics and econometrics, both theoretical and applied.


The book is dedicated to Professor Götz Trenkler on the occasion of his 65th birthday. It starts
with a short biography of Professor Trenkler, covering his main scientific achievements as well as
some personal details, such as his passion for chess and tennis.
The first part contains the list of contributors: 55 authors published papers in the volume,
and 17 of them are also Professor Trenkler’s co-authors.
The volume collects 29 articles, divided into seven sections that reflect Professor Trenkler’s
interests: nonparametric and parametric inference, design of experiments and analysis of
variance, linear models and applied econometrics, stochastic processes, and matrix algebra and
matrix computations in their relation to statistics. In their articles the authors refer to
Professor Trenkler’s papers, for example his joint papers with H. Toutenburg and E. P. Liski
(1992), H. Büning (1994), J. Diersen (1996, 2001), D. Trenkler (1983), S. Puntanen (2006),
J. Groß (1997), and K. Schmidt (2006).
Finally, a list of selected publications of Professor Götz Trenkler is presented. He is the
author or coauthor of 8 monographs and 159 scientific articles, published in well-known journals
as well as in conference proceedings and festschrifts in honour of his colleagues. In his papers
Professor Trenkler presents both theoretical results and applications.
Katarzyna Filipiak: kasfil@up.poznan.pl
Department of Mathematical and Statistical Methods
Poznań University of Life Sciences
Wojska Polskiego 28, 60-637 Poznań, Poland

Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology


Andrew B. Lawson
Chapman & Hall/CRC, 2008, xviii + 344 pages, £ 49.99 / US$ 79.95, hardcover
ISBN: 978-1-58488-840-6

Table of contents
Part I. Background 6. Disease cluster detection
1. Introduction 7. Ecological analysis
2. Bayesian inference and modeling 8. Multiple scale analysis
3. Computational issues 9. Multivariate disease analysis
4. Residuals and goodness-of-fit 10. Spatial survival and longitudinal analysis
Part II. Themes 11. Spatiotemporal disease mapping
5. Disease map reconstruction and relative A. Basic R and WinBUGS
risk estimation B. Selected WinBUGS code
C. R code for thematic mapping

Readership: Statisticians and epidemiologists interested in spatial and spatiotemporal Bayesian
modeling and analysis and disease mapping.
Lawson begins by building a solid Bayesian background in Chapters 2, 3, and 4, covering
basic theory and methods, computation, and residuals and measures of goodness of fit,
including AIC, BIC, and DIC. Only the treatment of priors is not as strong as that of the other
aspects of Bayesian analysis. For example, the inverse gamma with very small hyperparameters
is mentioned following Gelman (Bayesian Analysis, 2006), but it is not pointed out that Gelman
is actually very negative about this particular prior and mentions it as an example of priors
not to choose for the standard deviation of random effects. He recommends the uniform,
which appears later in this book, and the half Cauchy, a relatively new prior introduced by
Gelman.
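The practical difference between these priors is easy to see by simulation; the R sketch below
(my own, not from the book; the half-Cauchy scale of 25 follows Gelman’s example) draws the
implied prior for a random-effects standard deviation under each choice:

  # Prior on the random-effects standard deviation sigma implied by an
  # inverse-gamma(0.001, 0.001) prior on the variance, versus a half-Cauchy
  # prior placed on sigma directly.
  set.seed(7)
  tau   <- rgamma(10000, shape = 0.001, rate = 0.001)   # precision draws
  sd_ig <- 1 / sqrt(tau)                                # implied sigma
  sd_hc <- abs(rcauchy(10000, location = 0, scale = 25))
  quantile(sd_ig, c(0.25, 0.5, 0.75))   # enormous values: mass far from zero
  quantile(sd_hc, c(0.25, 0.5, 0.75))   # heavy-tailed but far better behaved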
The remaining seven chapters provide a thorough review of modeling relative risk, and its
analysis and mapping, different types of clustering in spatial (or spatiotemporal) distribution of
disease, multiple (spatial) scale analysis of disease distribution, and spatiotemporal modeling.
Other important topics include establishing relations between geo-referenced data and predictors
like poverty, exposure to hazardous material, etc. Also included are multivariate disease analysis
and spatial survival analysis.
Lawson provides well-written reviews of many topics, and many aspects of those topics are
covered in his reviews. The literature cited is huge and diverse, showing the current importance
of the subjects covered. One can also gain hands-on training in the analysis and visual
presentation so stressed in the book by carefully following the detailed introduction to R and
WinBUGS given there.
Many important data sets used in the book are available at
http://www.musc.edu/biometry/people/lawsonab/Data%20and%20Programs.html but do not seem to be
available at the site given on p. 5 of the book.
Readers of this book might also consider reading the monograph by Banerjee, Carlin, and
Gelfand (Hierarchical Modeling and Analysis for Spatial Data, 2004), also published by
Chapman and Hall. This could be a good companion volume stressing more basic theory
and also models based on Gaussian processes and variograms.

Jayanta K. Ghosh: ghosh@stat.purdue.edu


Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

Design and Analysis of Bioavailability and Bioequivalence Studies, Third Edition


Shein-Chung Chow, Jen-pei Liu
Chapman & Hall/CRC, 2008, xxii + 733 pages, £ 63.99 / US$ 99.95, hardcover
ISBN: 978-1-58488-668-6

Table of contents

I Preliminaries III Population and Individual Bioequivalence


1. Introduction 11. Population and individual bioequivalence
2. Design of bioavailability studies 12. Statistical procedures for assessment of population
3. Statistical inference for effects from a standard and individual bioequivalence
2 × 2 crossover design IV In Vitro and Alternative Evaluation
II Average Bioequivalence Bioequivalence
4. Statistical methods for average bioequivalence 13. Assessment of bioequivalence for drugs with
5. Power and sample size determination negligible plasma levels
6. Transformation and analysis of individual 14. In vitro bioequivalence testing
subject ratios 15. In vitro dissolution profiles comparison
7. The assessment of inter- and intra-subject V Other Bioequivalence Studies
variabilities 16. Meta-analysis for bioequivalence review
8. Assumptions of outlier detection for average 17. Population pharmacokinetics
bioequivalence 18. Other pharmacokinetic studies
9. Optimal crossover designs for two formulations 19. Review of regulatory guidances on bioequivalence
for average bioequivalence 20. Frequently asked questions and future challenges
10. Assessment of average bioequivalence for more Appendix A. Statistical tables
than two formulations Appendix B. SAS programs


Readership: Statisticians in pharmaceutical companies and at the FDA; biostatisticians
interested in Bioequivalence and Bioavailability and how to test these, and in FDA regulations
and guidelines relating to these very important topics.
Most statisticians, including the present reviewer, are aware that the null and alternative
hypotheses for testing Bioequivalence are the opposite of the usual null and alternative for
tests of superiority of a new drug over the old drug on the market. Of course the same statistical
philosophy underlies both cases: for superiority tests, the claim for the new drug is the
alternative, to be established by producing enough evidence, while its negation is the null and
represents a sort of status quo that we abandon only when enough evidence is produced.
In the same vein, the claim that a new generic drug is Bioequivalent to the old drug, whose
patent is running out, is the alternative, and its negation, namely that they are different, is
the null, representing the status quo. This makes the problem very intriguing, since we rarely
have to test such nulls. In this context, it is interesting that Lehmann, in his famous book
Testing Statistical Hypotheses, published in 1959, showed that the usual formulation

H0: |μ1 − μ2| ≥ δ versus H1: |μ1 − μ2| < δ

admits a UMP test (for given δ > 0 and some level of significance α). Lehmann’s solution came
many years before testing Bioequivalence became a hot problem.
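In current regulatory practice, average Bioequivalence is usually assessed with the closely
related two one-sided tests (TOST) procedure rather than the UMP construction. A minimal R
sketch of TOST follows (my own illustration, not from the book, and simplified to a
parallel-group design rather than the usual 2 × 2 crossover):

  # TOST on log-transformed responses: conclude equivalence if both
  # one-sided nulls are rejected, i.e. both p-values are below alpha.
  set.seed(123)
  logT  <- rnorm(24, mean = 0.05, sd = 0.25)   # log AUC, test formulation
  logR  <- rnorm(24, mean = 0.00, sd = 0.25)   # log AUC, reference
  delta <- log(1.25)                           # conventional 80%-125% limits
  p_lo <- t.test(logT, logR, mu = -delta, alternative = "greater")$p.value
  p_hi <- t.test(logT, logR, mu =  delta, alternative = "less")$p.value
  max(p_lo, p_hi) < 0.05        # TRUE => equivalence concluded at the 5% level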
What is less widely appreciated is that the problem of Bioequivalence is far more complex and
remains one of the most challenging problems, with no consensus on many aspects of what should
be tested and how. For example, should Bioequivalence be assessed for population averages or at
the level of individual patients (so that switching is possible)? Moreover, there are ethical
questions as to whether the test should be made on patients or on healthy volunteers, since
there is no question of the new drug being superior: at best it can be equivalent. If the test
is to be made on healthy volunteers, it cannot be a clinical trial that tests the effects of the
drugs. Rather, testing on volunteers can only help decide whether the two drugs have the same
sort of pharmaco-kinetic parameters, for example whether the drugs are available at the site
where a drug is needed, for how long, and in what strength. In other words, that would be
testing Bioavailability rather than Bioequivalence, though clearly the two are closely related.
At over seven hundred pages, the book provides encyclopedic coverage of all these issues
and more. It is divided into five parts, each of which is further subdivided into chapters. Part I
explains what Bioequivalence means, the history of the evolution of this concept as well as
that of Bioavailability, and the pharmaco-kinetic parameters that are used to measure it. Part I
also discusses some basic designs and inference on population averages. Part II provides a
detailed discussion of inference for comparing Bioavailability at the level of population
averages. There is also a detailed discussion of FDA regulations. These two parts could form the
basis of a good course on Bioequivalence and its proxy, namely Bioavailability.
The remaining chapters are more specialized. Part III deals with tests at the level of
individuals, but these individuals are still healthy volunteers and the tests are on
Bioavailability. The FDA recognizes the importance of such tests, but they are still
unregulated. Part V provides a review of many other related topics, like the testing of
inhalers, which leave no trace in the blood and so require a different definition of
Bioavailability, its measurement, and its testing.
The reviewer notes that after FDA approval, generic drugs usually replace the original, more
expensive drug. Such data could lead to post-approval studies on Bioequivalence directly,
without resorting to proxies. Probably such studies are already taking place. If not, the data
mentioned
above could be a rich source of information at the level of both individuals and the population
average.
Jayanta K. Ghosh: ghosh@stat.purdue.edu
Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

