R for Statistics
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents
Preface xiii
I An Overview of R 1
1 Main Concepts 3
1.1 Installing R . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Work Session . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Session in Linux . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Session in Windows . . . . . . . . . . . . . . . . . . . 4
1.2.3 Session on a Mac . . . . . . . . . . . . . . . . . . . . . 5
1.3 Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 Online Assistance . . . . . . . . . . . . . . . . . . . . . 5
1.3.2 Help on CRAN . . . . . . . . . . . . . . . . . . . . . . 6
1.4 R Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.1 Creating, Displaying and Deleting Objects . . . . . . . 6
1.4.2 Type of Object . . . . . . . . . . . . . . . . . . . . . . 7
1.4.3 The Missing Value . . . . . . . . . . . . . . . . . . . . 8
1.4.4 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.4.1 Numeric Vectors . . . . . . . . . . . . . . . . 9
1.4.4.2 Character Vectors . . . . . . . . . . . . . . . 10
1.4.4.3 Logical Vectors . . . . . . . . . . . . . . . . . 11
1.4.4.4 Selecting Part of a Vector . . . . . . . . . . . 12
1.4.4.5 Selection in Practice . . . . . . . . . . . . . . 13
1.4.5 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.5.1 Selecting Elements or Part of a Matrix . . . 14
1.4.5.2 Calculating with the Matrices . . . . . . . . 16
1.4.5.3 Row and Column Operations . . . . . . . . . 17
1.4.6 Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.7 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4.7.1 Creating a List . . . . . . . . . . . . . . . . . 20
1.4.7.2 Extraction . . . . . . . . . . . . . . . . . . . 20
1.4.7.3 A Few List Functions . . . . . . . . . . . . . 21
1.4.7.4 Dimnames List . . . . . . . . . . . . . . . . . 21
1.4.8 Data-Frames . . . . . . . . . . . . . . . . . . . . . . . 22
1.5 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.5.1 Arguments of a Function . . . . . . . . . . . . . . . . 23
1.5.2 Output . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.6 Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.6.1 What Is a Package? . . . . . . . . . . . . . . . . . . . . 24
1.6.2 Installing a Package . . . . . . . . . . . . . . . . . . . 24
1.6.3 Updating Packages . . . . . . . . . . . . . . . . . . . . 25
1.6.4 Using Packages . . . . . . . . . . . . . . . . . . . . . . 25
1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2 Preparing Data 29
2.1 Reading Data from File . . . . . . . . . . . . . . . . . . . . . 29
2.2 Exporting Results . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 Manipulating Variables . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 Changing Type . . . . . . . . . . . . . . . . . . . . . . 34
2.3.2 Dividing into Classes . . . . . . . . . . . . . . . . . . . 35
2.3.3 Working on Factor Levels . . . . . . . . . . . . . . . . 36
2.4 Manipulating Individuals . . . . . . . . . . . . . . . . . . . . 39
2.4.1 Identifying Missing Data . . . . . . . . . . . . . . . . . 39
2.4.2 Finding Outliers . . . . . . . . . . . . . . . . . . . . . 42
2.5 Concatenating Data Tables . . . . . . . . . . . . . . . . . . . 43
2.6 Cross-Tabulation . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 R Graphics 51
3.1 Conventional Graphical Functions . . . . . . . . . . . . . . . 51
3.1.1 The Plot Function . . . . . . . . . . . . . . . . . . . . 52
3.1.2 Representing a Distribution . . . . . . . . . . . . . . . 58
3.1.3 Adding to Graphs . . . . . . . . . . . . . . . . . . . . 60
3.1.4 Graphs with Multiple Dimensions . . . . . . . . . . . 64
3.1.5 Exporting Graphs . . . . . . . . . . . . . . . . . . . . 66
3.1.6 Multiple Graphs . . . . . . . . . . . . . . . . . . . . . 67
3.1.7 Multiple Windows . . . . . . . . . . . . . . . . . . . . 69
3.1.8 Improving and Personalising Graphs . . . . . . . . . . 70
3.2 Graphical Functions with lattice . . . . . . . . . . . . . . . . 73
3.2.1 Characteristics of a Lattice Graph . . . . . . . . . . 75
3.2.2 Formulae and Groups . . . . . . . . . . . . . . . . . . 76
3.2.3 Customising Graphs . . . . . . . . . . . . . . . . . . . 79
3.2.3.1 Panel Function . . . . . . . . . . . . . . . . . 79
3.2.3.2 Controlling Size . . . . . . . . . . . . . . . . 80
3.2.3.3 Panel Layout . . . . . . . . . . . . . . . . . . 81
3.2.4 Exportation . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2.5 Other Packages . . . . . . . . . . . . . . . . . . . . . . 82
3.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
7 Regression 133
7.1 Simple Linear Regression . . . . . . . . . . . . . . . . . . . . 133
7.1.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.1.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.1.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.1.4 Processing the Example . . . . . . . . . . . . . . . . . 134
7.1.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 138
7.1.6 Taking Things Further . . . . . . . . . . . . . . . . . . 139
7.2 Multiple Linear Regression . . . . . . . . . . . . . . . . . . . 140
7.2.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.2.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.2.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.2.4 Processing the Example . . . . . . . . . . . . . . . . . 141
7.2.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 145
7.2.6 Taking Things Further . . . . . . . . . . . . . . . . . . 146
7.3 Partial Least Squares (PLS) Regression . . . . . . . . . . . . 147
7.3.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 147
7.3.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7.3.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
7.3.4 Processing the Example . . . . . . . . . . . . . . . . . 148
7.3.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 153
7.3.6 Taking Things Further . . . . . . . . . . . . . . . . . . 153
9 Classification 181
9.1 Linear Discriminant Analysis . . . . . . . . . . . . . . . . . . 181
9.1.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 181
9.1.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9.1.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
9.1.4 Processing the Example . . . . . . . . . . . . . . . . . 183
9.1.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 188
9.1.6 Taking Things Further . . . . . . . . . . . . . . . . . . 188
9.2 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . 190
9.2.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 190
9.2.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 190
9.2.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
9.2.4 Processing the Example . . . . . . . . . . . . . . . . . 191
9.2.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 197
9.2.6 Taking Things Further . . . . . . . . . . . . . . . . . . 198
9.3 Decision Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
9.3.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 199
9.3.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 199
9.3.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
9.3.4 Processing the Example . . . . . . . . . . . . . . . . . 199
9.3.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 208
9.3.6 Taking Things Further . . . . . . . . . . . . . . . . . . 208
11 Clustering 241
11.1 Ascending Hierarchical Clustering . . . . . . . . . . . . . . . 241
11.1.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 241
11.1.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 241
11.1.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
11.1.4 Processing the Example . . . . . . . . . . . . . . . . . 242
11.1.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 246
11.1.6 Taking Things Further . . . . . . . . . . . . . . . . . . 247
11.2 The k-Means Method . . . . . . . . . . . . . . . . . . . . . . 252
11.2.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . 252
11.2.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 252
11.2.3 Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
11.2.4 Processing the Example . . . . . . . . . . . . . . . . . 252
11.2.5 Rcmdr Corner . . . . . . . . . . . . . . . . . . . . . . . 255
11.2.6 Taking Things Further . . . . . . . . . . . . . . . . . . 256
Appendix 257
A.1 The Most Useful Functions . . . . . . . . . . . . . . . . . . . 257
A.1.1 Generic Functions . . . . . . . . . . . . . . . . . . . . 257
A.1.2 Numerical Functions . . . . . . . . . . . . . . . . . . . 257
A.1.3 Data Handling Functions . . . . . . . . . . . . . . . . 258
A.1.4 Probability Distributions . . . . . . . . . . . . . . . . 260
Bibliography 295
Index 301
Preface
This book is the English adaptation of the second edition of the book Statistiques avec R, which was published in 2008 and was a great success in the French-speaking world. In this version, a number of worked examples have been supplemented and new examples have been added. We hope that readers will enjoy using this book for reference when working with R.
This book is aimed at statisticians in the widest sense, that is to say, all those working with datasets: science students, biologists, economists, etc. All statistical studies depend on vast quantities of information, and computerised tools are therefore becoming more and more essential. There is currently a wide variety of software packages which meet these requirements. Here we have opted for R, which has the triple advantage of being free, comprehensive, and increasingly widely used. However, no prior experience of the software is required. This work aims to be accessible and useful for both novices and experts alike.
This book is organised into two main parts: Part I focuses on the R software and the way it works, and Part II on the implementation of traditional statistical methods with R. In order to render them as independent as possible, a brief chapter offers extra help getting started (Chapter 5, A Quick Start with R) and acts as a transition: It will help those readers who are more interested in statistics than in software to be operational more quickly.
In the first chapter we present the basic elements of the software. The second chapter deals with data processing (reading data from file, merging factor levels, etc.), that is to say, common operations in statistics. As any statistical report depends on clear, concise visualisation of results, in Chapter 3 we detail the variety of possibilities available in this domain with R. We first present the construction of simple graphs with the various available options, and then detail the use of more complex graphs. Some programming basics are then presented in Chapter 4: We explain how to construct your own functions but also present some of the useful predefined functions to conduct repetitive analyses automatically. Focusing on the R software itself, this first main part enables the reader to understand the commands used in subsequent statistical studies.
The second part of the book offers solutions for dealing with a wide range of traditional statistical methods.
To draw this preface to a close, we would like to thank Jérôme Pagès and Dominique Dehay, who enabled us to write the French version under the best possible conditions. We would also like to thank Rebecca Clayton for the English translation and Agrocampus Ouest for financing this translation.
Part I
An Overview of R
1
Main Concepts
1.1 Installing R
R is an open source software environment for statistics freely distributed by CRAN (Comprehensive R Archive Network) at the following address: http://cran.r-project.org/. The installation of R varies according to the operating system (Windows, Mac OS X, or Linux) but the functions are exactly the same and most of the programs are portable from one system to another. Installing R is very simple: just follow the instructions.
>
R then waits for an instruction, as indicated by the symbol > at the begin-
ning of the line. Each instruction must be validated by Enter to be run. If
the instruction is correct, R again waits for another instruction, as indicated
by >. If the instruction is incomplete, R yields the symbol +. You must
then complete the instruction or take control again by typing Ctrl + c or
Esc. If the instruction is incorrect, an error message will appear.
For each project, we recommend that you create a folder in which an image of the session concerning this project will be saved. We also recommend that users write their commands in a text file in order to be able to use them again when needed.
By default, R saves all the objects created (variables, results tables, etc.).
At the end of the session, these objects can be saved in an image of the session
.RData, using the command save.image() or when closing a session. To close
a session in R, use the quit function:
> q()
Save workspace image? [y/n/c]:
The backup file can be found in the project folder (we will see later how
to change the destination). Saved objects are thus available for future use
using the load function. We will now look at the differences between a session
opened in Linux, in Windows, and on a Mac.
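For example, a session image can be saved and restored explicitly (here with the default file name .RData):
> save.image(".RData")   # save all the objects of the session
> load(".RData")         # restore them in a later session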
The best way to change the working directory is to use the setwd() function or to go to File then Change dir....
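For instance (the path is hypothetical):
> setwd("C:/myproject")   # set the working directory
> getwd()                 # display the current working directory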
1.3 Help
1.3.1 Online Assistance
For help with a given function, for example the mean function, simply use
> help(mean)
or
> ?mean
Help is displayed directly in the interface. It is also possible to display
help in HTML format in a web browser. For this option, type
> help.start()
HTML help offers a search function as well as functions grouped under different headings. This is probably the most appropriate help option for a novice
user.
Within the help section for each function you will find the list of arguments
to be detailed, along with their default value where appropriate, the list of
results returned by the function, and the name of the programmer. At the
end of the help section, one or more examples of how the function is used are
presented and can be copied and pasted into R in order to better understand
its use.
1.4 R Objects
R uses functions or operators which act on objects (vectors, matrices, etc.).
All the functions in this book are written in bold. This section deals with
presenting and manipulating different R objects.
> data(iris)
This command is used to load the iris dataset, which is now available. Data
can also be read directly from a file using the function read.table (see Section 2.1), for example,
> ozone <- read.table("ozone.txt",header=TRUE)
If an object does not exist, the allocation creates it. If not, the allocation
replaces the previous value without warning. We display the value of an
object x via the command
> print(x)
or, more simply
> x
By default, R saves all the objects created in the session. We therefore
recommend that you delete these objects regularly. To find the objects from
a session, use the function objects() or ls(). To delete the object x, type
> rm(x)
and to delete multiple objects
> rm(object1,object2,object3)
Finally, to delete a list of objects for which a part of the name is common, for
example the letter a, use
> rm(list=ls(pattern=".*a.*"))
It is also possible to test which mode a particular object x belongs to. The
results are Booleans with values that are either TRUE or FALSE:
> is.null(x)
> is.logical(x)
> is.numeric(x)
> is.complex(x)
> is.character(x)
It is possible to convert the object x of a given mode explicitly using the
following commands:
> as.logical(x)
> as.numeric(x)
> as.complex(x)
> as.character(x)
However, we must be careful with regard to the meaning of these conversions. R always yields a result for every conversion instruction, even if it is meaningless, as shown in the following conversion table:

From       To         Function       Conversions
Logical    Numeric    as.numeric     FALSE → 0 ; TRUE → 1
Logical    Character  as.character   FALSE → "FALSE" ; TRUE → "TRUE"
Character  Numeric    as.numeric     "1", "2", ... → 1, 2, ... ; "A", ... → NA
Character  Logical    as.logical     "FALSE", "F" → FALSE ; "TRUE", "T" → TRUE ;
                                     other characters → NA
Numeric    Logical    as.logical     0 → FALSE ; other numerics → TRUE
Numeric    Character  as.character   1, 2, ... → "1", "2", ...
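For instance, converting a character vector with non-numeric entries yields NA with a warning:
> as.numeric(c("1","2","A"))
[1]  1  2 NA
Warning message:
NAs introduced by coercion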
Each object has two intrinsic attributes: its mode mode() and its length
length(). There are also other specific attributes which vary according to the
type of object: dim, dimnames, class, names. It is possible to request a list of
these specific attributes by running the command
> attributes(object)
1.4.3 The Missing Value
Missing values are indicated in R by NA (Not Available); any operation involving a missing value yields a missing value:
> x <- NA
> print(x+1)
[1] NA
In order to know where to find the missing values for an object x, we must
ask the question:
> is.na(x)
This yields a Boolean of the same length as x. The question is also asked ele-
ment by element. In the case of a vector, this yields a logical vector the same
length as x with TRUE if the corresponding element in x is NA, and FALSE if not.
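For instance:
> x <- c(2,NA,5)
> is.na(x)
[1] FALSE  TRUE FALSE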
Other special values should be noted here: Inf for infinity and NaN for
Not a Number, values resulting from calculation problems; for example
exp(1e10) or log(-2), respectively.
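For example:
> exp(1e10)
[1] Inf
> log(-2)
[1] NaN
Warning message:
In log(-2) : NaNs produced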
1.4.4 Vectors
Vectors are atomic objects, that is to say, they are of a unique type (null, log-
ical, numeric, etc.), made up of a set of values called components, coordinates
or elements. The attribute length, obtained using the function length, gives
the number of elements in the vector.
> 1:6
[1] 1 2 3 4 5 6
> seq(1,6,by=0.5)
[1] 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0
> seq(1,6,length=5)
[1] 1.00 2.25 3.50 4.75 6.00
> rep(1,4)
[1] 1 1 1 1
> rep(c(1,2),each=3)
[1] 1 1 1 2 2 2
Construction by the scan function: R will then ask you to enter the elements one at a time. Using the argument n=3, we specify that three elements will be collected. If we do not specify n, the collection is stopped by entering an empty value.
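A minimal illustration (the values typed are arbitrary):
> z <- scan(n=3)
1: 4.1
2: 5
3: 6.2
Read 3 items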
> paste(c("X","Y"),1:5,"txt",sep=".")
[1] "X.1.txt" "Y.2.txt" "X.3.txt" "Y.4.txt" "X.5.txt"
> paste(c("X","Y"),1:5,sep=".",collapse="+")
[1] "X.1+Y.2+X.3+Y.4+X.5"
The argument collapse collapses all the elements into a single character string (a vector of length 1). For an extraction, use the function substr:
> substr("freerider",5,9)
[1] "rider"
This extracts the characters ranked from 5 to 9 from freerider which yields
"rider".
1.4.5 Matrices
Matrices are atomic objects, that is to say, they are of the same mode or
type for all values. Each value of the matrix can be located by its row and
column numbers. The two intrinsic attributes of an R object are its length, which here corresponds to the total number of elements in the matrix, and its mode, which here corresponds to the mode of the elements in this matrix. Matrices also include the dimension attribute dim, which
yields the number of rows and the number of columns. They can also possess
an optional attribute dimnames (see Section 1.4.7.4, p. 21). Hereafter are the
main ways of creating a matrix. The most widely used is the matrix function
which takes as its arguments the vector of elements and the number of rows
or columns in the matrix:
> m <- matrix(c(1,17,12,3,6,0),ncol=2)
> m
[,1] [,2]
[1,] 1 3
[2,] 17 6
[3,] 12 0
> m <- matrix(1:8,nrow=2)
> m
[,1] [,2] [,3] [,4]
[1,] 1 3 5 7
[2,] 2 4 6 8
By default, R ranks the values in a matrix by column. To rank the elements
by row, use the argument byrow:
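For example (this byrow version of m is the one assumed in the selection examples which follow):
> m <- matrix(1:8,nrow=2,byrow=TRUE)
> m
     [,1] [,2] [,3] [,4]
[1,]    1    2    3    4
[2,]    5    6    7    8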
Selection by positive integers:
> m[i,]
gives row i in the form of a vector.
> m[i,,drop=F]
gives row i in the form of a one-row matrix and not of a vector, and thus the row name can be retained.
> m[,c(2,2,1)]
gives the second, the second and the first columns: This is a three-column
matrix.
Selection by negative integers:
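For instance (negative integers remove the corresponding rows or columns):
> m[,-1]   # m without its first column
> m[-2,]   # m without its second row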
Selection by logicals:
The following instruction yields only the columns of m for which the value
on the first line is strictly greater than 2:
> m[,m[1,]>2]
[,1] [,2]
[1,] 3 4
[2,] 7 8
> m[m>2]
[1] 5 6 3 7 4 8
The following instruction replaces the values of m which are greater than
2 with NA:
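One way to write it, using logical indexing in an assignment:
> m[m>2] <- NA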
The following table gives the most useful functions for linear algebra.
Function        Description
X%*%Y           Product of matrices
t(X)            Transposition of a matrix
diag(5)         Identity matrix of order 5
diag(vec)       Diagonal matrix with the values of vector vec on the diagonal
crossprod(X,Y)  Cross product (t(X)%*%Y)
det(X)          Determinant of matrix X
svd(X)          Singular value decomposition
eigen(X)        Matrix diagonalisation
solve(X)        Matrix inversion
solve(A,b)      Solving linear systems
chol(Y)         Cholesky decomposition
qr(Y)           QR decomposition
Below is an example of how the functions eigen and solve are used:
> A <- matrix(1:4,ncol=2)
> B <- matrix(c(5,7,6,8),ncol=2)
> D <- A%*%t(B)
> D
[,1] [,2]
[1,] 23 31
[2,] 34 46
> eig <- eigen(D)
> eig
$values
[1] 68.9419802 0.0580198
$vectors
[,1] [,2]
[1,] -0.5593385 -0.8038173
[2,] -0.8289393 0.5948762
The eigen function yields the eigenvalues in the object values and the eigen-
vectors in the object vectors. To extract the first eigenvector, simply write
> eig$vectors[,1]
For example, to solve the following system of equations:
23 x + 31 y = 1
34 x + 46 y = 2
which is the same as writing Dz=V, with V=c(1,2), we use the solve function
> V <- c(1,2)
> solve(D,V)
[1] -4 3
The solution is therefore x = -4 and y = 3.
A matrix can also be created by concatenating vectors column by column using cbind (or row by row using rbind):
> cbind(c(1,2),c(3,4))
     [,1] [,2]
[1,]    1    3
[2,]    2    4
1.4.6 Factors
Factors are vectors which enable the user to manipulate qualitative data.
Length is determined by the length function, mode by mode and the categories
of the factor by levels. They form a class of objects and are treated differently
depending on the function, such as the plot function for graphs. Factors can be
non-ordinal (male, female) or ordinal (level of skiing ability). Three functions
can be used to create factors:
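For example, the factor somersault.f used below could have been built with the factor function (the data values are an assumption, consistent with the output which follows):
> somersault.f <- factor(rep(1:5,times=2))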
To know the levels, the number of levels and the number of observations per level of somersault.f, we use
> levels(somersault.f)
[1] "1" "2" "3" "4" "5"
> nlevels(somersault.f)
[1] 5
> table(somersault.f)
somersault.f
1 2 3 4 5
2 2 2 2 2
The table function can also be used to construct cross tabulations (Sec-
tion 2.6, p. 46).
To convert the factors into numerics, we use the following instructions:
> x <- factor(c(10,11,13))
> as.numeric(x)
[1] 1 2 3
> as.numeric(as.character(x))
[1] 10 11 13
Details of this conversion are given in Section 2.3.1 (p. 35).
1.4.7 Lists
A list is a heterogeneous object. It is a set of ranked objects which do not
always have the same mode or length. The objects are referred to as compo-
nents of the list. These components can be named. Lists have the same two
attributes as vectors (length and mode), as well as the additional attribute
names. Lists are important objects as all the functions which yield multiple
objects do so in the form of a list (see Section 1.5).
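As an illustration, the list mylist used in this section could be created as follows (its second component is purely an assumption):
> mylist <- list(vec=c(2,5,8), mat=matrix(1:4,ncol=2),
        sex=factor(c("M","M","F","M","F","M","M","M")))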
1.4.7.2 Extraction
To extract a component from the list, we can simply indicate the position of
the element that we want to extract. [[ ]] are used to identify the element
in the list:
> mylist[[3]]
[1] M M F M F M M M
Levels: F M
> mylist[[1]]
[1] 2 5 8
We can also use the name of the element, if it has one, which can be written
in two ways:
> mylist$sex
[1] M M F M F M M M
Levels: F M
> mylist[["sex"]]
[1] M M F M F M M M
Levels: F M
It is possible to extract multiple elements from the same list, thus creating a
sub-list. N.B. Here we use [ ] and not [[ ]]:
> mylist[c(1,3)]
$vec
[1] 2 5 8
$sex
[1] M M F M F M M M
Levels: F M
1.4.8 Data-Frames
Data-frames are special lists with components of the same length but with po-
tentially different modes. The data tables generally used in statistics are often
referred to as data-frames. Indeed, a data table is made up of quantitative
and/or qualitative variables taken from the same individuals.
The main ways of creating a data-frame are: the data.frame function, to concatenate vectors of the same size but with potentially different modes; the read.table function (see Section 2.1), to read a data table; and the as.data.frame function, for explicit conversion. For example, let us concatenate the numeric vector vec1 and the character vector vec2:
> vec1 <- 1:5
> vec2 <- c("a","b","c","c","b")
> df <- data.frame(name.var1 = vec1, name.var2 = vec2)
> df
name.var1 name.var2
1 1 a
2 2 b
3 3 c
4 4 c
5 5 b
In order to extract components from the data-frame, we can use the same
methods as those presented for lists or matrices.
Remarks
To convert a matrix to a data-frame, use the as.data.frame function.
1.5 Functions
Functions are R objects. A great number of functions are predefined in R, for
example, mean which calculates the mean, min which calculates the minimum,
etc. It is also possible to create your own functions (see Section 4.3). Functions
admit arguments as input and yield results as output.
It is not crucial to specify the name of the arguments, as long as the order
is respected. This order is defined when the function is created. The args
function yields the arguments of a function:
> args(rnorm)
function (n, mean = 0, sd = 1)
NULL
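For instance, both of the following calls draw ten values from a normal distribution with mean 0 and standard deviation 2:
> rnorm(10,0,2)      # positional arguments: n, mean, sd
> rnorm(10,sd=2)     # named argument; mean keeps its default value 0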
1.5.2 Output
Most of the functions yield multiple results and the entirety of these results is
contained within a list. To visualise all these outputs, you first need to know
all the elements. This information can be accessed using the names function.
Use [[j]] to extract element j from the list, or indeed $ if the elements in
the list are named (see Section 1.4.7, p. 20).
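A small sketch, reusing the eigen function seen earlier:
> res <- eigen(matrix(c(2,1,1,2),ncol=2))
> names(res)
[1] "values"  "vectors"
> res$values
[1] 3 1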
1.6 Packages
1.6.1 What Is a Package?
A package or library of external programs is simply a set of R programs which
supplements and enhances the functions of R. Packages are generally reserved
for specific methods or fields of application.
There are more than 3000 packages (as of June 2011). A certain number
of these are considered essential (MASS, rpart, etc.) and are provided with
R. Others are more recent statistical advances and can be downloaded freely
from the CRAN network. Furthermore, many more packages are available
within projects, such as BioConductor which contains packages dedicated to
processing genomic data.
To install a package, use the install.packages function:
> install.packages(dependencies=TRUE)
Then choose the nearest mirror to you and select the package to install, such
as the FactoMineR package. You then simply need to load it in order to use
it in an R session:
> library(FactoMineR)
It is also possible to download a package using one of the drop-down menus
in Windows (Packages → Install package(s)...) or Mac OS X (Packages
& Data → Package Installer). It is also possible to configure the use of a
proxy (help(download.file)) and the destinations where the packages will
be installed. All the packages installed on a given computer can be listed
using the command
> library()
1.7 Exercises
Exercise 1.1 (Creating Vectors)
1. Create the following vectors using the rep function:
vec1 = 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5
vec2 = 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
vec3 = 1 1 2 2 2 3 3 3 3 4 4 4 4 4
8. Create a vector vec7 from vec2 where the NA values are replaced by values
selected at random from vec2.
Exercise 1.3 (Creating, Manipulating and Inverting a Matrix)
1. Create the following matrix mat (with the column and row names):
5. Calculate the determinant and then invert the matrix using the appropriate
functions.
Exercise 1.4 (Selecting and Sorting in a Data-Frame)
1. From the iris dataset available in R (use data(iris) to load it and then
iris[1:5,] to visualise the first five rows), create a sub-dataset comprising
only the data for the category versicolor of the variable Species (call this
new dataset iris2).
3. Again using the apply function, calculate all the deciles for each of the
three variables using the argument probs of the quantile function.
2. Using the is.numeric and the lapply functions, create a Boolean vector of
length the number of columns of the data-frame Aids2 which takes the value
TRUE when the corresponding column of Aids2 is not numeric and FALSE when
it is numeric.
3. Select the qualitative variables of the Aids2 data-frame and assign the
result in a data-frame: Aids2.qual.
2. Select the rows of Aids2 where the sex is Male (M) and the state is not
Other. Put the result in a new data-frame: res.
3. Summarise the res dataset and check that a) the sex variable has no
female and b) the female level is still present.
4. Print the attributes of the sex variable (function attributes). Check that
a) there is a level attribute with F and M, and b) the class of the object is
factor.
5. Using as.character, transform the sex variable (of res) into an object
named sexc of type character. Check that sexc has no attributes.
6. Using as.factor, transform sexc into an object named sexf of type factor.
Check that the resulting object sexf has a) a level attribute, b) a class factor
and c) no more F level.
Conclusion: Transforming a factor into a vector of characters and transforming
back into a factor resets the levels.
7. Using Exercise 1.7, create a Boolean vector of length the number of columns
of res which takes the value TRUE for categorical variables and FALSE for
numeric variables.
8. Transform all factors of res into characters using lapply.
2
Preparing Data

In statistics, data is the starting point for any analysis. It is therefore essential to master operations such as importing and exporting data, changing types, identifying individuals with missing values, concatenating factor levels, etc. These different notions are therefore presented in this chapter, which aims to be both concise and satisfactory in practice.
2.1 Reading Data from File
The file name is given between quotation marks (" or ', but not the word-processing quotation marks found in Microsoft® Word). The sep option indicates that the character which separates the columns is here ",". For a space, use " ", and for a tabulation, "\t". The argument header indicates whether or not the first row contains the names of the variables. The argument dec specifies that the decimal point is a "." (here for the number 175.5). Finally, the argument row.names indicates that column 1 is not a variable but the row identifier: here the individuals' names.
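Such a reading could be written as follows (the file name mydata.csv is hypothetical):
> tab <- read.table("mydata.csv",sep=",",header=TRUE,dec=".",
        row.names=1)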
> summary(tab)
     height          weight           size      sex
 Min.   :175.5   Min.   :72.00   Min.   :7.00   F:1
 1st Qu.:176.8   1st Qu.:75.00   1st Qu.:7.75   M:3
 Median :178.0   Median :78.00   Median :8.25
 Mean   :179.2   Mean   :76.67   Mean   :8.25
 3rd Qu.:181.0   3rd Qu.:79.00   3rd Qu.:8.75
 Max.   :184.0   Max.   :80.00   Max.   :9.50
 NA's   :1       NA's   :1
Here, the variable types are correct: height, weight and shoe size are quan-
titative and sex is qualitative with two categories (F and M). If this is not
the case, it may be necessary to change the variable type (see Section 2.3.1).
Sometimes, if the reading does not function correctly, the error may stem from
the text file itself. Below is a list of some of the most common problems:
Column separator incorrectly specified
Decimal point incorrectly specified (the variables may be mistakenly con-
sidered qualitative)
sex M F M
age 18 17 22
This file was imported in the traditional manner, remembering that the first
column contains the names of the variables:
We transpose the table (the rows become the columns and vice versa):
> tab2 <- t(tab)
> summary(tab2)
sex age
F:1 17:1
M:2 18:1
22:1
The sex variable poses no problem, but the age variable is summarised as a
three-level factor. In fact, the result is no longer a data-frame but a character
matrix.
> is.data.frame(tab2)
[1] FALSE
> is.matrix(tab2)
[1] TRUE
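One way to recover a genuine data-frame is to convert the transposed matrix explicitly (a sketch; the row names may need resetting):
> as.data.frame(t(tab))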
sex age
1 M 18
2 F 17
3 M 22
The write.infile function from the FactoMineR package can be used to write
all the objects of a list in the same file, without having to specify each of
the objects in the list.
> Xfac
[1] 10 10 10 12 12 13 13 13 13
Levels: 10 12 13
> summary(Xfac)
10 12 13
3 2 4
Displaying a factor enables the user to distinguish it from a numeric, thanks
to the presence of different levels at the end of the display. This is also the
case for the summary yielded by summary.
Converting a factor to a numeric is a two-step process. We first convert
the factor to a character vector and then convert this vector to a numeric.
If we convert the factor directly to a numeric, the levels will be re-coded in
order (the first level will be 1, the second 2, etc.):
Let us start with the first option. We assign X a vector of fifteen random
numbers following a standard normal distribution:
> set.seed(654) ## set the random generator seed
> X <- rnorm(15)
> X
[1] -0.76031762 -0.38970450 1.68962523 -0.09423560 0.095301
[7] 1.06576755 0.93984563 0.74121222 -0.43531214 -0.107260
[13] -0.98260589 -0.82037099 -0.87143256
Let us divide this variable into three levels: between the minimum and -0.2, between -0.2 and 0.2, and finally between 0.2 and the maximum. The cut function can be used to conduct this step automatically. The classes are of the format (a_i, a_{i+1}]. In order to include the minimum of X in the first class, you must use the argument include.lowest=TRUE. The first class will therefore be of the form [a_1, a_2]:
> Xfac <- cut(X,breaks=c(min(X),-0.2,0.2,max(X)),
include.lowest=TRUE)
> Xfac
[1] [-0.983,-0.2] [-0.983,-0.2] (0.2,1.69] (-0.2,0.2]
[5] (-0.2,0.2] (0.2,1.69] (0.2,1.69] (0.2,1.69]
[9] (0.2,1.69] [-0.983,-0.2] (-0.2,0.2] [-0.983,-0.2]
[13] [-0.983,-0.2] [-0.983,-0.2] [-0.983,-0.2]
Levels: [-0.983,-0.2] (-0.2,0.2] (0.2,1.69]
> table(Xfac)
Xfac
[-0.983,-0.2] (-0.2,0.2] (0.2,1.69]
7 3 5
The result of the division is a factor. When we list observations by category
(with the table function), we obtain unequal class sizes. If we require similar
class sizes for each of the three categories, we use the quantile function. This
function is used to calculate empirical quantiles and thus gives the required
thresholds. The thresholds are as follows:
The minimum, which is the empirical quantile associated with the probability 0: P_n(X < min(X)) = 0
The first threshold, which is the empirical quantile associated with the probability 1/3: P_n(X ≤ threshold1) = 1/3
The second threshold, which is the empirical quantile associated with the probability 2/3: P_n(X ≤ threshold2) = 2/3
The maximum, which is the empirical quantile associated with the probability 1: P_n(X ≤ max(X)) = 1
> mybreaks <- quantile(X,probs=seq(0,1,length=4))
> Xfac <- cut(X,breaks=mybreaks,include.lowest=TRUE)
> table(Xfac)
Xfac
[-0.983,-0.544] (-0.544,0.311] (0.311,1.69]
5 5 5
How can multiple factor levels be merged if, for example, they have insufficient sample sizes?
We want to change the titles of the levels. To do this, simply assign a new
value to the levels of Xfac called using the levels function:
> levels(Xfac) <- c("lev1","lev2","lev3")
> Xfac
[1] lev1 lev2 lev3 lev2 lev2 lev3 lev3 lev3 lev3
[10] lev2 lev2 lev1 lev1 lev1 lev1
Levels: lev1 lev2 lev3
To merge two levels such as levels 1 and 3, simply rename the two levels with
the same name (i.e. repeat it):
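For example, reusing Xfac from above (a sketch; assigning a duplicated name merges the corresponding levels):
> levels(Xfac) <- c("lev1","lev2","lev1")
> Xfac
 [1] lev1 lev2 lev1 lev2 lev2 lev1 lev1 lev1 lev1 lev2 lev2 lev1 lev1 lev1 lev1
Levels: lev1 lev2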
The order in which the levels of a factor appear is not insignificant. For
ordinal factors, simply specify the order of the levels when the factor is created
(Section 1.4.6, p. 18). In theory this order is not important for factors whose
categories are not ordinal. However, in certain methods such as analyses of
variance (see Worked Example 8.1, p. 157), the first level is used as a base level,
and the others are all compared to it. In such cases, specifying the first level is thus essential. Let us consider, for example, a qualitative variable with
levels 1, 2 or 3, depending on whether the individual received the classic,
new or placebo treatment. The data is as follows:
> X <- c(1,1,2,2,2,3)
One solution is to convert the factor into a character vector and then to convert it back to a factor. The first conversion
removes the notion of levels, and the second returns it, taking into account
the data vector. As category B no longer exists, it disappears:
> myfactor <- as.character(myfactor)
> myfactor <- factor(myfactor)
> myfactor
[1] A A A C C C C
Levels: A C
Sometimes, when a category's sample size is too small, we break down the individuals into other categories at random. This process, known as ventilation (random allocation), is examined in Exercise 2.6.
Next, the rows of tab which correspond to the rows of miss with at least one
TRUE must be eliminated: here the rows 3, 4, 5 and 6. In order to do this,
for each row we need to know whether there is at least one TRUE. The any
function applied to a vector yields TRUE if there is at least one TRUE in the
vector. To apply this function to every row in the table, we use apply on the
rows (MARGIN=1):
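A sketch of these instructions (reusing miss, assumed to be is.na(tab)):
> rows.with.na <- apply(miss,MARGIN=1,FUN=any)
> tab2 <- tab[!rows.with.na,]    # keep only the complete rows
The positions of the missing values themselves can be listed with which and its arr.ind argument:
> which(is.na(tab),arr.ind=TRUE)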
[3,] 6 1
[4,] 4 2
[5,] 5 2
The first piece of missing data is in row 3 and column 1 ..., the fifth piece of
missing data is in row 5 and column 2.
Figure 2.1
Boxplot for a variable with outliers (denoted by arrows).
The two individuals above the higher whisker (see Figure 2.1) are often consid-
ered to be outliers. How can we determine their number (or other identifier)?
One solution is to use the result of the boxplot function. The component out of the output list gives us the outliers' values:
> results <- boxplot(kyphosis[,"Number"])
> outliers <- results$out
> outliers
[1] 9 10
The outlier values for the variable Number are therefore 9 and 10. We still need to know the identifiers of the individuals in question. In order to do this, we search for the individuals whose value corresponds to one of the values contained in outliers.
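One way to do so (a sketch; the kyphosis data come from the rpart package):
> library(rpart)
> which(kyphosis[,"Number"] %in% outliers)
This yields the row numbers of the individuals whose value of Number is one of the outliers.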
Figure 2.2
Row concatenation: rbind(X,Y).
Figure 2.3
Column concatenation: cbind(X,Y).
X and Y must therefore have the same number of columns. However, the logic
is a little different, depending on whether we concatenate two matrices or two
data-frames. This question is illustrated in the example below.
Let us first construct a matrix with its row identifiers and variable names
X1 and X2:
> X <- matrix(c(1,2,3,4),2,2)
> rownames(X) <- paste("row",1:2,sep="")
> colnames(X) <- paste("X",1:2,sep="")
> X
X1 X2
row1 1 3
row2 2 4
We can see (on the third line) that the order of the variables is inconsistent.
In this case, the variables are re-classified prior to concatenation.
Concatenation by column is done in the same way as for the matrices or
data-frames. The cbind function does not check the row names. The names
from the first table are retained:
Figure 2.4
Merging by key: merge(X,Y,by="key").
Let us construct two data tables, one with more rows than the other. The
first table merges (column concatenation) a continuous variable (age) and two
qualitative variables in a data-frame. The cbind.data.frame function can be
used to specify that the result must be a data-frame:
2.6 Cross-Tabulation
When two qualitative variables are analysed, we can present the data in two
ways: a classical table where an individual (a row) is described by the variables
(two variables), or a cross-tabulation (also known as a contingency table),
which gives the sample size for each confrontation of categories. Of course,
this can be generalised to more than two qualitative variables.
In order to get a clear idea of both types of table, let us first construct
two qualitative variables (wool and tension) collected from ten individuals.
The variable wool corresponds to the following three types of wool: Angora,
Merinos and Texel. The variable tension indicates the value Low or High for the wool's tension resistance.
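For example, such a table could be constructed as follows (the individual values are an assumption, chosen to be consistent with the cross-tabulation below):
> wool <- factor(c("Ang","Ang","Ang","Mer","Mer","Mer",
        "Tex","Tex","Tex","Tex"))
> tension <- factor(c("High","Low","Low","Low","Low","Low",
        "High","High","High","High"))
> tab <- data.frame(wool,tension)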
> table(tab$tension,tab$wool)
Ang Mer Tex
High 1 0 4
Low 2 3 0
Another method is to use the column names from the data-frame and to
conduct the confrontation using a formula:
> crosstab <- xtabs(~tension+wool,data=tab)
> crosstab
wool
tension Ang Mer Tex
High 1 0 4
Low 2 3 0
Many functions in R are based on the hypothesis of an individuals × variables table, but the opposite operation must also be made possible for situations in which the data are contained within a cross-tabulation. However, an individuals × variables table is not obtained immediately:
> tabframe <- as.data.frame(crosstab)
> tabframe
tension wool Freq
1 High Ang 1
2 Low Ang 2
3 High Mer 0
4 Low Mer 3
5 High Tex 4
6 Low Tex 0
We obtain the frequencies for each combination rather than a row for each
individual. To reconstruct the initial table, see Exercise 2.8.
2.7 Exercises
Exercise 2.1 (Robust Reading of Data)
In some cases, reading data fails and it can be difficult to understand why. In
such cases, here is a more robust procedure which can be used to identify the
problem.
1. Read the file mydata.csv in a character vector myvector using the scan
function (in a complex reading, this step can also be used to check the column
separator).
2. Change the decimal separator from "," to "." (function gsub) in myvector.
It must be noted that this step can be conducted more simply using a spread-
sheet or word processor.
3. Construct a character matrix mymatrix of four rows and five columns which
contains the data and the names of the individuals and variables.
The data presents a sample of friends with the age, gender (coded 0 for male and 1 for female) and the first date of skiing (year-month-day).
2. Read the file again with the option colClasses to specify that the gender
should be considered as a factor and the first time skiing as a date.
3. Create the variable yres1, which is the difference between yhat1 and
Rhamnos, and create the variable yres2, which is the difference between yhat3
and Arabinos. Add these variables to the data-frame.
Exercise 2.6 (Ventilation (allocation at random))
In the presence of the following qualitative variable,
> Xfac <- factor(c(rep("A",60),rep("B",20),rep("C",17),
rep("D",3)))
2. Display the names of the categories with sample sizes smaller than 5% of
the total sample size.
3. Calculate the frequencies of each category without the category or cate-
gories of the previous question. The result will be placed in the freq vector.
4. Select the individuals which take the category or categories of the Exer-
cise 2.6.2. Allocate them a value selected from the remaining categories, ac-
cording to a draw for which the probabilities are calculated in Exercise 2.6.3
(use the sample function). This process is known as ventilation2 .
Right in the face! This man is crazy! And I treat madmen every day; I'll write him a prescription, and a strong one! I'll show him who's boss. We'll find bits of him all around Paris like pieces of a jigsaw puzzle... When it goes too far, I don't correct people any more, I destroy them... Spread them out, ventilate them.... Bernard Blier in Crooks in Clover, by Georges Lautner (1963), script by Michel Audiard.
2. From this table, create a character matrix tabmat with three columns and
as many rows as confrontations of categories (or cells in the contingency table).
This character matrix will be filled row by row with tension (rows from the
previous table), wool type (columns from the previous table), and the sample
size for the confrontation of categories. In order to do so, we use the functions
matrix and rep.
3. Convert the character matrix from the previous question to a data-frame;
name it tabframe, for example, and check the variable types. Using this data-frame, assign the total number of individuals to n (sum) and the number of qualitative variables to nbfac (ncol).
4. Create a counter iter set at 1 and a character matrix tabcomplete filled with the character "" and of the same size as the final dataset.
5. Make a loop around the number of rows of tabmat (see Section 4.1.2,
p. 87). Each row i corresponds to a confrontation of categories (or strata). If
the number of individuals with this confrontation of categories i is not null,
then repeat in as many rows as necessary in tabcomplete, the confrontation
of category i (by distributing the categories in the corresponding columns).
It is possible to use a loop, the iter counter, the tabmat matrix and the
tabframe data-frame.
The result tabcomplete will be the same as the initial table.
3
R Graphics
Graphs are often the starting point for statistical analysis. One of the main
advantages of R is how easy it is for the user to create many different kinds of
graphs. We begin this chapter by studying conventional graphs, followed by
an examination of some more complex representations. This final part, using
the lattice package, can be omitted if using R for the first time.
The variables T12, maxO3 and Wx12 are continuous quantitative variables (nu-
merics), whereas wind and rain are factors.
Figure 3.1
Scatterplot for the function x ↦ sin(2πx).
Figure 3.2
Scatterplot (T12, maxO3).
As the two variables are contained and named within the same table, a simpler
syntax can be used, which automatically inserts the variables as labels for the
axes (Figure 3.3):
> plot(maxO3~T12,data=ozone)
Figure 3.3
Scatterplot (T12, maxO3) with explicit axis labels.
This graphical representation could also have been obtained using the com-
mand:
> plot(ozone[,"maxO3"]~ozone[,"T12"],xlab="T12",ylab="maxO3")
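The boxplots of Figure 3.4 are obtained by applying plot to a formula with a quantitative response and a factor, presumably:
> plot(maxO3~wind,data=ozone)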
Figure 3.4
Boxplots of maxO3 according to wind.
In this case, the plot function yields a boxplot for each category of the variable
wind (Figure 3.4). In this graph we can see if the variable wind has an
effect on the ozone. Here, the easterly wind seems to be associated with high
concentrations of ozone. This graph can also be obtained using the boxplot
function:
> boxplot(maxO3~wind,data=ozone)
Given the nature of the variables, the plot function chose to use the boxplot
function as it considers it the most relevant.
We can also represent two qualitative variables using a bar chart. For
example,
> plot(rain~wind,data=ozone)
can be used to obtain, for each category of the explanatory factor (here wind),
the relative frequencies for each category of the response factor (here rain).
The width of the bar is proportional to the frequency of the category of the
R Graphics 55
explanatory factor (here wind). This graph can also be obtained using the
spineplot function.
Figure 3.5
Bar chart for rain according to wind.
In Figure 3.5, we find that the weather type rain is proportionally greater
when the wind blows from the west (which indeed seems to be the case in
Rennes). Furthermore, as this is the widest bar, we can also conclude that
this is the most common wind direction in Rennes.
Finally, it is possible to represent a qualitative variable according to a
quantitative variable, here wind according to T12:
> plot(wind~T12,data=ozone)
Figure 3.6
Bar chart for wind according to T12.
We here obtain a bar chart (Figure 3.6). On the abscissa, the quantitative
variable is divided into classes in the same default manner as a histogram
(hist). In each of these classes, the relative frequencies of each category of
wind are calculated, giving the height of each bar. The colour corresponds to
the category. This graph can also be obtained automatically using
> spineplot(wind~T12,data=ozone)
This bar chart is difficult to interpret. It is possible to return to the first
formulation of the plot function to represent the scatterplot (T12,wind); see
Figure 3.7:
> plot(ozone[,"T12"],ozone[,"wind"],yaxt="n",
xlab="T12",ylab="wind",pch="+")
> axis(side=2,at=1:4,labels=levels(ozone[,"wind"]))
It must here be noted that the graph must be reworked: The argument
yaxt="n" deletes the scale from the y axis, which here takes the different
categories of the qualitative variable converted to numerics by default: 1 to
4. The axis function is therefore used to rename the scale.
Figure 3.7
Graph of the qualitative variable wind according to the quantitative variable T12.
This type of graph is interesting when there are only a few measurements
for each category. If not, the data should be represented using a boxplot (see
Figure 3.4).
Even if, intrinsically, the plot function expects the two input arguments
x and y for the abscissa and ordinate, respectively, it is possible to specify
only one argument. For example, for one quantitative variable, R draws this
variable sequentially with the observation number on the abscissa axis (see
Figure 3.8).
> plot(ozone[,"maxO3"],xlab="num.",ylab="maxO3",cex=.5,pch=16)
Figure 3.8
Scatterplot for ozone peaks (maxO3) according to index.
This command yields an image where, by default, the abscissa axis carries the ordinal sequence 1, ..., n, known as the index (where n is the number of observations). The size of the symbol can be controlled using the argument cex
which manages the increase (or decrease) factor for size (by default cex=1).
The argument pch is used to specify the shape of the points. This argument
accepts numerical values between 0 and 25 (see Figure 3.9) or a character
directly rewritten on the screen.
Figure 3.9
Symbols obtained for each value (0 to 25) of the argument pch.
It is also possible to modify the type of line drawn using the argument
type: "p" to only draw the points (default option), "l" (line) to link them,
"b" or "o" to do both. The argument type can also be used to draw vertical
58 R for Statistics
bars ("h", high-density) or steps after the points ("s", step) or before ("S",
step). A graphical illustration of these arguments is given on Figure 3.10.
Figure 3.10
Argument type for the values "p", "l", "b", "h", "s", "S".
The evolution of ozone maxima can be obtained using the argument type="l"
(Figure 3.11):
> plot(ozone[,"maxO3"],xlab="num.",ylab="maxO3",type="l")
Figure 3.11
Evolution of ozone peaks (maxO3) during the summer of 2001.
The histogram for the variable maxO3 (Figure 3.12) estimates the density if we specify the argument prob=TRUE of the hist function:
Figure 3.12
Histogram for ozone.
A kernel density estimate of the variable maxO3 can be obtained using the
density function. The graphical representation (Figure 3.13) is obtained as
follows:
Figure 3.13
Kernel density estimate for ozone.
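The bar chart of Figure 3.14 can be obtained, for instance, by applying barplot to the table of counts:
> barplot(table(ozone[,"wind"]))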
Figure 3.14
Bar chart for a qualitative variable.
It is also possible to use the plot function directly on the vector of the
categories:
> plot(ozone[,"wind"])
Another method, but which is not recommended, is circular representation,
or a pie chart, which can be obtained using the pie function.
> plot(maxO3~T12,data=ozone,type="n")
> text(ozone[,"T12"],ozone[,"maxO3"],substr(rownames(ozone),5,8),
cex=.75)
Figure 3.15
Text positioned via the points' coordinates.
The subsequent graph (Figure 3.15) does not show the exact position of
the name. It is therefore recommended to plot crosses on the graph and then
to add text above these crosses. We thus draw these crosses on the coordinates
(plot function), and then add the text on top of that (pos=3) offset by 0.3
(offset):
> plot(maxO3~T12,data=ozone,type="p",pch=3,cex=.75)
> text(ozone[,"T12"],ozone[,"maxO3"],
substr(rownames(ozone),5,8),cex=.75,pos=3,offset=.3)
Figure 3.16
Scatterplot with the rainy days in grey.
From this graph we can see that it does not rain when the temperature exceeds
27 degrees. This limit can be marked using the abline function and adding
the command (Figure 3.17):
> abline(v=27,lty=2)
Figure 3.17
Use of the abline function.
We can also add a horizontal line using abline(h=...) or any type of straight line by specifying the intercept and the slope using abline(c(intercept,slope)).
The lty (line type) setting is used to control the type of line (1: continuous
line, 2: dashed, 3: dotted, etc.).
As its name indicates, the abline function adds a straight line onto the
existing graph. If we want to add two broken lines to compare, for example,
the evolution of maximum ozone levels over two different weeks, we use the
lines function. Let us compare the evolution of ozone levels during the first
two weeks:
> plot(ozone[1:7,"maxO3"],type="l")
> lines(ozone[8:14,"maxO3"],col="grey50") # add lines (2nd week)
Figure 3.18
Adding a grey line using lines.
The graph (Figure 3.18) does not show the sixth and seventh observations for
this second week. Indeed, to draw a line on the graph, R needs to know the minimum and maximum of each axis in order to set up the axes and their scales. Here, the minimum for the second line is
less than that of the first. The graph is scaled using the plot function. When
called upon, this function is only aware of the information concerning the
plot instruction. Scaling is not automatic following a lines (or points) order.
It is therefore important, right from the plot order, to specify the minimum
and maximum for both weeks. Simply give the ylim argument the results
of the range function. The latter yields the minimum and maximum. This
argument tells the plot function between which ordinates the graph is situated
(Figure 3.19).
> rangey <- range(ozone[1:7,"maxO3"],ozone[8:14,"maxO3"])
> plot(ozone[1:7,"maxO3"],type="l",ylim=rangey,lty=1)
> lines(ozone[8:14,"maxO3"],col="grey50",lty=1)
Figure 3.19
Adding a grey line using lines and scaling the ordinates using the ylim
argument.
Of course, if the abscissas are the same, you do not need to specify the
minimum and maximum on the abscissa.
Figure 3.20
Example of use of the persp function.
> library(lattice)
> cloud(maxO3~T12+Wx12,type=c("p","h"),data=ozone)
Here, the lattice package enables the user to pass two values to the type
argument, which is not possible with conventional graphs.
Figure 3.21
3D graph characterised by measurements of the quantitative variables maxO3,
T12 and Wx12.
> pdf("graphik.pdf")
> rangey <- range(ozone[1:7,"maxO3"],ozone[8:14,"maxO3"])
> plot(ozone[1:7,"maxO3"],type="l",ylim=rangey,lty=1)
> lines(ozone[8:14,"maxO3"],col="grey50",lty=1)
> dev.off()
The dev.off function closes the device and finalises the file graphik.pdf,
which will be found in the current working directory (see Section 1.2.1, 1.2.2
or 1.2.3). It is of course possible to specify the destination path using
pdf("mypath/graphik.pdf").
There are a number of different options when exporting graphs (paper size,
choice of fonts, etc.). These options all depend on the export format chosen
and can be consulted in the help section. Not all the devices are necessarily
available: they depend on the platform used, the compilation options and the
packages installed.
If we want to modify a graph (move a legend, text, increase the size of
a given element, etc.), vector graphics software is required. The graph must
therefore be exported (format svg, emf or xfig, for example) and then altered
using the appropriate software. It must be noted that inkscape and oodraw
(from openoffice) are freely available open source software which import graphs
in svg format. In Mac OS X and Unix/Linux, xfig is another alternative to
these two pieces of software.
> par(mfrow=c(1,2))
> plot(1:10,10:1,pch=0)
> plot(rep(1,4),type="l")
Figure 3.22
Graphs arranged in 1 row 2 columns using par(mfrow=c(1,2)).
To come back to one graph per window, simply kill the graph window or use
the command par(mfrow=c(1,1)).
Sometimes, we need to use graphs of different sizes. In such cases, we
use the layout function: this function divides the graph window into areas
before distributing the graphs among them. As its argument, it admits an
nrow × ncol matrix, which creates nrow × ncol individual tiles. The values of
the matrix correspond to the numbers of the graphs featured in each
tile. For example, to set out three graphs on two rows, as in Figure 3.23, we
first create the following matrix:
1 1
2 3
> mat <- matrix(c(1,1,2,3), nrow=2, ncol=2, byrow = TRUE)
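The commands that follow are not reproduced in this extract; a plausible
sketch, consistent with the axis labels visible in Figure 3.24, is:
> layout(mat)               # divide the window according to mat
> plot(1:10)                # graph 1, spanning the top row
> plot(rep(1,4),type="l")   # graph 2, bottom left
> plot(c(2,3,1,0))          # graph 3, bottom right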
Figure 3.23
Positioning three graphs on two rows using layout.
Figure 3.24
Three graphs on two rows using layout.
It is also possible to specify the height and width of the columns (see help for
layout and Exercise 3.8). It must be noted that the graphs might be (too) far
apart, but this can easily be resolved (see Exercise 3.8).
The first graph is thus drawn. If no graph window was open prior to the
creation of the first graph, it will be created automatically. A second window
is then opened, thus rendering the first inactive. The second graph is drawn
in this new window. Subsequent graphs will be drawn in the active window.
> plot(x,y,type="p",pch=0,col="red")
> par(fg="blue",bg="#f2a1c2")
To add a title:
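For instance, a minimal sketch using the title function (the title text here
is our own):
> title(main="My Title")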
To control the appearance of the axes or to delete them (and the corre-
sponding legend):
> plot(c(10,11),c(12,11),type="p",axes=FALSE,xlab="",ylab="")
> plot(x,y,type="p",asp=1)
To add a legend:
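For instance, a minimal sketch using the legend function (the position, labels
and styles are our own choices):
> legend("topright",legend=c("week 1","week 2"),col=c("black","grey50"),lty=1)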
Argument Description
adj Controls the position of the text in relation to the left
edge of the text: 0 to the left, 0.5 centred, 1 to the
right; if two values are given c(x,y), the text will be
horizontally and vertically justified
asp Specifies the ratio between y and x: asp=1 constructs
orthonormal graphs
axes By default TRUE, the axes and the frame are shown
bg Specifies the background colour 1, 2, or a colour chosen
from colors()
bty Controls the frame line, permitted values: "o", "l", "7",
"c", "u" or "]" (with the shape of the frame resembling that
of the character); bty="n" deletes the frame
cex Controls the size of the characters and symbols compared
to the default value which is 1
cex.axis Controls the size of the characters on the scales of the
axes
cex.lab Controls the size of the characters for the axis labels
cex.main Controls the size of the characters of the title
cex.sub Controls the size of the characters in the caption
col Specifies the colour of the graph: possible values 1, 2, or
a colour chosen from colors()
col.axis Specifies the colour of the axes
col.main Specifies the colour of the title
font Controls the style of the text (1: normal, 2: italics, 3:
bold, 4: bold italics)
font.axis Controls the style for the scales
font.lab Controls the style for the axis labels
font.main Controls the style of the title
font.sub Controls the style of the caption
las Controls the distribution of annotations on the axes (0,
default value: parallel to the axes, 1: horizontal, 2: per-
pendicular to the axes, 3: vertical)
lty Controls the type of line drawn (1: continuous, 2:
dashed, 3: dotted, 4: alternately dotted and dashed,
5: long dashes, 6: alternately long and short dashes), or
write in words solid, dashed, dotted, dotdash,
longdash, twodash or blank (to write nothing)
lwd Controls the thickness of the lines
main Specifies the title of the graph, for example main="My
Title"
mfrow Vector c(nr,nc) which divides the graph window into
nr rows and nc columns; the graphs are then drawn in a
line
offset Specifies the position of the text with respect to the point
position (text function)
pch Integer (between 0 and 25) which controls the type of
symbol (or potentially any character in speech marks)
pos Specifies the position of the text; permitted values 1, 2,
3 and 4 (text function)
ps Integer controlling the size of the text and symbols in
points
sub Specifies the caption of the graph, for example sub="My
Caption"
tck, tcl Specifies the distances between graduations on the axes
type Specifies the type of graph drawn: permitted values
"n","p","l","b,"h","s","S"
xlim, ylim Specifies the limits of the axes, for example
xlim=c(0,10)
xlab, ylab Specifies the annotations on the axes, for example
xlab="Abscissa"
Function Description
barplot(x) Draws a bar chart for the values of x
boxplot(x) Draws a boxplot for x
contour(x,y,z) Draws contour lines, see also
filled.contour(x,y,z)
coplot(x~y|f1) Draws the bivariate plot of (x, y) for each value
of f1 (or for each small interval for the values of
f1)
filled.contour(x,y,z) Draws contour lines, but the areas between the
contours are coloured, see also image(x,y,z)
hist(x,prob=TRUE) Draws a histogram for x
image(x,y,z) Draws rectangles at the coordinates x, y coloured
according to z, see also contour(x,y,z)
interaction.plot(f1,f2,x,fun=mean) Draws the graph for the averages of x
according to the values of f1 (on the abscissa axis) and f2
(one broken line per category of f2)
matplot(x,y) Draws the bivariate plot for the first column of x
confronted with the first column of y, the second
column of x with the second column of y, etc.
pairs(x) If x is a matrix or data-frame, draws all the bi-
variate plots between the columns of x
persp(x,y,z) Draws a response surface in 3D, see demo(persp)
pie(x) Draws a pie chart
plot(object) Draws a graph corresponding to the class of
object
plot(x,y) Draws y against x
qqnorm(x) Draws the quantiles of x in relation to those ex-
pected from a normal distribution
qqplot(x,y) Draws the quantiles of y in relation to those of x
spineplot(f1,f2) Draws the bar chart corresponding to f1 and f2
stripplot(x) Draws the graph for the values of x on a line
sunflowerplot(x,y) Same as plot(x,y), but overlapping points are
drawn in the shape of flowers with the number of petals
corresponding to the number of overlapping points
symbols(x,y,...) For the coordinates x and y, draws different sym-
bols (stars, circles, boxplots, etc.)
provides information about the location of the trench. The most natural way
of representing this is to feature the longitude on the abscissa, the latitude
on the ordinate, and to plot one point for each earthquake. As it is probable
that the location will affect the depth of the earthquake, we have chosen to
represent a symbol with a colour which darkens as the depth increases. To do
this, we divide the depth variable into six classes (see Section 2.3.2, p. 35):
> library(lattice)
> data(quakes)
> breakpts <- quantile(quakes$depth,probs=seq(0,1,length=7))
> class <- cut(quakes$depth,breaks=breakpts,include.lowest=TRUE)
> levels(class) <- c("grey80","grey65","grey50","grey35",
"grey10","black")
> plot(lat ~ long,data=quakes,col=as.character(class),pch=16)
Figure 3.25
Latitude and longitude of earthquakes near Fiji.
Earthquakes therefore seem to be distributed according to two planes (see
Figure 3.25). We can represent them according to six depth levels using the
functions xyplot and equal.count. The sample sizes for each level are equal
and the intervals overlap: 5% of the data are common to two consecutive
intervals (overlap=0.05).
> Depth <- equal.count(quakes$depth,number = 6,overlap=0.05)
> xyplot(lat ~ long | Depth, data = quakes,pch="+",
xlab = "Longitude",ylab = "Latitude")
The depth interval is given in the strip above each panel of the overall graph
(Figure 3.26).
Figure 3.26
Latitude and longitude of earthquakes depending on depth.
The abscissa and ordinate axes are identical from one panel to the next.
A panel is a graph of one variable (ordinate) in relation to another (ab-
scissa) conditioned by one (or multiple) other variable(s). The conditional
value appears in the strip for each panel.
Lattice graphs are similar to conventional graphs, but they can additionally
be conditioned on grouping variables. Table 3.2 illustrates the correspondence
between conventional and lattice graphs.
Conditioning relies on a qualitative variable or on the way a continuous
variable is partitioned. In the latter case, the resulting object is referred to
as shingles. This object is created using the equal.count function and can
be represented graphically (Figure 3.27) using the generic plot function:
> plot(Depth)
TABLE 3.2
Correspondence between Conventional and Lattice Graphs
Lattice Description Conventional
barchart Bar chart barplot
bwplot Boxplot boxplot
densityplot() Estimation of density plot(density())
histogram Histogram hist
qqmath Normal QQ plot qqnorm
qq Quantiles - quantiles (2 samples) qqplot
xyplot Scatterplot plot
levelplot Greyscale (3D) image
contourplot Contour lines (3D) contour
cloud Scatterplot (3D) none
wireframe Perspective (3D) persp
splom Matrix of scatterplots pairs
parallel Parallel coordinate plot none
Figure 3.27
Shingles of earthquake depth.
Formulae of the type
yaxis~xaxis|factor
yaxis~xaxis|factor1*factor2
yaxis~xaxis|shingles*factor
can be used to plot the points for which the coordinates are contained within
xaxis and yaxis, conditioned according to (a) the categories of factor,
(b) the combinations of categories of factor1 and factor2, or (c) the
intervals of shingles crossed with the categories of factor.
Figure 3.28
Crime rates and population size by region.
Figure 3.29
Crime rates and population size by region and class of income.
Figure 3.30
Crime rates and population size by class of income using postscript theme
(see Section 3.2.4, p. 81).
The mypanel function may seem strange, but the ... are indeed part of the
function definition: they are not arguments that need to be detailed.
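The definition of mypanel is not reproduced in this extract; a minimal sketch
of such a panel function, assuming a data-frame crime with the variables
Murder, Population and income, could be:
> mypanel <- function(x, y, ...) {
+   panel.xyplot(x, y, ...)   # draw the points, passing the ... arguments on
+   panel.loess(x, y)         # add a smoothed curve
+ }
> xyplot(Murder~Population|income, data=crime, panel=mypanel)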
Figure 3.31
Crime rates according to population size by class of income: smoothing and
data representation.
Figure 3.32
Crime rates according to population size by class of income: smoothing and
data representation.
3.2.4 Exportation
To export lattice graphs, we use the same procedure as for conventional
graphs (see Section 3.1.5, p. 66). It is sometimes necessary to export a graph
in black and white. In such cases, we recommend that you replace the colours
with different symbols. This tedious step is actually managed automatically
using the trellis.device function. This function is used to manage the themes.
To export a black and white lattice graph in pdf format, simply use
> trellis.device(device="pdf",theme = standard.theme("postscript"),
file="export.pdf")
> my lattice expression(s)
> dev.off()
3.3 Exercises
Exercise 3.1 (Draw a function)
1. Draw the sine function between 0 and 2π (use pi).
2. Add the following title (title): plot of sine function.
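A possible solution sketch for this exercise:
> curve(sin(x), from=0, to=2*pi)
> title("plot of sine function")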
2. Plot the scatterplot maxO3 according to T12, with lines connecting the
points.
Figure 3.33
Scatterplot maxO3 according to T12.
2. Create a qualitative variable thirty with a value of 1 for the first year
(1749), increasing by 1 every thirty years. In order to do so, round down the
result of the division by 30 (floor, or use integer division).
3. Enter the colour vector which contains the following colours: green, yel-
low, magenta, orange, cyan, grey, red, green and blue. Automatically check
that these colours are indeed contained within the colors() vector (instruc-
tions %in% and all).
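A possible solution sketch for this question:
> colour <- c("green","yellow","magenta","orange","cyan","grey",
    "red","green","blue")
> all(colour %in% colors())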
4. Draw the time series as seen in Figure 3.34. Use the palette function, plot,
lines and a loop (see also unique).
Figure 3.34
Sunspots according to observation number.
Figure 3.35
Density of a normal distribution.
Figure 3.36
Three graphs on two rows.
2. Reproduce the same graph as above, but where the width of the graph at the
bottom right is a quarter of that of the graph at the bottom left (see the
arguments of layout).
4. Use the lattice package to draw the scatterplot maxO3 according to T12,
with lines connecting the points (type "a"). Compare this line to that in
Figure 3.33 (Exercise 3.3, p. 83). What changed using the type="a" option
specific to the lattice package?
2. Concatenate the table according to the rows so that the table appears six
times.
3. Construct an ordinal factor for which the observations are 3 times "h", 3
times "b", 3 times "p", 3 times "l", 3 times "s" and 3 times "S". The order
of levels will be "p", "l", "b", "h", "s" and "S".
4. Add this ordinal factor to the data of question 2.
5. Reproduce the graph in Figure 3.10 (p. 58) with xyplot and the appropriate
panel function (see subscripts in the help section of xyplot).
6. Reproduce the same graph but instead of using an ordinal factor, use a
non-ordinal (classical) factor and observe the difference.
4
Making Programs with R
Programming in R is based on the same principles as other software for
scientific computation. It therefore uses familiar programming structures
(loops, if/else conditions, etc.) as well as predefined functions specific to
statistical practice.
The i index takes as its values all the coordinates of the chosen vector. If we
want to display only odd integers, simply create a vector which starts at 1
and goes up to 99, at intervals of two. The seq function is used to create just
such a vector. The loop becomes
> for (i in seq(1,99,by=2)) print(i)
This method can easily be generalised to any vector. Thus, if we choose a
character vector which represents the first three days of the week, we obtain
> vector <- c("Monday","Tuesday","Wednesday")
> for (i in vector) print(i)
[1] "Monday"
[1] "Tuesday"
[1] "Wednesday"
In general there are multiple orders to be executed for each iteration. To
do so, the commands must be grouped together. Generally speaking, the for
loop is expressed:
> for (i in vector) {
+ expr1
+ expr2
+ ...
+ }
Another looping construct is the while loop. Its general syntax is
as follows:
> while (condition) {
+ expr1
+ expr2
+ ...
+ }
The commands expr1, expr2, etc. are executed as long as the condition is true;
the condition is evaluated at the beginning of each iteration. As soon as the
condition is found to be false, the loop is stopped. Thus,
> i <- 1
> while (i<3) {
+ print(i)
+ i <- i+1
+ }
[1] 1
[1] 2
is used to display i and to increase it by 1 as long as i is less than 3.
One final possibility for a loop is the repeat command, which is understood
as: repeat the commands indefinitely. To ensure the loop stops, we use the
break statement, which can be used in any loop. An example is given below.
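A minimal sketch of a repeat loop, equivalent to the while loop above:
> i <- 1
> repeat {
+   print(i)
+   i <- i+1
+   if (i>=3) break   # stop the loop
+ }
[1] 1
[1] 2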
> if (condition) {
+ expr1
+ expr2
+ ...
+ } else {
+ expr3
+ expr4
+ ...
+ }
Be aware that the else keyword must be on the same line as the closing
bracket } of the if clause (as above).
> set.seed(1234)
> Y <- array(sample(24),dim=c(4,3,2))
> Y
, , 1
[,1] [,2] [,3]
[1,] 3 18 11
[2,] 15 13 8
[3,] 14 1 10
[4,] 22 4 23
, , 2
[,1] [,2] [,3]
[1,] 17 21 5
[2,] 16 2 20
[3,] 24 7 9
[4,] 19 6 12
> apply(Y,MARGIN=c(1,2),FUN=sum,na.rm=TRUE)
[,1] [,2] [,3]
[1,] 20 39 16
[2,] 31 15 28
[3,] 38 8 19
[4,] 41 10 35
Here we have used the R functions mean and sum, but we could just as easily
have used functions programmed in advance. For example, a function of our
own can be passed to the FUN argument.
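A minimal sketch (myrange is a helper of our own, not from the book):
> myrange <- function(x) max(x)-min(x)   # spread of the values
> apply(Y,MARGIN=c(1,2),FUN=myrange)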
Many functions are based on the same principle as the apply function. For
example, the tapply function applies the same function, not to the margins
of the table, but at each level of a factor or combination of factors:
> Z <- 1:5
> Z
[1] 1 2 3 4 5
> vec1 <- c(rep("A1",2),rep("A2",2),rep("A3",1))
> vec1
[1] "A1" "A1" "A2" "A2" "A3"
> vec2 <- c(rep("B1",3),rep("B2",2))
> vec2
[1] "B1" "B1" "B1" "B2" "B2"
> tapply(Z,vec1,sum)
A1 A2 A3
3 7 5
> tapply(Z,list(vec1,vec2),sum)
B1 B2
A1 3 NA
A2 3 4
A3 NA 5
lapply, or its equivalent sapply, applies the same function to every element
of a list. The difference between these two functions is as follows: by
default, the lapply function yields a list, whereas the sapply function yields
a matrix or a vector. Let us therefore create a list containing two matrices
and then calculate the mean of each element of the list. Here are the two
matrices:
> set.seed(545)
> mat1 <- matrix(sample(12),ncol=4)
> mat1
[,1] [,2] [,3] [,4]
[1,] 9 4 1 11
[2,] 10 2 3 6
[3,] 7 5 8 12
> mat2 <- matrix(sample(4),ncol=2)
> mat2
[,1] [,2]
[1,] 4 3
[2,] 2 1
> mylist <- list(matrix1=mat1,matrix2=mat2)
> lapply(mylist,mean)
$matrix1
[1] 6.5
$matrix2
[1] 2.5
It is even possible to calculate the sum by column of each element of the list
by using the apply function as the FUN argument of the lapply function:
> lapply(mylist,apply,2,sum,na.rm=T)
$matrix1
[1] 26 11 12 29
$matrix2
[1] 6 4
The aggregate function works on data-frames. It separates the data into
sub-groups defined by a vector, and calculates a statistic for all the variables
of the data-frame for each sub-group. Let us reexamine the data which we
generated earlier and create a data-frame with two variables Z and T:
> Z <- 1:5
> T <- 5:1
> vec1 <- c(rep("A1",2),rep("A2",2),rep("A3",1))
> vec2 <- c(rep("B1",3),rep("B2",2))
> df <- data.frame(Z,T,vec1,vec2)
> df
Z T vec1 vec2
1 1 5 A1 B1
2 2 4 A1 B1
3 3 3 A2 B1
4 4 2 A2 B2
5 5 1 A3 B2
> aggregate(df[,1:2],list(FactorA=vec1),sum)
FactorA Z T
1 A1 3 9
2 A2 7 5
3 A3 5 1
Sub-groups can also be defined by a vector generated by two factors:
> aggregate(df[,1:2],list(FactorA=vec1,FactorB=vec2),sum)
FactorA FactorB Z T
1 A1 B1 3 9
2 A2 B1 3 3
3 A2 B2 4 2
4 A3 B2 5 1
The sweep function is used to apply an operation to the margins of a table
using a vector of statistics. For example, if we want to centre and
standardise the columns of a matrix X, we write
> set.seed(1234)
> X <- matrix(sample(12),ncol=3)
94 R for Statistics
> X
[,1] [,2] [,3]
[1,] 2 10 3
[2,] 7 5 8
[3,] 11 1 4
[4,] 6 12 9
> mean.X <- apply(X,2,mean)
> mean.X
[1] 6.5 7.0 6.0
> sd.X <- apply(X,2,sd)
> sd.X
[1] 3.696846 4.966555 2.943920
> Xc <- sweep(X,2,mean.X,FUN="-")
> Xc
[,1] [,2] [,3]
[1,] -4.5 3 -3
[2,] 0.5 -2 2
[3,] 4.5 -6 -2
[4,] -0.5 5 3
> Xcr <- sweep(Xc,2,sd.X,FUN="/")
> Xcr
[,1] [,2] [,3]
[1,] -1.2172540 0.6040404 -1.0190493
[2,] 0.1352504 -0.4026936 0.6793662
[3,] 1.2172540 -1.2080809 -0.6793662
[4,] -0.1352504 1.0067341 1.0190493
It must be noted that to centre and standardise table X, we can more
simply use the scale function. The by function, however, is used to apply one
function to a whole data-frame for the different levels of a factor or list of
factors. This function is thus similar to the tapply function, as it is used with
a data-frame rather than a vector. Let us generate some data:
> set.seed(1234)
> T <- rnorm(100)
> Z <- rnorm(100)+3*T+5
> vec1 <- c(rep("A1",25),rep("A2",25),rep("A3",50))
> don <- data.frame(Z,T)
We can thus obtain a summary of each variable for each level of the factor
vec1:
> by(don,list(FactorA=vec1),summary)
FactorA: A1
Z T
Min. :-2.540 Min. :-2.3457
1st Qu.: 2.380 1st Qu.:-0.7763
Figure 4.1
Diagram of an R function.
Let us begin with a simple example: the sum of the first n integers. The
number n is an integer which is the input argument; the result is simply the
sum, as requested:
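The definition of mysum is not shown in this extract; a sketch consistent
with the output below, assuming non-integer input is rounded down with floor:
> mysum <- function(n) {
+   if (n!=floor(n)) {                     # non-integer input: warn and round down
+     warning("rounds ",n," as ",floor(n))
+     n <- floor(n)
+   }
+   sum(1:n)                               # sum of the first n integers
+ }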
> mysum(4.325)
[1] 10
Warning message:
In mysum(4.325) : rounds 4.325 as 4
We now suggest a function with two input arguments: factor1 and factor2,
two qualitative variables. This function yields the contingency table as well
as the character vector for the levels of factor1 and factor2, for which the
combined sample size is zero. Here, more than one result will be given. As
one is a table and the other a character vector (or character matrix), these
two objects can neither be brought together in a matrix (as they are not of
the same type), nor can they be combined in a data-frame (as they are not
the same length). The only result will therefore be a list grouping together
these two results.
The function will therefore calculate the contingency table (table) and
then select the void cells. We then need to know the indices corresponding to
the void cells of the contingency table (which function, option arr.ind=TRUE)
and identify the names of the corresponding categories:
> myfun <- function(factor1,factor2) {
+ res1 <- table(factor1,factor2)
+ selection <- which(res1==0,arr.ind = TRUE)
+ res2 <- matrix("",nrow=nrow(selection),ncol=2)
+ res2[,1] <- levels(factor1)[selection[,1]]
+ res2[,2] <- levels(factor2)[selection[,2]]
+ return(list(tab=res1,level=res2))
+ }
If we call upon the function with the factors wool and tension:
> tension <- factor(c(rep("Low",5),rep("High",5)))
> wool <- factor(c(rep("Mer",3),rep("Ang",3),rep("Tex",4)))
This yields
> myfun(tension,wool)
$tab
factor2
factor1 Ang Mer Tex
High 1 0 4
Low 2 3 0
$level
[,1] [,2]
[1,] "High" "Mer"
[2,] "Low" "Tex"
To find out more about using a function, see Section 1.5.
4.4 Exercises
Exercise 4.1 (Factorial)
1. Programme the factorial function n! = n × (n − 1) × ⋯ × 2 × 1 using the
prod function.
2. Programme the factorial function using a for loop.
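A possible solution sketch for question 1:
> myfactorial <- function(n) prod(1:n)
> myfactorial(5)
[1] 120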
2. Write a function so that all the ordered qualitative variables in a table can
be ventilated (see previous question).
Part II
Statistical Methods
Introduction to the Statistical
Methods Part
In this part we present most of the classical statistical methods in the form of
worked examples. Each method is illustrated using a concrete example.
The structure of the worked examples is always the same. We start with
a quick presentation of the method with theoretical reminders (section objec-
tive). Then follows a description of the dataset and the associated problem
(section example). We present the steps to solve this problem (section steps).
The example is processed step by step, and a brief commentary of the results
is given (section processing the example). In this part, the different instruc-
tions are given using command lines. However, wherever possible, we use the
R Commander interface available in the Rcmdr package (see Appendix A.3,
p. 267) which allows one to perform the various methods using easy-to-use
drop-down menus. In conclusion, a section called Taking Things Further
gives reference books or less-standard extensions to the method.
Finally, it must be noted that, as each worked example is designed to be
consulted independently of the others, there will certainly be some overlaps.
From a purely practical standpoint, each dataset analysed in each worked
example is available on the website
http://www.agrocampus-ouest.fr/math/RforStat
There are two ways of importing data, one of which must be done at the
beginning of the study: either by saving the file in the working directory
and importing it with a command line, or by reading it directly from the
website.
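A sketch of the import command (the file name and separator here are
hypothetical and change with each worked example):
> dataset <- read.table("dataset.csv",header=TRUE,sep=";")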
5.1 Installing R
Simply go to the CRAN website at http://cran.r-project.org/ and install the
version of R best suited to your computer's operating system. For
comprehensive documentation on the installation process, visit
http://cran.r-project.org/doc/manuals.
The [1] indicates that the first (and only) coordinate of the resulting vector
is 5.2. When R is waiting for the next instruction, the command prompt
becomes +. For example, if you type
> 1 -
R will wait for the second part of the subtraction and the command prompt
will be +. To subtract 2 from 1, type
+ 2
[1] -1
Generally, either a ) or a " has been forgotten. Simply supply the missing
bracket(s) or quotation mark(s) to end the command.
5.5 Selection
To construct the vector y with coordinates 2 and 4 of vector x, type
> y <- x[c(2,4)]
> y
[1] 4 6
Select columns 1 and 3 of a matrix m and rows 2 and 5 respectively using
> m <- matrix(1:15,ncol=3) #creates the matrix
> m[,c(1,3)] #selects columns 1 and 3
> m[c(2,5),] #selects rows 2 and 5
In combination, rows 4 and 2 of columns 2 and 3 are obtained as follows:
> m[c(4,2),c(2,3)]
[,1] [,2]
[1,] 9 14
[2,] 7 12
5.6 Other
All commands can be stopped using the shortcut Ctrl + c (or the STOP
icon). It is possible to reuse preceding commands using the up or down arrows.
Additionally, to obtain help for the mean function, type
> help(mean)
5.9 Graphs
To construct a histogram and then a boxplot for all the individuals of the
second variable, we write
> hist(dat[,2])
> boxplot(dat[,2])
If the second variable is called y, we can use the name of the variable rather
than selecting the column by writing dat[,"y"] instead of dat[,2]. If the
first variable is called x, y is plotted against x by writing
> plot(y~x,data=dat)
If the variable x is quantitative, the graph is a scatterplot. If x is qualitative,
the graph is a boxplot for each category of x.
6.1.2 Example
We examine the weight of adult female octopuses. We have access to a sample
from 240 female octopuses fished off the coast of Mauritania. For the overall
population, we would like to obtain an estimate of the mean weight and a
confidence interval for this mean at the 95% confidence level.
6.1.3 Steps
1. Read the data.
2. Calculate the descriptive statistics (mean, standard deviation).
3. Construct a histogram.
4. (Optional) Test the normality of the data.
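The commands for reading the data and computing the summary are not shown in
this extract; the output below corresponds to a call of the form:
> summary(octopus)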
Weight
Min. : 40.0
1st Qu.: 300.0
Median : 545.0
Mean : 639.6
3rd Qu.: 800.0
Max. :2400.0
Check that the sample size is equal to n = 240. Then check that the vari-
able Weight is indeed quantitative and obtain the minimum, the mean, the
quartiles and the maximum for this variable.
> mean(octopus)
Weight
639.625
> sd(octopus)
Weight
445.8965
3. Constructing a histogram:
Prior to any statistical analysis, it can be useful to represent the data graph-
ically. As the data is quantitative, a histogram is used:
> hist(octopus[,"Weight"],main="",nclass=15,freq=FALSE,
ylab="Density",xlab="Weight")
The nclass argument is not compulsory but can be used to choose the number of
intervals. The histogram (Figure 6.1) gives an idea of the weight distribution.
Figure 6.1
Histogram of octopus weight.
> t.test(octopus$Weight,conf.level=0.95)$conf.int
[1] 582.9252 696.3248
attr(,"conf.level")
[1] 0.95
guarantees that, if it is repeated infinitely with new samples of the same size
n = 240, then 95% of the confidence intervals will contain the true unknown
value of μ. In practice, it is faster to say that the true mean is between
583 g and 696 g at a confidence level of 95%.
It is also possible to calculate the confidence interval by hand by computing
$\bar{x} \pm t_{239}(0.975)\, s/\sqrt{n}$, where $t_{239}(0.975)$ is the 97.5%
quantile of the Student's t-distribution with 239 degrees of freedom:
> mean(octopus$Weight)-qt(0.975, df=239)*
sd(octopus$Weight)/sqrt(240)
[1] 582.9252
> mean(octopus$Weight)+qt(0.975, df=239)*
sd(octopus$Weight)/sqrt(240)
[1] 696.3248
where nij is the number of individuals (observed frequency) who take the
category i of the first variable and the category j of the second, Tij
corresponds to the expected frequency under the null hypothesis, and I and J
are the numbers of categories of the two variables. Thus, Tij = n pi· p·j,
with n the total sample size, pi· = Σj nij/n and p·j = Σi nij/n. Under H0,
the χ² statistic follows a chi-square distribution with (I − 1)(J − 1)
degrees of freedom.
If there is no independence, it is interesting to calculate the contribution
of pairs of categories to the chi-square statistic in order to observe
associations between categories.
6.2.2 Example
This is a historic example by Fisher. He studied the hair colour of boys and
girls in a Scottish county:
6.2.3 Steps
1. Input the data.
Figure 6.2
Distributions of hair colour by sex.
This yields the distributions of hair colour according to sex which are repre-
sented in Figure 6.2. There is very little difference between these row profiles.
The link between the two variables is not immediately obvious; it can there-
fore be useful to conduct a test.
The column profiles are calculated in the same way (nij / Σi nij) while
specifying that this time we are working on columns (margin = 2):
With regards to the column profiles, the greatest differences are for those with
jet black hair: 72% of those with jet black hair are boys.
4. Constructing the chi-square test:
In order to conduct the test of independence between the variables sex and
hair colour, we calculate the χ²obs value and determine the p-value associated
with the test:
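A sketch of the call, consistent with the object results used below (the
contingency table is assumed to be stored in colour):
> results <- chisq.test(colour)
> results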
data: colour
X-squared = 10.4674, df = 4, p-value = 0.03325
The p-value indicates that such a large χ²obs value would have a 3.32%
probability of being observed in a sample of this size if hair colour were
independent of sex. Thus, at the 5% threshold, we reject the independence
hypothesis and conclude that, according to this dataset, hair colour depends
on sex.
5. Calculating the contributions to the χ² statistic:
This relationship between the two variables can be studied in greater detail
by calculating the contributions (nij − Tij)²/Tij to the χ²obs statistic.
The square roots of these contributions are in the object residuals. By
dividing each term by the total (i.e. the χ²obs value contained in the stat
object), we obtain the contributions expressed as percentages:
> round(results$residuals, 3)
Fair Red Medium Dark Jet Black
Boys -0.903 0.202 0.825 -0.549 1.723
Girls 0.979 -0.219 -0.896 0.596 -1.870
It can be said that the number of boys with jet black hair is greater than
expected (implied: greater than if the independence hypothesis were true)
and there are fewer girls than expected.
In the same way, we can obtain the row or column percentages, the
contributions and the test statistic via the menu
Statistics → Contingency tables → Enter and analyze two-way table...
To use existing data in the form of a table of individuals × variables
(see Section 6.2.6, Taking Things Further), we use the menu
Statistics → Contingency tables → Two-way table... to select two variables
and obtain a contingency table. This procedure yields the result of the test
and the contributions to the χ² statistic.
6.3.2 Example
We want to compare the weights of male and female adult octopuses. We
have a dataset with fifteen male and thirteen female octopuses fished off the
coast of Mauritania. Table 6.1 features an extract from the dataset.
TABLE 6.1
Extract from the Octopus
Dataset (weight in grams)
Weight Sex
300 Female
700 Female
850 Female
... ...
5400 Male
We would like to test the equality of the unknown theoretical mean weights of
female (μ1) and male (μ2) octopuses, with a type-one error rate set at 5%.
6.3.3 Steps
1. Read the data.
2. Compare the two sub-populations graphically.
3. Calculate the descriptive statistics (mean, standard deviation, and quar-
tiles) for each sub-population.
4. (Optional) Test the normality of the data in each sub-population.
5. Test the equality of variances.
6. Test the equality of means.
> summary(octopus)
Weight Sex
Min. : 300 Female:13
1st Qu.:1480 Male :15
Median :1800
Mean :2099
3rd Qu.:2750
Max. :5400
Using this procedure, we obtain descriptive statistics and we can check that
the variable Weight is quantitative and that the variable Sex is qualita-
tive. Be aware that if, for example, the categories of the variable Sex were
coded 1 and 2, it would be necessary to transform this variable into a fac-
tor prior to any analysis as R would consider it to be quantitative (see Sec-
tion 1.4.2, p. 7). In order to do so, we would write octopus[,"Sex"] <-
factor(octopus[,"Sex"]).
2. Comparing the two sub-populations graphically:
Before any analysis, it may be interesting to visualise the data. Boxplots are
used to compare the distribution of weights in each category of the variable
Sex:
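A sketch of the corresponding command (with a factor on the x axis, plot
draws boxplots):
> plot(Weight~Sex,data=octopus)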
Figure 6.3 shows that the males are generally heavier than the females as both
weight medians and quartiles are greater for males.
Figure 6.3
Boxplots of octopus weights according to sex.
$Male
0% 25% 50% 75% 100%
1150 1800 2700 3300 5400
The test comparing the two means is valid if the data are normally distributed
or if the sample size is large enough (in practice, greater than 30, thanks to
the central limit theorem). Here there are fewer than thirty individuals per
group; data normality must therefore be tested for each sub-population. To do
so, the Shapiro-Wilk test is used. To test the normality of the males alone,
select the weights of the males by requiring that the qualitative variable Sex
carries the category Male. Rows are selected by building the logical vector
select.males, whose components are TRUE for males and FALSE otherwise. We then
conduct the Shapiro-Wilk test on the individuals in this selection. Before
conducting it, we draw the normal QQ-plot, which provides an informal
graphical assessment of normality (see Figure 6.4):
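A sketch of the commands described above (the book's exact code is not
reproduced in this extract):
> select.males <- octopus[,"Sex"]=="Male"   # TRUE for males, FALSE otherwise
> qqnorm(octopus[select.males,"Weight"])    # normal QQ-plot (Figure 6.4)
> shapiro.test(octopus[select.males,"Weight"])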
Figure 6.4
Normal QQ-plot for male octopus weights.
As the p-value associated with the test is greater than 5%, the normality of
the males' weights is accepted. We do not provide the output for the females,
but the assumption of normality is also accepted.
When the assumption of normality is rejected, the test of equality of means can
be conducted using non-parametric tests such as that of Wilcoxon (wilcox.test)
or Kruskal-Wallis (kruskal.test).
5. Testing the equality of variances:
In order to compare the mean of the two male and female sub-populations,
there are two possible types of tests: one in which the unknown variances of the
two sub-populations are different and the other in which they are equal. We
must therefore test the equality of variances H0: σ1² = σ2² against
H1: σ1² ≠ σ2² using an F-test:
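A sketch of the corresponding call:
> var.test(Weight~Sex,data=octopus)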
The p-value (0.001) associated with the test of equality of means with unequal
variances indicates that the means are significantly different. The mean
weight for males within the population (estimated at 2700 g) is thus
significantly different from that of females (estimated at 1405 g).
6.4.2 Example
We are interested in the intention to vote for a candidate A in the second
round of presidential elections. In a poll of 1040 voters, candidate A wins
52.4% of the vote. Can we conclude, at the 95% confidence level, that this candidate
will win the election? Implicitly, we assume that the sample is representative
of the population as a whole, drawn with replacement, and that the voters
will vote as predicted by the poll. However, simple random polls use samples
drawn without replacement. Nevertheless, the procedure explained below is a
reasonable approximation when the poll rate is low, that is, when the size of
the population is much greater than that of the sample.
6.4.3 Step
Test the equality of the proportion to 50% with a type-one error rate of 5%.
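A sketch of the corresponding test, assuming 545 of the 1040 polled voters
(52.4%) favour candidate A:
> prop.test(545,1040,p=0.5)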
6.5.2 Example
We shall again examine the example of hair colour for boys and girls in a
Scottish county (see Worked Example 6.2, p. 113):
We would like to compare the proportions of boys for different groups, which
here is different hair colours. We will test whether or not these proportions
are equal in all the groups, with a type-one error set at 5%.
6.5.3 Step
Test the equality of proportions for boys (for different hair colours).
The test which is performed is a chi-square test of independence and the results are
exactly the same as the ones obtained in the worked example on chi-square
(p. 113).
6.6.2 Example
We now examine the example of an experiment in milk production. Re-
searchers at INRA (the French National Institute for Agricultural Research)
selected two genetically different types of dairy cow according to the volume
of milk produced. The aim is to detect a potential difference in the protein
levels in the milks produced by these two sub-populations.
During a previous study, the standard deviation of protein levels in the
milk from a herd of Normandy cows was found to be 1.7 g/kg of milk. As an
approximation we will therefore use the standard deviation σ = 1.7 and the
classical α = 5% threshold. The aim is to have an 80% power of detecting a
difference in the means of the protein levels of Δ = 1 g/kg of milk between
the two populations. To meet this objective, we will determine the number of
dairy cows required using the function power.t.test.
6.6.3 Steps
1. Calculate the number of individuals required to obtain a power of 80%.
2. Calculate the power of the test with twenty individuals per group.
3. Calculate the difference of means detectable at 80% with twenty individuals
per group.
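A sketch of the corresponding call (not reproduced in this extract):
> power.t.test(delta=1,sd=1.7,sig.level=0.05,power=0.8)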
n = 46.34674
delta = 1
sd = 1.7
sig.level = 0.05
power = 0.8
alternative = two.sided
Here we can see that a minimum of 47 individuals are required per population
in order to have more than an 80% chance of detecting a difference in means
in the protein levels of 1 g/kg of milk between the two populations.
2. Calculating the power of the test with twenty individuals per group:
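A sketch of the corresponding call:
> power.t.test(n=20,delta=1,sd=1.7,sig.level=0.05)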
If we decide to use only twenty cows per population in the experiment, there
will be a 44% chance of detecting a difference in means in the protein levels
of 1 g/kg of milk between the two populations.
3. Calculating the difference detectable at 80% with twenty individuals per
group:
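A sketch of the corresponding call:
> power.t.test(n=20,power=0.8,sd=1.7,sig.level=0.05)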
If we decide to use only twenty cows per population in the experiment, there
will then be an 80% chance of detecting a difference in means in the protein
levels of 1.55 g/kg of milk between the two populations.
groups = 3
n = 7.067653
between.var = 2.3333
within.var = 2.89
sig.level = 0.05
power = 0.8
7.1.2 Example
Air pollution is currently one of the most serious public health worries world-
wide. Many epidemiological studies have proved the influence that some chem-
ical compounds such as sulphur dioxide (SO2 ), nitrogen dioxide (NO2 ), ozone
(O3 ) or other air-borne dust particles can have on our health.
Associations set up to monitor air quality are active all over France to
measure the concentration of these pollutants. They also keep a record of
meteorological conditions such as temperature, cloud cover, wind, etc.
Here we analyse the relationship between the maximum daily ozone level
(in µg/m³) and temperature. We have at our disposal 112 observations col-
lected during the summer of 2001 in Rennes (France).
7.1.3 Steps
1. Read the data.
2. Represent the scatterplot (xi , yi ).
3. Estimate the parameters.
4. Draw the regression line.
> plot(maxO3~T12,data=ozone,pch=15,cex=.5)
Each point on the graph (Figure 7.1) represents, for a given day, a temperature
measurement taken at midday, and the ozone peak for the day. From this
graph, the relationship between temperature and ozone concentration seems
to be linear.
Figure 7.1
Representation of pairs (xi, yi) with a scatterplot.
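The estimation commands are not reproduced in this extract; the output below
is consistent with a call of the form (the object simple.reg is used further
on):
> simple.reg <- lm(maxO3~T12,data=ozone)
> summary(simple.reg)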
Residuals:
Min 1Q Median 3Q Max
-38.0789 -12.7352 0.2567 11.0029 44.6714
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -27.4196 9.0335 -3.035 0.003 **
T12 5.4687 0.4125 13.258 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Amongst other things, we obtain a Coefficients table which, for each
parameter (each row), gives four columns: its estimate (Estimate column), its
estimated standard deviation (Std. Error), and the observed value of the test
statistic (t value) for the test H0: βj = 0 against H1: βj ≠ 0. Finally, the
p-value (Pr(>|t|)) yields the probability of obtaining such a large statistic
under H0.
Coefficients β0 and β1 are estimated by −27.4 and 5.5. The test of significance
for the coefficients here gives p-values of 0.003 and around 2 × 10⁻¹⁶. Thus,
the null hypothesis for each of these tests is rejected in favour of the
alternative hypothesis. The p-value of less than 5% for the constant
(intercept) indicates that the constant must appear in the model. The p-value
of less than 5% for the slope indicates a significant link between maxO3 and
T12.
The summary of the estimation step features the estimate of the residual
standard deviation σ, which here is 17.57, as well as the corresponding number
of degrees of freedom n − 2 = 110.
The values of R² and adjusted R² are also given. The value of R² is rather
high (R² = 0.6151), thus supporting the suspected linear relationship between
the two variables. In other words, 61% of the variability of maxO3 is
explained by T12.
The final row, which is particularly useful for multiple regression, indicates
the result of the test of the comparison between the model used and the model
using only the constant as explanatory variable.
We can consult the list of different results (components of the list, see Sec-
tion 1.4.7, p. 20) of the object simple.reg using
> names(simple.reg)
[1] "coefficients" "residuals" "effects" "rank"
[5] "fitted.values" "assign" "qr" "df.residual"
[9] "xlevels" "call" "terms" "model"
> simple.reg$coef
(Intercept) T12
-27.419636 5.468685
> coef(simple.reg)
> plot(maxO3~T12,data=ozone,pch=15,cex=.5)
> abline(simple.reg)
Figure 7.2
Scatterplot and regression line.
> res.simple<-rstudent(simple.reg)
> plot(res.simple,pch=15,cex=.5,ylab="Residuals")
> abline(h=c(-2,0,2),lty=c(2,1,2))
Figure 7.3
Representation of residuals.
In theory, 95% of the studentised residuals can be found in the interval
[−2, 2]. This is the case here because only four residuals are found outside
this interval.
It must be noted that the xnew argument of the predict function must be a
data-frame in which the names of the explanatory variables are the same as in
the original dataset (here T12). The predicted value is 76.5 and the
prediction interval at 95% is [41.5, 111.5].
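A sketch of the prediction step, with T12 = 19 (a value consistent with the
predicted value 76.5 given the estimated coefficients):
> xnew <- data.frame(T12=19)
> predict(simple.reg,newdata=xnew,interval="prediction")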
To represent, on one single graph, the confidence interval of a fitted value
along with that of a prediction, we calculate these intervals for all the
points which were used to draw the regression line and make the two appear on
the same graph (Figure 7.4).
Figure 7.4
Confidence and prediction intervals.
$$Y=\begin{pmatrix} y_1\\ \vdots\\ y_n \end{pmatrix},\quad X=\begin{pmatrix} 1 & x_{11} & \cdots & x_{1p}\\ \vdots & \vdots & & \vdots\\ 1 & x_{n1} & \cdots & x_{np} \end{pmatrix},\quad \beta=\begin{pmatrix} \beta_0\\ \vdots\\ \beta_p \end{pmatrix},\quad \varepsilon=\begin{pmatrix} \varepsilon_1\\ \vdots\\ \varepsilon_n \end{pmatrix}$$
From these observations, we estimate the unknown parameters of the model
by minimising the least-square criterion:
$$\hat{\beta}=\operatorname*{argmin}_{\beta_0,\ldots,\beta_p}\sum_{i=1}^{n}\Big(y_i-\beta_0-\sum_{j=1}^{p}\beta_j x_{ij}\Big)^2=\operatorname*{argmin}_{\beta\in\mathbb{R}^{p+1}}(Y-X\beta)'(Y-X\beta)$$
If matrix X is full rank, that is to say, if the explanatory variables are not
collinear, the least squares estimator of β is
$$\hat{\beta}=(X'X)^{-1}X'Y$$
Once the parameters have been estimated, we can calculate the fitted
values:
$$\hat{y}_i=\hat{\beta}_0+\hat{\beta}_1 x_{i1}+\cdots+\hat{\beta}_p x_{ip}$$
or predict new values. The difference between the observed value and the
fitted value is, by definition, the residual:
$$\hat{\varepsilon}_i=y_i-\hat{y}_i$$
Analysing the residuals is essential as it is used to check the individual fitting
of the model (outlier) and the global fitting, for example by checking that
there is no structure.
7.2.2 Example
We reexamine the ozone dataset introduced in Worked Example 7.1 (p. 133).
Here we analyse the relationship between the maximum daily ozone level (in
µg/m³) and temperature at different times of day, cloud cover at different
times of day, the wind projection on the East-West axis at different times
of day and the maximum ozone concentration for the day before the day in
question. The data was collected during the summer of 2001 and the sample
size is 112.
7.2.3 Steps
1. Read the data.
2. Represent the variables.
3. Estimate the parameters.
4. Choose the variables.
5. Conduct a residual analysis.
6. Predict a new value.
Names are attributed using the names or colnames function. It can be inter-
esting to summarise these variables (summary) or to represent them.
2. Representing the variables:
In order to make sure that there are no errors during input, it is important to
conduct a univariate analysis of each variable (histogram, for example). When
there are not many variables, they can be represented two by two on the same
graph using the pairs function (here pairs(ozone.m)). It is also possible to
explore the data using a principal component analysis with the illustrative
explanatory variable (see Worked Example 10.1, p. 209).
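The estimation command is not reproduced in this extract; the output below
corresponds to a call of the form (the object name is our own):
> multiple.reg <- lm(maxO3~.,data=ozone.m)
> summary(multiple.reg)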
Call:
lm(formula = maxO3 ~ ., data = ozone.m)
Residuals:
Min 1Q Median 3Q Max
-53.566 -8.727 -0.403 7.599 39.458
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.24442 13.47190 0.909 0.3656
T9 -0.01901 1.12515 -0.017 0.9866
T12 2.22115 1.43294 1.550 0.1243
T15 0.55853 1.14464 0.488 0.6266
Ne9 -2.18909 0.93824 -2.333 0.0216 *
Ne12 -0.42102 1.36766 -0.308 0.7588
Ne15 0.18373 1.00279 0.183 0.8550
Wx9 0.94791 0.91228 1.039 0.3013
Wx12 0.03120 1.05523 0.030 0.9765
Wx15 0.41859 0.91568 0.457 0.6486
maxO3y 0.35198 0.06289 5.597 1.88e-07 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> library(leaps)
> choice <- regsubsets(maxO3~.,data=ozone.m,nbest=1,nvmax=11)
> plot(choice,scale="bic")
Figure 7.5
Choosing variables with BIC.
We can also determine the variables for the best model according to the BIC
criterion using the following line of code:
> summary(choice)$which[which.min(summary(choice)$bic),]
(Intercept) T9 T12 T15 Ne9 Ne12
TRUE FALSE TRUE FALSE TRUE FALSE
Ne15 Wx9 Wx12 Wx15 maxO3y
FALSE TRUE FALSE FALSE TRUE
The criterion is optimal for the top row of the graph. For the Bayesian in-
formation criterion (argument scale="bic"), we retain the model with four
variables: T12, Ne9, Wx9 and maxO3y. We thus fit the model using the selected
variables:
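A sketch of the corresponding call (the object name is our own):
> final.reg <- lm(maxO3~T12+Ne9+Wx9+maxO3y,data=ozone.m)
> summary(final.reg)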
Call:
lm(formula = maxO3 ~ T12 + Ne9 + Wx9 + maxO3y, data = ozone.m)
Residuals:
Min 1Q Median 3Q Max
-52.396 -8.377 -1.086 7.951 40.933
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.63131 11.00088 1.148 0.253443
T12 2.76409 0.47450 5.825 6.07e-08 ***
Ne9 -2.51540 0.67585 -3.722 0.000317 ***
Wx9 1.29286 0.60218 2.147 0.034055 *
maxO3y 0.35483 0.05789 6.130 1.50e-08 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
In theory, 95% of the studentised residuals can be found in the interval
[−2, 2]. This is the case here where only three residuals are found outside
this interval (see Figure 7.6).
Figure 7.6
Representation of residuals.
The predicted value is 72.5 and the confidence interval at 95% for the predic-
tion is [43.8, 101.2].
7.3.2 Example
We would like to predict the organic carbon content in the ground from spec-
troscopic measurements. As measuring the organic carbon content of the
ground is more expensive than collecting spectroscopic measurements, we
would like to be able to predict organic carbon content from spectroscopic
measurements.
In order to construct a prediction model, we have access to a database¹
with 177 soils (individuals) and both their organic carbon content (OC,
response variable) and the spectra acquired in the visible and near-infrared
range (400 nm–2500 nm), which gives 2101 explanatory variables. We will
then predict the organic carbon content of three new soils.
¹ Thanks to Y. Fouad and C. Walter; Achi H., Fouad Y., Walter C., Viscarra Rossel R.A.,
Lili Chabaane Z. and Sanaa M. (2009). Regional predictions of soil organic carbon content
from spectral reflectance measurements. Biosystems Engineering, 104, 442–446.
7.3.3 Steps
1. Read the data.
2. Represent the data.
3. Conduct a PLS regression after choosing the number of PLS components.
4. Conduct a residual analysis.
5. Predict a new value.
The rug function is used to add vertical lines on the abscissa for the
observations of variable Y. We can see in Figure 7.7 that one observation is
extreme: it is greater than 8, whereas all the others are less than 5. The
following lines can be used to show that individual 79 has a very high value
(8.9); the other values do not exceed 4.89.
> which(Spectrum[,1]>8)
[1] 79
> Spectrum[79,1]
[1] 8.9
> max(Spectrum[-79,1])
[1] 4.89
Figure 7.7
Representation of organic carbon content.
This individual is unusual and can be removed when constructing the model:
The explanatory variables are spectra and can be represented by curves. Each
spectrum can be represented by a different line (continuous, dashed, etc.)
and/or colour, depending on the value of the OC variable. To construct this
graph, we divide the OC variable into seven classes of almost equal size using
the cut function; the points at which they are cut are determined by the
quantiles (quantile). The factor obtained in this way is converted to a colour
and line type code using the as.numeric function and the arguments col and
lty. On the abscissa axis, we use the wavelengths:
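The exact commands are not reproduced in this extract; a plausible sketch,
assuming the OC variable is the first column of Spectrum and the 2101
reflectances follow, is:
> breakOC <- quantile(Spectrum[,"OC"],probs=seq(0,1,length=8))  # 7 classes
> classOC <- cut(Spectrum[,"OC"],breaks=breakOC,include.lowest=TRUE)
> matplot(400:2500,t(Spectrum[,-1]),type="l",lty=as.numeric(classOC),
    col=as.numeric(classOC),xlab="Wavelength",ylab="Reflectance")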
We obtain the graph in Figure 7.8 in which each individual is a different curve.
It would seem possible to distinguish groups of curves drawn with the same
code, which means that similar curves admit similar values for the OC variable.
Figure 7.8
Representation of individuals (after a standardisation by row).
3. Conducting a PLS regression after choosing the number of PLS components:
To conduct this regression, we use the pls package, which must be installed
and loaded (see Section 1.6). By default, this package centres all the variables
but does not reduce them. We reduce the explanatory variables using the
argument scale. We must also fix a maximum number of PLS components
(the higher this number, the longer it will take to run the program). To be
safe, we fix this number at 100, which is already extremely high.
> library(pls)
> pls.model <- plsr(OC~., ncomp=100, data=Spectrum,
scale=TRUE, validation="CV")
With the pls package, the number of PLS components is, by default, deter-
mined by cross validation. We calculate the fitting error and the prediction
error obtained with 1, 2, . . . , 100 PLS components and we then draw the two
types of error using the plot.mvr function (Figure 7.9):
The two error curves are typical: a continual decline for the fitting error de-
pending on the number of components and a decrease followed by an increase
for the prediction error. The optimal number of components for the prediction
corresponds to the value for which the prediction error is minimal:
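A sketch of these steps (the object names are our own):
> msepcv <- MSEP(pls.model,estimate=c("train","CV"))  # fitting and CV errors
> plot(msepcv,legendpos="topright")                   # the two curves of Figure 7.9
> ncp <- which.min(msepcv$val["CV",,])-1   # minus 1: the first entry is the 0-component model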
Figure 7.9
Evolution of both errors depending on the number of PLS components.
No individuals are so extreme that they need to be removed from the analysis.
It must be noted that if individual 79 had not been deleted previously, it would
have had a very large residual and would have been deleted at this point.
The variability of residuals is greater for the last individuals of the dataset
(individuals coded by rmqs) and smaller for the first (individuals coded by
Bt). The samples from the units analysed do not seem very homogeneous.
Figure 7.10
Representation of residuals.
> ReflecN[,1]
[1] 1.08 1.60 1.85
Figure 7.11
Coefficients of the PLS regression for one to four-component models.
The interpretation of this type of graph goes beyond the objectives set out
in this work and we will simply underline the fact that there are coefficients
for each reflectance of a given wavelength. They can therefore be interpreted
in the same way as those of a multiple regression, the only difference being
that here they are all on the same scale, as the initial variables (reflectances)
are centred and reduced.
It is possible to represent the cloud of individuals on the PLS components
(called PLS scores) in order to visualise the similarities between individuals (Figure 7.12):
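A plausible command for this graph (the exact call is not reproduced in the extract):
> plot(pls.model, plottype="scores", comps=1:2)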
[Figure 7.12: Analysis of the model by component: components 1 and 2; axes: Comp 1 (52%) and Comp 2 (32%).]
Since the variables (here the wavelengths) are sorted, the evolution of their
relationship with the PLS components can be represented using the argument
plottype="loadings" (Figure 7.14):
> plot(pls.model,plottype="loadings",labels=400:2500,comps=1:ncp,
legendpos="topright",xlab="Wavelength",ylab="Loadings")
> abline(h=0,lty=2)
[Figure 7.13: Representation of the correlations with the loadings; axes: Comp 1 (52%) and Comp 2 (32%).]
[Figure 7.14: Representation of loadings; Comp 1 (51.8%), Comp 2 (32.1%), Comp 3 (7.4%), Comp 4 (2.5%); x-axis: Wavelength, y-axis: Loadings.]
8 Analysis of Variance and Covariance
The one-way analysis of variance model is written
$$y_{ij} = \mu_i + \varepsilon_{ij}$$
where $n_i$ is the sample size of category $i$, $y_{ij}$ the observed value $j$ for sub-population $i$, $\mu_i$ the mean of sub-population $i$, and $\varepsilon_{ij}$ the residual of the model. An individual is therefore here defined by the pair $(i, j)$. The analysis of variance thus consists of testing the equality of the $\mu_i$'s.
The model can also be written in the following classical format:
$$y_{ij} = \mu + \alpha_i + \varepsilon_{ij}$$
where $\mu$ is the overall mean and $\alpha_i$ is the specific effect of category $i$. In this latest formulation, there are $I + 1$ parameters to be estimated, of which only $I$ are identifiable. A linear constraint must therefore be imposed. There are a number of different constraints and the most common are:
- One of the $\alpha_i$'s is set at zero, which means that level $i$ is considered the reference level.
- The sum of all $\alpha_i$'s is null, so the mean is taken as reference.
The numerical estimations of the parameters, obtained from the least squares method, depend of course on the chosen constraint. Nevertheless, the test of the overall effect of the factor does not: we test
$$H_0 : \forall i,\ \alpha_i = 0 \quad \text{compared with} \quad H_1 : \exists i,\ \alpha_i \neq 0.$$
These hypotheses require us to test the sub-model against the complete model:
In this table, the individuals are indexed conventionally in the format $y_{ij}$, where $i$ is the row index and $j$ the column index. This presentation of the data indeed matches the specification of the model. Nevertheless, prior to the analysis, the data must be transformed by constructing a classic individuals × variables table, with one variable for the factor and another for the quantitative variable. For example, this yields Table 8.2.
8.1.2 Example
We reexamine in more detail the ozone dataset of Worked Example 7.1 (p. 133).
Here we analyse the relationship between the maximum daily ozone concen-
tration (in µg/m³) and wind direction classed by sector (North, South, East,
West). The variable wind has I = 4 levels. We have at our disposal 112 pieces
of data collected during the summer of 2001 in Rennes (France).
8.1.3 Steps
1. Read the data.
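The import command is omitted in this extract; a plausible sketch (file name assumed; wind is read as a factor):
> ozone <- read.table("ozone.txt", header=TRUE)
> summary(ozone$wind)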
TABLE 8.2
Presentation of the Data Table
Wind Ozone
North 87
North 82
North 114
South 90
East 92
West 94
... ...
During the summer of 2001, the most common wind direction was West, and
there were very few days with an East wind.
2. Representing the data:
Prior to an analysis of variance, boxplots are usually constructed for each level
of the qualitative variable. We therefore present the distribution of maxO3
according to wind direction:
> plot(maxO3~wind,data=ozone,pch=15,cex=.5)
Looking at the graph in Figure 8.1, it would seem that there is a wind effect.
This impression can be validated by testing the significance of the factor.
[Figure 8.1: Boxplots of maxO3 according to the levels of the variable wind; y-axis: maxO3.]
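The command producing the following analysis of variance table is not shown; a plausible reconstruction (the object name reg.aov1 is reused later in the extract):
> reg.aov1 <- lm(maxO3~wind, data=ozone)
> anova(reg.aov1)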
Response: maxO3
Df Sum Sq Mean Sq F value Pr(>F)
wind 3 7586 2528.69 3.3881 0.02074 *
Residuals 108 80606 746.35
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The first column indicates the factor's associated degrees of freedom, the second the sum of the squares, and the third the mean square (sum of the squares
divided by the degrees of freedom). The fourth column features the observed
value of the test statistic. The fifth column (Pr(>F)) contains the p-value,
that is to say, the probability that the test statistic under H0 will exceed the
estimated value. The p-value (0.02) is less than 5%, thus H0 is rejected and
we can accept the significance of wind at the 5% level. There is therefore at
least one wind direction for which the maximum level of ozone is significantly
different from the others.
[Figure 8.2: Representation of the residuals according to the levels of the variable wind; one panel per level (East, North, South, West); y-axis: Residuals.]
In theory, 95% of the studentised residuals can be found in the interval [−2, 2].
Here there are nine residuals outside of the interval, that is, 8%, which is
acceptable. The distribution of the residuals seems to be comparable from
one category to another.
5. Interpreting the coefficients:
Now that an overall wind effect has been identified, we must examine how
direction influences maximum ozone levels. In order to do so, the coefficients
are analysed using Student's t-test.
As mentioned above, there are a number of different ways of writing the one-
way analysis of variance model. By default, R uses the constraint $\alpha_1 = 0$,
which means taking as reference the first label of the variable alphabetically,
which here is East.
> summary(reg.aov1)
Call:
lm(formula = maxO3 ~ wind, data = ozone)
Residuals:
Min 1Q Median 3Q Max
-60.600 -16.807 -7.365 11.478 81.300
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 105.600 8.639 12.223 <2e-16 ***
windNorth -19.471 9.935 -1.960 0.0526 .
windSouth -3.076 10.496 -0.293 0.7700
windWest -20.900 9.464 -2.208 0.0293 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Amongst other things, we obtain a Coefficients matrix which, for each parameter (each line), has four columns: its estimation (Estimate column), its estimated standard deviation (Std. Error), and the observed value of the test statistic (t value) under $H_0 : \alpha_i = 0$ against $H_1 : \alpha_i \neq 0$. Finally, the associated p-value (Pr(>|t|)) yields the probability of exceeding the estimated value.
The estimate of $\mu$, here denoted Intercept, is the mean concentration of maxO3 for the East wind. The other values obtained correspond to the deviation from this mean for each wind cell.
The last three lines of column Pr(>|t|) correspond to the test $H_0 : \alpha_i = 0$.
The following question can thus be answered: Does the wind direction considered differ from the reference cell East? The South wind is not
different, unlike the West wind. The p-value associated with the North wind
is slightly more than 5% and thus we cannot confirm a significant difference
from the East wind.
If we want to select another specific control cell, for example the second label,
which in this case is North, it can be indicated as follows:
> summary(lm(maxO3~C(wind,base=2),data=ozone))
Call:
lm(formula = maxO3 ~ C(wind, base = 2), data = ozone)
Residuals:
Min 1Q Median 3Q Max
-60.600 -16.807 -7.365 11.478 81.300
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 86.129 4.907 17.553 <2e-16 ***
C(wind, base = 2)1 19.471 9.935 1.960 0.0526 .
C(wind, base = 2)3 16.395 7.721 2.123 0.0360 *
C(wind, base = 2)4 -1.429 6.245 -0.229 0.8194
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
To impose the sum constraint $\sum_i \alpha_i = 0$ instead, so that the mean is taken as reference, we write:
> summary(lm(maxO3~C(wind,sum),data=ozone))
Call:
lm(formula = maxO3 ~ C(wind, sum), data = ozone)
Residuals:
Min 1Q Median 3Q Max
-60.600 -16.807 -7.365 11.478 81.300
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 94.738 3.053 31.027 <2e-16 ***
C(wind, sum)1 10.862 6.829 1.590 0.1147
C(wind, sum)2 -8.609 4.622 -1.863 0.0652 .
C(wind, sum)3 7.786 5.205 1.496 0.1376
---
The coefficient of the fourth cell (West) is deduced from the sum constraint:
$$\hat{\alpha}_4 = -\hat{\alpha}_1 - \hat{\alpha}_2 - \hat{\alpha}_3 = -10.038$$
In all these analyses, it can be seen that the values of the estimates change
depending on the constraint. On the other hand, the overall test given in the
last row of the output, which corresponds to the result of the analysis of variance
table, remains the same.
If within the same session, we want to carry out multiple analyses of variance
with the same constraint, it is preferable to use
> summary(lm(maxO3~wind,data=ozone))
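The extract appears to omit the line that fixes the constraint globally; a plausible sketch using the contrasts option (the second element applies to ordered factors):
> options(contrasts=c("contr.sum","contr.poly"))
> summary(lm(maxO3~wind, data=ozone))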
The two-way analysis of variance model with interaction is written
$$y_{ijk} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \varepsilon_{ijk}$$
where $\mu$ is the overall mean, $\alpha_i$ is the effect due to the $i$-th level of the factor A, $\beta_j$ is the effect due to the $j$-th level of the factor B, $\gamma_{ij}$ is the interaction effect when factor A is at level $i$ and factor B is at level $j$, and $\varepsilon_{ijk}$ is the residual term.
We test $H_0 : \forall (i,j),\ \gamma_{ij} = 0$ against $H_1 : \exists (i,j),\ \gamma_{ij} \neq 0$. These hypotheses involve testing the sub-model against the complete model:
This test, which identifies the overall influence of the interaction of the factors,
is simply a test between two nested models.
We can then proceed in the same manner to test the effect of a single
factor. For example, in order to test the effect of factor A, we test the sub-
model without factor A against the model with factor A but without the
interaction. As in the one-way case, not all the parameters are identifiable and constraints must be imposed; the classical choices are:
1. $\mu = 0$, $\forall i,\ \alpha_i = 0$, $\forall j,\ \beta_j = 0$
2. Reference level constraints:
$$\alpha_1 = 0, \quad \beta_1 = 0, \quad \forall i,\ \gamma_{i1} = 0, \quad \forall j,\ \gamma_{1j} = 0$$
3. Sum constraints:
$$\sum_i \alpha_i = 0, \quad \sum_j \beta_j = 0, \quad \forall i,\ \sum_j \gamma_{ij} = 0, \quad \forall j,\ \sum_i \gamma_{ij} = 0$$
8.2.2 Example
We reexamine in more detail the ozone dataset of Worked Example 7.1 (p. 133).
Here we will analyse the relationship between the maximum daily ozone con-
centration (in µg/m³) and wind direction classed into sectors: North, South,
East, West and precipitation classed into two categories: Dry and Rainy. We
have at our disposal 112 pieces of data collected during the summer of 2001
in Rennes (France). Variable A admits I = 4 levels and variable B has J = 2
levels.
8.2.3 Steps
1. Read the data.
2. Represent the data.
3. Choose the model.
4. Interpret the coefficients.
> boxplot(maxO3~wind*rain,data=ozone)
[Figure 8.3: Boxplot of maxO3 according to the confrontation of the levels of the variables wind and rain; y-axis: maxO3.]
It must be noted that the ozone level is generally higher during dry weather
than during rainy weather. Another way of graphically representing the in-
teraction is as follows (see Figure 8.4):
> par(mfrow=c(1,2))
> with(ozone,interaction.plot(wind,rain,maxO3))
> with(ozone,interaction.plot(rain,wind,maxO3))
[Figure 8.4: Interaction graph depicting wind (left) and rain (right) on the x-axis; y-axis: mean of maxO3.]
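The command producing the following table is not reproduced; a plausible call (object name assumed):
> reg.aov2 <- lm(maxO3~wind+rain+wind:rain, data=ozone)
> anova(reg.aov2)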
Response: maxO3
Df Sum Sq Mean Sq F value Pr(>F)
wind 3 7586 2528.7 4.1454 0.00809 **
rain 1 16159 16159.4 26.4910 1.257e-06 ***
wind:rain 3 1006 335.5 0.5500 0.64929
Residuals 104 63440 610.0
The first column indicates each factor's associated degrees of freedom, the second the sum of the squares, and the third the mean square (sum of the squares
divided by the degrees of freedom). The fourth column features the observed
value of the test statistic. The fifth column (Pr(>F)) contains the p-value,
that is to say, the probability that the test statistic under H0 will exceed the
estimated value. The p-value (0.65) is more than 5%, thus we retain H0 and
conclude that the interaction is not significant. We therefore consider that
the model has no interaction (8.2).
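A plausible command for the table of the model without interaction:
> anova(lm(maxO3~wind+rain, data=ozone))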
Response: maxO3
Df Sum Sq Mean Sq F value Pr(>F)
wind 3 7586 2528.7 4.1984 0.007514 **
rain 1 16159 16159.4 26.8295 1.052e-06 ***
Residuals 107 64446 602.3
The two factors are significant and there is therefore both a wind effect and a
rain effect on the maximum daily ozone level. We shall therefore retain this
model and analyse the coefficients.
4. Interpreting the coefficients:
The calculations depend on the constraints used. By default, the constraints used by R are $\alpha_1 = 0$ and $\beta_1 = 0$. However, here there is no reason to fix a reference level. We therefore choose the constraints $\sum_i \alpha_i = 0$ and $\sum_j \beta_j = 0$, which yields
> summary(lm(maxO3~C(wind,sum)+C(rain,sum),data=ozone))
Call:
lm(formula = maxO3 ~ C(wind, sum) + C(rain, sum), data = ozone)
Residuals:
Min 1Q Median 3Q Max
-42.618 -15.664 -3.712 8.295 67.990
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 90.135 2.883 31.260 < 2e-16 ***
C(wind, sum)1 7.786 6.164 1.263 0.2093
C(wind, sum)2 -8.547 4.152 -2.059 0.0420 *
C(wind, sum)3 5.685 4.694 1.211 0.2285
C(rain, sum)1 12.798 2.471 5.180 1.05e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Amongst other things, we obtain a Coefficient matrix with four columns for
each parameter (row). These include its estimation (column Estimate), its
estimated standard deviation (Std. Error), the observed value of the given
test statistic, and finally the p-value (Pr(>|t|)) which yields under H0 the
probability of exceeding the estimated value.
> summary(lm(maxO3~wind+rain+wind:rain,data=ozone))
> summary(lm(maxO3~wind+rain,data=ozone)) #without interaction
Figure 8.5
Illustration of the models (8.4), (8.5) and (8.6).
The more general model (8.4), which we refer to as the complete model,
is illustrated on the left in Figure 8.5. In this case, the slopes and intercepts
are assumed to be different for each level of Z.
A first simplification of this model is to suppose that variable X intervenes
in the same manner, whatever the level of variable Z. It amounts to assuming
that the slopes are identical, but that the intercepts are not. This is the model
illustrated in the centre of Figure 8.5, which is written as
The choice of model (8.4), (8.5) or (8.6) depends on the problem addressed.
We advise starting with the most general model (8.4) and then, if the slopes
are the same, we turn to model (8.5); if the intercepts are the same, we turn
to model (8.6). As the models are nested, it is possible to test one model
against another.
Once the model is chosen, the residuals must be analysed. This analysis
is essential as it is used to check the individual fit of the model (outliers) and
the global fit, for example by checking that there is no structure.
8.3.2 Example
We are interested in the balance of flavours between different ciders and, more
precisely, the relationship between the sweet and bitter flavours according
to the type of cider (dry, semi-dry or sweet). The flavour evaluations were
provided by a tasting panel (mean marks, from 1 to 10, by 24 panel members)
for each of 50 dry ciders, 30 semi-dry ciders and 10 sweet ciders.
8.3.3 Steps
1. Read the data.
2. Represent the data.
> plot(Sweetness~Bitterness,pch=as.numeric(Type),data=cider)
> legend("topright",levels(cider$Type),pch=1:nlevels(cider$Type))
[Figure 8.6: Representation of the scatterplot; x-axis: Bitterness, y-axis: Sweetness.]
It is also possible to represent the data using the xyplot function from the
lattice package (see Figure 8.7):
> library(lattice)
> xyplot(Sweetness~Bitterness|Type,data=cider)
To have a more precise idea of the effect of the qualitative variable, we could
represent the regression line for each category but it is difficult to get an
idea of the most appropriate model simply by looking at these graphs. It is
therefore preferable to analyse the models.
3. Choosing the model:
The complete model defined by (8.4) is adjusted using the lm (linear model)
function:
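The model definitions are omitted in this extract. A plausible reconstruction (the name global is confirmed further down; the formula of interceptU, the sub-model with a common intercept, is an assumption inferred from the anova call below):
> global <- lm(Sweetness~Type+Bitterness+Type:Bitterness, data=cider)
> interceptU <- lm(Sweetness~Type:Bitterness, data=cider)  # assumed: common intercept, one slope per Type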
[Figure 8.7: Scatterplots according to the categories of the variable Type (Dry, Semi-dry, Sweet); x-axis: Bitterness, y-axis: Sweetness.]
> anova(interceptU,global)
and this influence is the same for all types of cider (the slope is the same but
the intercepts are different).
> library(car)
> global <- lm(Sweetness~Type+Bitterness+Type:Bitterness,
data=cider)
> Anova(global, type="III")
Anova Table (Type III tests)
Response: Sweetness
Sum Sq Df F value Pr(>F)
(Intercept) 144.139 1 523.7375 < 2.2e-16 ***
Type 2.743 2 4.9830 0.009015 **
Bitterness 6.879 1 24.9968 3.098e-06 ***
Type:Bitterness 0.901 2 1.6374 0.200634
Residuals 23.118 84
From these results, the effects whose p-values exceed 5% can be eliminated, starting with the highest (here the interaction Type:Bitterness).
Once the model has been chosen, it is possible to estimate the coefficients
of the model using the summary function:
> global <- lm(Sweetness~Type+Bitterness,data=cider)
> summary(global)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.30248 0.21230 29.687 < 2e-16 ***
TypeSemi-dry 0.41723 0.12293 3.394 0.00104 **
TypeSweet 0.83126 0.18797 4.422 2.84e-05 ***
Bitterness -0.31928 0.04423 -7.218 1.96e-10 ***
The coefficients are estimated using the constraint that the first coefficient (that of the category Dry) is equal to 0. If we want to use the constraint $\sum_i \alpha_i = 0$, we use C(Type,sum) as in the analysis of variance.
$$P(Y = 1 | X = x) = \frac{\pi_1 f_1(x)}{\pi_0 f_0(x) + \pi_1 f_1(x)} \qquad (9.1)$$
where $f_0(x)$ and $f_1(x)$ stand for the densities of $X$ knowing that $Y = 0$ and $Y = 1$, and $\pi_0 = P(Y = 0)$ and $\pi_1 = P(Y = 1)$ represent the prior probabilities of belonging to classes 0 and 1, respectively. These two probabilities must
be fixed by the user or estimated from the sample. In order to calculate the
posterior probabilities P(Y = 0|X = x) and P(Y = 1|X = x), the discrimi-
nant analysis models the distributions of X knowing that Y = j (j = 0, 1) by
normal distributions. More specifically, we make the assumption
$$X | Y = 0 \sim \mathcal{N}(\mu_0, \Sigma), \qquad X | Y = 1 \sim \mathcal{N}(\mu_1, \Sigma).$$
variances (for further details, see McLachlan (2004), Chapter 18). For an
individual $x = (x_1, \ldots, x_p)$, the canonical variable defines a score function $S(x) = w_1 x_1 + \cdots + w_p x_p$. An individual $x$ is allocated to a given group by comparing the value of the score $S(x)$ with a threshold value $s$.
Remark
The two ways of presenting LDA do not make it possible to handle qualitative explanatory variables directly. However, as we shall see in the example, a disjunctive encoding
of the explanatory variables can be used to conduct a linear discriminant anal-
ysis in the presence of such variables. Each category of the variable is there-
fore processed as a quantitative variable with values 0 or 1. In order to avoid
problems of collinearity between the different categories, the column which is associated with the factor's first category is not taken into account in the
model. When a discriminant analysis is conducted with one or more qualita-
tive explanatory variables, the assumption of normality is clearly not verified:
It is therefore important to be wary when interpreting posterior probabilities.
9.1.2 Example
We would like to explain the propensity to snore from six explanatory vari-
ables. The file snore.txt contains a sample of 100 patients. The variables
considered in the sample are
age: of the individual in years
weight: of the individual in kilograms (kg)
size: height of the individual in centimeters (cm)
alcohol: number of units drunk per day (each equal to one glass of red
wine)
sex: of the individual (W = woman, M = man)
snore: propensity to snore (Y = snores, N = does not snore)
tobacco: smoking behaviour (Y = smoker, N = non-smoker)
The aim of this study is to try to explain snoring (variable snore) using the
six other variables presented above. The following table presents an extract
from the dataset:
age weight size alcohol sex snore tobacco
1 47 71 158 0 M N Y
2 56 58 164 7 M Y N
...
99 68 108 194 0 W Y N
100 50 109 195 8 M Y Y
9.1.3 Steps
1. Read the data.
2. Construct the model.
3. Estimate the classification error rate.
4. Make a prediction.
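The import command is not shown; a plausible sketch (the file name snore.txt is given above; character columns are read as factors):
> snore_data <- read.table("snore.txt", header=TRUE)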
> summary(snore_data)
age weight size alcohol
Min. :23.00 Min. : 42.00 Min. :158.0 Min. : 0.00
1st Qu.:43.00 1st Qu.: 77.00 1st Qu.:166.0 1st Qu.: 0.00
Median :52.00 Median : 95.00 Median :186.0 Median : 2.00
Mean :52.27 Mean : 90.41 Mean :181.1 Mean : 2.95
3rd Qu.:62.25 3rd Qu.:107.00 3rd Qu.:194.0 3rd Qu.: 4.25
Max. :74.00 Max. :120.00 Max. :208.0 Max. :15.00
The model is specified using a formula of the type Y ~ X1 + X2.
The prior probabilities also need to be specified. If there are no prior assump-
tions about these two quantities, there are two possible strategies:
The probabilities are chosen to be equal, that is, 0 = 1 = 0.5;
The probabilities are estimated by the proportion of observations in each
group.
The choice of prior probabilities is specified by the prior argument of the lda
function. If nothing is specified, the second strategy is chosen by default. We
first write the model which takes all the explanatory variables into account:
> library(MASS)
> model <- lda(snore~.,data=snore_data)
> model
Call:
lda(snore ~ ., data = snore_data)
Group means:
age weight size alcohol sexW tobaccoY
N 50.26154 90.47692 180.9538 2.369231 0.3076923 0.6769231
Y 56.00000 90.28571 181.3714 4.028571 0.1428571 0.5714286
In the output, we find the prior probabilities of the model, the centre of gravity
for each of the two groups, and the coefficients of the canonical variable. As a
rule, the software returns the coefficients of the canonical variable so that the
within-class variance equals 1. It is difficult to find the important variables for
the discrimination simply by comparing the centres of gravity; it is often more
relevant to study the influence of the coefficients on the score. For example,
it can be seen that, when all the other variables are equal, a man will have a
score 0.5541 higher than a woman. In the same way, a ten-year age difference
between two individuals who otherwise share the same characteristics will
contribute to an evolution in the score of 0.5974.
No statistical test can be used to test the significance of the coefficients of the
canonical variable. Nevertheless here we can see that the variables weight
and size are close to 0. It may therefore be interesting to delete these two
variables in the model as follows:
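A plausible call for this reduced model (the object name is assumed):
> model2 <- lda(snore~age+alcohol+sex+tobacco, data=snore_data)
> model2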
Group means:
age alcohol sexW tobaccoY
N 50.26154 2.369231 0.3076923 0.6769231
Y 56.00000 4.028571 0.1428571 0.5714286
It can be seen that the coefficients of the canonical variable are only slightly
affected by the removal of these two variables.
To impose equal prior probabilities on the model, we use the prior argument:
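A plausible call (the object name is assumed):
> model3 <- lda(snore~., data=snore_data, prior=c(0.5,0.5))
> model3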
Group means:
age weight size alcohol sexW tobaccoY
N 50.26154 90.47692 180.9538 2.369231 0.3076923 0.6769231
Y 56.00000 90.28571 181.3714 4.028571 0.1428571 0.5714286
The modification of prior probabilities does not change the canonical variable.
This choice only influences the way new individuals are allocated to a given
group.
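3. Estimating the classification error rate: the commands are omitted in the extract; a plausible reconstruction (predict without newdata returns the classes fitted on the sample):
> pred <- predict(model)$class
> table(pred, snore_data$snore)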
pred N Y
N 53 22
Y 12 13
> sum(pred!=snore_data$snore)/nrow(snore_data)
[1] 0.34
The classification error rate is estimated in the same way for the other models
studied:
pred3 N Y
N 42 9
Y 23 26
> sum(pred3!=snore_data$snore)/nrow(snore_data)
[1] 0.32
In order to predict the labels of these four new individuals, we first collect
the new data in a data-frame with the same structure as the initial data table
(and in particular the same names for variables). We thus construct a new
matrix for the quantitative variables and another for the qualitative variables
before grouping them together in the same data-frame:
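A minimal sketch of this construction; all the values for the four new individuals below are hypothetical:
> new_quant <- data.frame(age=c(45,58,33,70), weight=c(70,95,60,110),     # hypothetical values
+                         size=c(172,185,168,192), alcohol=c(0,3,1,6))
> new_qual <- data.frame(sex=c("W","M","W","M"), tobacco=c("N","Y","N","Y"))  # hypothetical values
> new_d <- cbind(new_quant, new_qual)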
The predict function can be used to allocate each of these four individuals a
group:
> predict(model,newdata=new_d)
$class
[1] N N N Y
Levels: N Y
188 R for Statistics
$posterior
N Y
1 0.7957279 0.2042721
2 0.6095636 0.3904364
3 0.7325932 0.2674068
4 0.3532721 0.6467279
$x
LD1
1 -0.6238163
2 0.3246422
3 -0.2586916
4 1.4140108
$$X | Y = 0 \sim \mathcal{N}(\mu_0, \Sigma_0), \qquad X | Y = 1 \sim \mathcal{N}(\mu_1, \Sigma_1)$$
The qda function, available in the MASS package, can be used to conduct a
quadratic discriminant analysis in R. It is used in the same way as lda.
9.2.2 Example
Treatment for prostate cancer changes depending on whether or not the lym-
phatic nodes surrounding the prostate are affected. In order to avoid an inva-
sive investigative procedure (opening the abdominal cavity), a certain number
of variables are considered explanatory variables for the binary variable Y : if
Y = 0, the cancer has not reached the lymphatic system; if Y = 1, the can-
cer has reached the lymphatic system. The aim of this study is therefore to
explain and predict Y from the following variables:
Age of the patient at the time of diagnosis (age)
Serum acid phosphatase level (acid)
Result of an X-ray analysis, 0 = negative, 1 = positive (Xray)
Size of the tumour, 0 = small, 1 = large (size)
State of the tumour as determined by a biopsy, 0 = medium, 1 = serious
(grade)
Logarithm of the acidity level (log.acid)
The following table presents an extract from the dataset:
age acid Xray size grade Y log.acid
1 66 0.48 0 0 0 0 -0.7339692
2 68 0.56 0 0 0 0 -0.5798185
...
52 64 0.89 1 1 0 1 -0.1165338
53 68 1.26 1 1 1 1 0.2311117
9.2.3 Steps
1. Read the data.
2. Construct the model.
3. Choose the model.
4. Make a prediction.
The variables Xray, size, grade and Y must be converted into factors:
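A plausible sketch of the import and conversion (the file name prostate.txt appears later in the chapter):
> prostate <- read.table("prostate.txt", header=TRUE)
> for (v in c("Xray","size","grade","Y")) prostate[,v] <- factor(prostate[,v])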
> summary(prostate)
age acid Xray size grade Y log.acid
Min. :45.00 Min. :0.4000 0:38 0:26 0:33 0:33 Min. :-0.9163
1st Qu.:56.00 1st Qu.:0.5000 1:15 1:27 1:20 1:20 1st Qu.:-0.6931
Median :60.00 Median :0.6500 Median :-0.4308
Mean :59.38 Mean :0.6942 Mean :-0.4189
3rd Qu.:65.00 3rd Qu.:0.7800 3rd Qu.:-0.2485
Max. :68.00 Max. :1.8700 Max. : 0.6259
The model is specified using a formula of the type Y ~ X1 + X2.
> model_log.acid<-glm(Y~log.acid,data=prostate,family=binomial)
> summary(model_log.acid)
Call:
glm(formula = Y ~ log.acid, family = binomial, data = prostate)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.9802 -0.9095 -0.7265 1.1951 1.7302
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.404 0.509 0.794 0.4274
log.acid 2.245 1.040 2.159 0.0309 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The output yields a Coefficients matrix with four columns for each parame-
ter (row): estimations of parameters (column Estimate), estimated standard
deviations of the parameters (Std. Error), observed values of the test statis-
tic (z value), and p-values (Pr(>|z|)), which are the probability that, under
the null hypothesis, the test statistics will exceed the estimated values of the
parameters.
The coefficients $\beta_0$ and $\beta_1$ are estimated by the method of maximum likelihood. Here $\hat{\beta}_0 = 0.404$ and $\hat{\beta}_1 = 2.245$; we therefore obtain the model:
$$\log\frac{p(x)}{1 - p(x)} = 0.404 + 2.245\,x$$
or equivalently,
$$p(x) = \frac{\exp(0.404 + 2.245\,x)}{1 + \exp(0.404 + 2.245\,x)}$$
$$H_0 : \beta_1 = 0 \quad \text{against} \quad H_1 : \beta_1 \neq 0$$
[Figure 9.1: Estimated values of p(x) for the model model_log.acid; x-axis: log.acid.]
Note that
$$\hat{p}(x) = \hat{P}(Y = 1 | X = x) \begin{cases} \le 0.5 & \text{if } x \le -0.18 \\ > 0.5 & \text{if } x > -0.18 \end{cases}$$
For the values of $x$ exceeding $-0.18$, the model will thus predict the value $Y = 1$ (the cancer has reached the lymphatic system), whereas for values less than or equal to $-0.18$, it will predict $Y = 0$ (the cancer has not yet reached the lymphatic system).
It is of course possible to consider other logistic models. For example, the
model with only the variable size as explanatory variable:
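A plausible call for this model:
> model_size <- glm(Y~size, data=prostate, family=binomial)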
> model_size
Coefficients:
(Intercept) size1
-1.435 1.658
In this case, only the coefficient associated with the level 1 of the variable size
is estimated (the coefficient associated with the level 0 is by default taken to
be zero). We obtain
$$\log\frac{p(x)}{1 - p(x)} = \begin{cases} -1.435 & \text{if } x = 0 \\ -1.435 + 1.658 = 0.223 & \text{if } x = 1. \end{cases}$$
The model which includes all the explanatory variables can also be studied:
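A plausible call (the object name model is reused in the anova comparison below):
> model <- glm(Y~., data=prostate, family=binomial)
> model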
Coefficients:
(Intercept) age acid Xray1 size1
10.08672 -0.04289 -8.48006 2.06673 1.38415
grade1 log.acid
0.85376 9.60912
> anova(model_log.acid,model,test="Chisq")
Analysis of Deviance Table
Model 1: Y ~ log.acid
Model 2: Y ~ age + acid + Xray + size + grade + log.acid
Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1 51 64.813
2 46 44.768 5 20.044 0.001226 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
As the p-value of the test is 0.001226, the null hypothesis is rejected. Thus
the model model log.acid is rejected and at least one of the supplementary
variables of model is considered to be relevant.
The anova function was used here to compare two models. In addition, the step function can be used to choose a model with the help of a step-by-step procedure based on minimising the AIC (Akaike Information Criterion):
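A plausible call (the object name model_step is reused below):
> model_step <- step(model, direction="backward")
> model_step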
Coefficients:
(Intercept) acid Xray1 size1 log.acid
9.067 -9.862 2.093 1.591 10.410
In this example, the variables age and grade have been removed. In order to
use this backward method, it must first be ensured that the model used in
the argument for the step function contains a sufficient number of variables.
We could have used a more general model involving certain interactions as
the initial model for the procedure. The step function can also be used to
conduct forward or progressive (simultaneously forward and backward) step-
by-step procedures.
4. Making a prediction:
The logistic models constructed previously can be used for prediction. The
table below contains measurements from four new patients for the six explana-
tory variables:
Remember, here the variables Xray, size, and grade must be converted into
factors:
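A minimal sketch; all the values for the four new patients are hypothetical:
> new_prostate <- data.frame(age=c(51,56,60,65), acid=c(0.50,0.65,0.78,1.20),   # hypothetical values
+     Xray=factor(c(0,0,1,1)), size=factor(c(0,1,0,1)),
+     grade=factor(c(0,0,1,1)), log.acid=log(c(0.50,0.65,0.78,1.20)))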
The predict function is used to predict the probabilities P(Y = 1|X = x) for
each of these new individuals:
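A plausible call, using the hypothetical data-frame above:
> predict(model_step, newdata=new_prostate, type="response")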
For the first two individuals, the predicted probabilities are less than 0.5.
We therefore predict Y = 0, that is to say, the cancer has not reached the
lymphatic system, whereas for the last two individuals we predict that the
cancer has indeed reached the lymphatic system.
The predict function can also be used to estimate the model's misclassification
rate (proportion of errors made by the model when applied to the available
sample):
(a) First calculate the probabilities predicted for each individual from the
sample:
> prediction_prob <- predict(model_step,newdata=prostate,
type="response")
(b) Compare these probabilities to 0.5 in order to obtain the predicted label
for each individual:
> prediction_label <- as.numeric(prediction_prob>0.5)
(c) Draw up the contingency table with the predicted labels and the true
labels:
> table(prostate$Y,prediction_label)
prediction_label
0 1
0 28 5
1 6 14
We thus obtain the model's misclassification rate:
> error <- sum(prediction_label!=prostate$Y)/nrow(prostate)
> error
[1] 0.2075472
The rate obtained in this way is generally optimistic as the same sample is
used to construct the model and to estimate the misclassification rate. We
can obtain a more precise estimation using cross-validation methods. In order
to do so, we use the function cv.glm. First of all, the package boot must be
loaded and a cost function created which admits the observed values of Y as
well as the predicted probabilities as input:
> library(boot)
> cost <- function(Y_obs,prediction_prob)
+ return(mean(abs(Y_obs-prediction_prob)>0.5))
> cv.glm(prostate,model_step,cost)$delta[1]
1
0.2830189
Remember, here the variables Xray, size, grade and Y must be converted into factors: Data → Manage variables in active data set → Convert numeric variables to factors....
To check that the dataset has been imported successfully:
Statistics → Summaries → Active data set
2. Constructing the model:
Statistics → Fit models → Generalized linear model....
To write the model:
(a) Double-click on the response variable.
(b) Next choose the explanatory variables, separating them with a + and
a : when an interaction is required.
9.3.2 Example
As an illustration we shall study the example of treating prostate cancer which
is explained in detail in Worked Example 9.2. The objective remains the
same: to predict Y (has the cancer reached the lymphatic system?) using the
variables age, acid, Xray, size, grade and log.acid. The response variable
is qualitative and we will therefore construct a classification tree.
9.3.3 Steps
1. Read the data.
2. Construct and analyse the decision tree.
3. Choose the size of the tree.
4. Estimate the classification error rate.
Import the data from file prostate.txt and convert the variables Xray, size,
grade and Y into factors:
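As in Worked Example 9.2, a plausible sketch is:
> prostate <- read.table("prostate.txt", header=TRUE)
> for (v in c("Xray","size","grade","Y")) prostate[,v] <- factor(prostate[,v])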
> summary(prostate)
age acid Xray size grade Y
Min. :45.00 Min. :0.4000 0:38 0:26 0:33 0:33
1st Qu.:56.00 1st Qu.:0.5000 1:15 1:27 1:20 1:20
Median :60.00 Median :0.6500
Mean :59.38 Mean :0.6942
3rd Qu.:65.00 3rd Qu.:0.7800
Max. :68.00 Max. :1.8700
log.acid
Min. :-0.9163
1st Qu.:-0.6931
Median :-0.4308
Mean :-0.4189
3rd Qu.:-0.2485
Max. : 0.6259
> library(rpart)
> prostate.tree <- rpart(Y~.,data=prostate)
> prostate.tree
n= 53
Each node of the tree corresponds to a group of individuals, for which R yields the node number (node), for example 3), followed by the splitting rule (split), the number of individuals (n), the number of misclassified individuals (loss), the predicted class (yval) and the class proportions (yprob).
Thus in the case of the first node (called the root of the tree), it can be
seen that there are 53 observations (all the individuals), and that for 62.3%
of those individuals, cancer did not spread to the lymphatic system (Y = 0).
This first node is segmented using the variable acid. The individuals are
divided into two sub-groups: the first (node 2, acid < 0.665) is composed of
28 individuals, 85.7% of which correspond to Y = 0 and 14.3% to Y = 1. The
second (node 3, acid >= 0.665) is composed of 25 individuals (36% Y = 0,
64% Y = 1). Node 2 is not segmented whereas node 3 is segmented according
to the variable Xray, which yields nodes 6 and 7.
When a node can no longer be segmented, we end with a leaf.
Thus, in our example we obtain three leaves which correspond to nodes 2, 6
and 7, and which can be identified in the script by the symbol *. For a pure
leaf, we obtain a group of individuals from just one class (example: in leaf 7,
100% of the individuals have a value of Y = 1).
Once the tree has been constructed, the class assigned to each leaf must of
course be defined. For a pure leaf, the answer is obvious. If the leaf is not pure,
a simple rule is to decide to assign it the class which is the most represented
in the leaf. Thus, if we examine leaf 6 (56.25% Y = 0, 43.75% Y = 1), we can
assume the following attribution rule: if acid >= 0.665 and Xray = 0, then
Y = 0.
The plot function is used to obtain the tree which represents this segmentation
graphically (Figure 9.2). A number of graphical parameters are used to obtain
a tree which is easier to read:
> plot(prostate.tree,branch=0.2,compress=T,margin=0.1,
main="Prostate.tree")
> text(prostate.tree,fancy=T,use.n=T,pretty=0,all=T)
[Figure 9.2: Decision tree for predicting Y (prostate dataset). Root (33/20) split on acid; acid < 0.665: leaf of class 0 (24/4); acid >= 0.665 (9/16) split on Xray; Xray = 0: leaf of class 0 (9/7); Xray = 1: leaf of class 1 (0/9).]
To analyse a tree, it is possible to read all the results using the summary
function. The advantage of this summary is that it lists the variables that
compete with those that are chosen. For each node, the summary therefore
uses Primary splits to return the primary variables which could have been
chosen (as second, third or fourth choice depending on the maxcompete argu-
ment).
Here is an extract of the summary:
Call:
rpart(formula = Y ~ ., data = prostate, maxcompete = 2)
n= 53
At the first node level, the two competing variables are log.acid and Xray,
and the substitute variable is log.acid. It is clear that for the trees, the
choice of the level at which the cuts of continuous variables are made is de-
cided according to rank, which explains why the variables acid and log.acid
provide exactly the same information. The surrogate splits can be used to
handle missing values.
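The tree of Figure 9.3 is grown deeper by lowering the minimum node size; a plausible call (the minsplit argument is passed on to rpart.control):
> prostate.tree2 <- rpart(Y~., data=prostate, minsplit=5)
> plot(prostate.tree2, branch=0.2, compress=TRUE, margin=0.1, main="Prostate.tree2")
> text(prostate.tree2, pretty=0)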
[Figure 9.3: Tree built using minsplit=5, with successive splits on acid, age and Xray.]
[Figure 9.4: Choosing the size of the tree using plotcp; x-axis: cp, y-axis: X-val relative error.]
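3. Choosing the size of the tree: the commands are omitted in the extract; a plausible reconstruction, selecting the complexity parameter that minimises the cross-validated error and pruning the tree (the name prostate.fin is reused below):
> plotcp(prostate.tree2)
> cp.opt <- prostate.tree2$cptable[which.min(prostate.tree2$cptable[,"xerror"]),"CP"]
> prostate.fin <- prune(prostate.tree2, cp=cp.opt)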
It must be noted here that only the variables acid and Xray are involved in
predicting Y (Figure 9.5). Misclassifications are now equal to 4+3+0+0 = 7.
4. Estimating the classification error rate:
To estimate the rate of misclassification of the final tree, we can read the
number of classification errors on the graph in Figure 9.5, or calculate it
directly using the object prostate.fin:
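A plausible sketch of this calculation:
> pred.fin <- predict(prostate.fin, type="class")
> sum(pred.fin != prostate$Y) / nrow(prostate)   # 7/53, about 13.2%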
[Figure 9.5: Final tree obtained using cross-validation. Root (33/20) split on acid; acid < 0.665: leaf of class 0 (24/4); acid >= 0.665 (9/16) split on Xray; Xray = 1: leaf of class 1 (0/9); Xray = 0 (9/7) split again on acid: acid >= 0.705 leaf of class 0 (9/3), acid < 0.705 leaf of class 1 (0/4).]
Here the error rate of around 13.2% is calculated from the data which was used
to construct the tree and therefore underestimates the real error rate even if
the tree was chosen by cross-validation. To estimate the true rate, a forecast
is made using cross-validation (the aforementioned leave-one-out method):
The estimated error rate is 20.8%. It can be seen that the rate is indeed greater
than the misclassification rate estimated without cross-validation (around
13.2%).
5. Predicting Y for new individuals:
The above decision tree can be used in the context of forecasting. The follow-
ing table contains measurements from four new patients for the six explanatory
variables:
To obtain the values of Y predicted by the decision tree for these four new
individuals, we first bring together the new data in a data-frame with the
same structure as the initial data table (and especially the same names for
the variables):
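A plausible call, reusing the hypothetical data-frame new_prostate built in Worked Example 9.2:
> predict(prostate.fin, newdata=new_prostate, type="class")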
For the first two individuals, the tree predicts Y = 0, which means that
the cancer has not yet reached the lymphatic system, whereas for the last
two individuals, the tree predicts Y = 1, that is, the cancer has reached the
lymphatic system.
More precisely, for these individuals, we can also examine the probability of
predicting Y = 1 (rather than Y directly):
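A plausible call (without type="class", predict returns the matrix of class probabilities):
> predict(prostate.fin, newdata=new_prostate)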
We can consider that the risk of being mistaken when we predict Y = 0 for
the first two individuals is rather low as, according to the tree, there is less
than a 25% risk that the cancer has reached the lymphatic system.
10.1.2 Example
The dataset involves the results of decathlon events at two athletics compe-
titions which took place one month apart: the Athens Olympic Games (23rd
and 24th August 2004) and Decastar (25th and 26th September 2004). On
the first day, the athletes compete in five events (100 m, long jump, shot put,
high jump, 400 m) and in the remaining events on the second day (110 m
hurdles, discus, pole vault, javelin, 1500 m). In Table 10.1, we have grouped
together the performances for each athlete at each of the ten events, their
overall rank, their final number of points and the competition in which they
participated.
The aim of conducting PCA on this dataset is to determine profiles for
similar performances: are there any athletes who are better at endurance
events or those requiring short bursts of energy? And are some of the events
similar? If an athlete performs well in one event, will he necessarily perform
well in another?
TABLE 10.1
Athletes' Performances in Ten Decathlon Events

           100m Long  Shot High  400m  110m  Disc Pole  Jave  1500m Rank Points Comp
Sebrle    10.85 7.84 16.36 2.12 48.36 14.05 48.72 5.00 70.52 280.01    1   8893   OG
Clay      10.44 7.96 15.23 2.06 49.19 14.13 50.11 4.90 69.71 282.00    2   8820   OG
Karpov    10.50 7.81 15.93 2.09 46.81 13.97 51.65 4.60 55.54 278.11    3   8725   OG
Macey     10.89 7.47 15.73 2.15 48.97 14.56 48.34 4.40 58.46 265.42    4   8414   OG
Warners   10.62 7.74 14.48 1.97 47.97 14.01 43.73 4.90 55.39 278.05    5   8343   OG
Zsivoczky 10.91 7.14 15.31 2.12 49.40 14.95 45.62 4.70 63.45 269.54    6   8287   OG
Hernu     10.97 7.19 14.65 2.03 48.73 14.25 44.72 4.80 57.76 264.35    7   8237   OG
...
SEBRLE    11.04 7.58 14.83 2.07 49.81 14.69 43.75 5.02 63.19 291.70    1   8217 Deca
CLAY      10.76 7.40 14.26 1.86 49.37 14.05 50.72 4.92 60.15 301.50    2   8122 Deca
...
10.1.3 Steps
1. Read the data.
2. Choose the active individuals and variables.
3. Choose if the variables need to be standardised.
The choice of active variables is very important: these variables, and these
variables alone, will be taken into account when constructing the PCA dimen-
sions. In other words, these are the only variables which are used to calculate
the distances between individuals. As supplementary variables, we can add
the quantitative variables number of points and rank, and the qualitative
variable competition. Supplementary variables are very useful in helping to
interpret the dimensions.
We can also choose the individuals which participate in constructing the di-
mensions, known as active individuals. Here, as is often the case, all the
individuals are considered active.
3. Choosing if the variables need to be standardised:
We can centre and standardise the variables or simply centre them. For the
dataset processed in this example, we do not have the choice. Standardisation
is essential as the variables are of different units. When the variables are of the
same units, both solutions are possible and lead to two separate analyses. This
decision is therefore crucial. Standardisation means that every variable has
the same importance. Not standardising the data therefore means that each
variable has a weight corresponding to its standard deviation. This choice is
all the more important as the variables have different variances.
To conduct PCA using the FactoMineR package (see Appendix A.4, p. 269), the package must first be installed and loaded (see Section 1.6). We then use the PCA function.
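The call itself is not reproduced in the extract; a plausible sketch, assuming the ten event variables are in columns 1 to 10, Rank and Points in columns 11 and 12, and the competition in column 13:
> library(FactoMineR)
> res.pca <- PCA(decath, quanti.sup=11:12, quali.sup=13)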
> barplot(res.pca$eig[,2],names=paste("Dim",1:nrow(res.pca$eig)))
[Figure 10.1: Percentage of variance associated with each dimension of the PCA.]
We are therefore looking for a visible decrease or gap in the bar chart. Here,
we can analyse the first four dimensions. Indeed, after four dimensions, we
remark a regular decrease in the percentage of variance and can see a small
jump between dimensions 4 and 5. To know the percentage of variance
explained by the first four dimensions, we use the following line of code:
> round(res.pca$eig[1:4,],2)
eigenvalue percentage of cumulative percentage
variance of variance
comp 1 3.27 32.72 32.72
comp 2 1.74 17.37 50.09
comp 3 1.40 14.05 64.14
comp 4 1.06 10.57 74.71
The first two dimensions express 50% of the total percentage of variance, that
is to say, 50% of the information in the data table is contained within the first
two dimensions. This means that the diversity of the performance profiles
cannot be summarised by two dimensions. Here, the first four dimensions are
sufficient to explain 75% of the total variance.
5. Analysing the results:
By default, the PCA function yields the graph of individuals (Figure 10.2)
and the graph of variables for the first two dimensions (Figure 10.3). The
supplementary qualitative variable is represented on the graph of individu-
als. Each category of the qualitative variable competition is represented at
the barycentre of the individuals which take this category. The supplemen-
tary quantitative variables are represented on the graph of variables. The
correlation between these variables and the dimensions can be interpreted.
[Figure 10.2: Decathlon PCA: graph of individuals (results of Decastar shown in capitals); axes: Dim 1 (32.72%), Dim 2 (17.37%).]
[Figure 10.3: Decathlon PCA: graph of variables; axes: Dim 1 (32.72%), Dim 2 (17.37%).]
Of course, the number of points was not used in the automatic
summary, as this variable is purely illustrative.
We can interpret the position of the barycentres of the competition variable on
the graph of individuals. The Olympics category has a positive coordinate on
dimension 1 whereas the Decastar category has a negative coordinate. We can
therefore consider that, on average, athletes perform better at the Olympics
than at Decastar.
Dimension 2 is positively correlated with the endurance variables (1500 m, 400 m) and the power variables (discus, shot put). This means that dimension 2 opposes enduring athletes (at the bottom, since their times for the 1500 m and 400 m are small) to powerful athletes (at the top). It can be noted that both the categories
of the qualitative variable competition have null coordinates on dimension
2, showing that overall there is no evolution between the Olympics and the
Decastar from the second dimension point of view: this means that the ath-
letes improved their overall performance (see dimension 1) but their profile
did not change.
These interpretations can be refined using the numerical results. The results for the individuals are contained in the object res.pca$ind:
> round(cbind(res.pca$ind$coord[,1:3],res.pca$ind$cos2[,1:3],
res.pca$ind$contrib[,1:3]), digits=2)
Dim.1 Dim.2 Dim.3 Dim.1 Dim.2 Dim.3 Dim.1 Dim.2 Dim.3
Sebrle 4.04 1.37 -0.29 0.70 0.08 0.00 12.16 2.62 0.15
Clay 3.92 0.84 0.23 0.71 0.03 0.00 11.45 0.98 0.09
Karpov 4.62 0.04 -0.04 0.85 0.00 0.00 15.91 0.00 0.00
Macey 2.23 1.04 -1.86 0.42 0.09 0.29 3.72 1.52 6.03
...
The list res.pca$var yields similar results for the variables: the coordinates,
the squared cosines and the contributions of the variables. Loadings can be
obtained by dividing the coordinates of the variables on a dimension by the
square root eigenvalue of the corresponding dimension:
> loadings <- sweep(res.pca$var$coord,2,sqrt(res.pca$eig[1:5,1]),FUN="/")
We were only interested in the results of the first two dimensions, but it is
possible to construct graphs of individuals and variables on dimensions 3 and
4:
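A plausible pair of calls:
> plot(res.pca, choix="ind", axes=3:4)
> plot(res.pca, choix="var", axes=3:4)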
To save a graph in pdf format, for example, we use the argument new.plot=F:
> pdf("mypath/mygraph.pdf")
> plot(res.pca, choix="var", habillage=13, axes=3:4, new.plot=F)
> dev.off()
> dimdesc(res.pca)
$Dim.1
$Dim.1$quanti
correlation P-value
Points 0.9561543 2.099191e-22
Long.jump 0.7418997 2.849886e-08
Shot.put 0.6225026 1.388321e-05
High.jump 0.5719453 9.362285e-05
Discus 0.5524665 1.802220e-04
Rank -0.6705104 1.616348e-06
X400m -0.6796099 1.028175e-06
X110m.hurdle -0.7462453 2.136962e-08
X100m -0.7747198 2.778467e-09
$Dim.2
$Dim.2$quanti
correlation P-value
Discus 0.6063134 2.650745e-05
Shot.put 0.5983033 3.603567e-05
X400m 0.5694378 1.020941e-04
X1500m 0.4742238 1.734405e-03
This function becomes very useful when there are a lot of variables. Here
we can see that the first dimension is primarily linked to the variable Points
(correlation coefficient of 0.96), and then to the variable 100m (negative cor-
relation of 0.77), etc. The second dimension is principally described by the
quantitative variables Discus and Shot put. None of the qualitative variables
can be used to characterise dimensions 1 and 2 with a confidence level of 95%.
The confidence level can be changed (proba = 0.2) and, for the qualitative
variables, yields
$Dim.1$category
Estimate P-value
OG 0.4393744 0.1552515
Decastar -0.4393744 0.1552515
To situate each athlete's performance in each event, we can examine the standardised data:
> round(scale(decath[,1:12]),2)
100m Long Shot High 400m ...
Sebrle -0.56 1.83 2.28 1.61 -1.09 ...
Clay -2.12 2.21 0.91 0.94 -0.37 ...
Karpov -1.89 1.74 1.76 1.27 -2.43 ...
Macey -0.41 0.66 1.52 1.95 -0.56 ...
Warners -1.44 1.52 0.00 -0.08 -1.43 ...
...
For example, Sebrle throws the shot put further than average (positive standardised value), and somewhat remarkably so (an extreme value, as it is greater than 2).
In the same way we can calculate the matrix of the correlations to know the
exact correlation coefficient between the variables rather than the approxima-
tions provided by the graph:
> round(cor(decath[,1:12]),2)
100m Long Shot High 400m ...
100m 1.00 -0.60 -0.36 -0.25 0.52 ...
Long -0.60 1.00 0.18 0.29 -0.60 ...
Shot -0.36 0.18 1.00 0.49 -0.14 ...
High -0.25 0.29 0.49 1.00 -0.19 ...
400m 0.52 -0.60 -0.14 -0.19 1.00 ...
...
Figure 10.4
PCA main window in the FactoMineR menu.
(to hide the individuals, check ind under Hide some elements); it is also possible
to omit the labels for the individuals (Label for the active individuals).
The individuals can be coloured according to a qualitative variable (Coloring
for individuals: choose the qualitative variable).
The window for the different output options can be used to visualise the
different results (eigenvalues, individuals, variables, automatic description of
the dimensions). All the results can also be exported into a csv file (which can be opened in a spreadsheet such as Excel, for example).
Figure 10.5
PCA graphical options window.
> plotellipses(res.pca,keepvar=13)
[Figure 10.6: Confidence ellipses for the categories of a qualitative variable in PCA; axes: Dim 1 (32.72%), Dim 2 (17.37%).]
10.2.2 Example
The dataset represents the number of students in French universities by subject (or discipline), by level and by sex for the academic year 2007–2008. This is a table confronting the two qualitative variables Discipline and Level-sex. In the rows it features the ten disciplines offered by the university and in the columns the combinations of the variables level (bachelors, masters and PhD) and sex (male or female). The CA is thus applied between
one qualitative variable Discipline and a qualitative variable Level-sex cor-
responding to the confrontation of these two variables. This situation occurs
frequently in correspondence analysis. Furthermore, for discipline, we also
have the total number of students by level, by sex and an overall total (see
Table 10.2).
The aim of this study is to have an overall image of the university. Which
disciplines have the same student profiles? Which disciplines are favoured by
women (and men respectively)? Which disciplines tend to have the longest
courses of study?
10.2.3 Steps
1. Read the data.
2. Choose the active rows and columns.
3. Conduct the CA.
4. Choose the number of dimensions.
TABLE 10.2
University Datafile
Bachelors Masters ... Sum
F M F M
Law, Political Science 69373 37317 42371 21693 ... 179125
Social Science, Economics, Management 38387 37157 29466 26929 ... 136474
Economic and Social Administration 18574 12388 4183 2884 ... 38029
Literature, Linguistics, Arts 48691 17850 17672 5853 ... 96998
Languages 62736 21291 13186 3874 ... 103833
Social Sciences and Humanities 94346 41050 43016 20447 ... 213618
Joint Humanities 1779 726 2356 811 ... 5700
Mathematics, Physics 22559 54861 17078 48293 ... 158689
Life Science 24318 15004 11090 8457 ... 69742
Sport Science 8248 17253 1963 4172 ... 32152
> library(FactoMineR)
> res.ca <- CA(University, col.sup=7:12)
> barplot(res.ca$eig[,2],names=paste("Dim",1:nrow(res.ca$eig)))
[Figure 10.7: Percentage of variance associated with each dimension of the CA.]
We can also directly examine the percentages of variance associated with each
dimension using the following command:
> round(res.ca$eig,3)
eigenvalue percentage of cumulative percentage
variance of variance
The percentages of variance associated with the first dimensions are high, and
we can simply describe the first three dimensions, as confirmed by the jump
seen in Figure 10.7. The first three dimensions express nearly 97% of the
total variance: In other words, 97% of the information in the data table is
summarised by the first three dimensions.
5. Analysing the results:
By default, the CA function yields a graph simultaneously representing (active
and illustrative) rows and columns on the first two dimensions.
[Figure 10.8: CA on university data; axes: Dim 1 (70.72%), Dim 2 (15.51%).]
In Figure 10.8 we can view the major tendencies emanating from the analysis. If the graph
contains too many points and becomes difficult to read, we can, for example,
represent the rows alone using the invisible argument of the plot.CA function:
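A plausible call:
> plot(res.ca, invisible=c("col","col.sup"))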
First of all it must be noted that, by design, there can be no size effect in
CA as each row (and column respectively) is divided by its margin. Thus, in
the example, we do not see on one side the disciplines which attract a lot of
students and on the other those which attract the fewest.
We can also examine proximities between the disciplines. Two disciplines
are considered similar if they have the same profiles (they attract students
of the same sex and for the same course length). For example, the graph
shows that the disciplines Languages and Literature, Linguistics, Arts
mainly attract women at Bachelors level. The supplementary columns can
help in interpreting the CA graph. Thus, the first dimension opposes men and women, whereas the second dimension orders the levels from Bachelors
(at the bottom) to PhD (at the top). The disciplines on the left of the graph
(or right, respectively) are mainly studied by women (or men, respectively):
literary disciplines are towards the left of the graph (and therefore studied
mainly by women) and scientific disciplines are towards the right of the graph
(and thus studied by men). The disciplines towards the bottom of the graph
are those with short courses (Bachelor level over-represented by social and
economic administration and sport, for example), whereas the disciplines to-
wards the top of the graph have longer courses (PhD level over-represented
for biological and life sciences, for example). However, it is not always easy
to interpret the dimensions of the CA. In this case, we focus on the proximity
between the categories.
To refine these interpretations, we can go on to examine all the numerical
results. The numerical results of the rows are contained within the object
res.ca$row. We thus obtain a table with the row coordinates on the dimen-
sions, a table with the cosines squared (which represent the quality of the
projection of a row category), and a table with the contributions of the rows
to the construction of the dimensions (not given here):
> round(cbind(res.ca$row$coord[,1:3],res.ca$row$cos2[,1:3]),2)
Dim 1 Dim 2 Dim 3 Dim 1 Dim 2 Dim 3
Law, political science -0.10 0.07 -0.13 0.30 0.13 0.55
Social sc., Eco., Management 0.18 -0.02 -0.20 0.46 0.00 0.52
Economic and Social Admin. -0.19 -0.37 0.01 0.20 0.80 0.00
Literature, Ling., Arts -0.32 0.05 0.08 0.91 0.02 0.06
Languages -0.45 -0.18 0.09 0.79 0.13 0.03
Social sc. and Humanities -0.19 0.08 0.01 0.84 0.15 0.00
Joint humanities -0.13 0.28 -0.58 0.04 0.18 0.77
Mathematics, Physics 0.67 0.01 0.08 0.98 0.00 0.01
The numerical results for the columns are obtained in the same way from the object res.ca$col:
> round(cbind(res.ca$col$coord[,1:3],res.ca$col$cos2[,1:3],
res.ca$col$contrib[,1:3]),2)
Dim 1 Dim 2 Dim 3 Dim 1 Dim 2 Dim 3 Dim 1 Dim 2 Dim 3
Bachel.F -0.35 -0.04 0.04 0.96 0.01 0.01 39.72 2.27 3.65
Bachel.M 0.23 -0.20 0.03 0.55 0.39 0.01 11.51 37.49 1.21
Masters.F -0.11 0.17 -0.21 0.14 0.33 0.49 1.99 20.90 43.75
Masters.M 0.58 0.06 -0.07 0.95 0.01 0.01 40.43 1.97 3.26
PhD.F -0.06 0.43 0.39 0.01 0.49 0.42 0.08 21.02 25.41
PhD.M 0.47 0.36 0.35 0.46 0.26 0.26 6.27 16.36 22.71
We can then examine the results concerning the supplementary elements con-
tained within the object res.ca$col.sup:
> round(cbind(res.ca$col.sup$coord[,1:3],
res.ca$col.sup$cos2[,1:3]),2)
Dim 1 Dim 2 Dim 3 Dim 1 Dim 2 Dim 3
Sum.F -0.26 0.05 -0.02 0.44 0.02 0.00
Sum.M 0.37 -0.07 0.02 0.60 0.02 0.00
Bachelors -0.12 -0.10 0.04 0.13 0.09 0.01
Masters 0.19 0.12 -0.15 0.23 0.10 0.13
PhD 0.22 0.39 0.37 0.11 0.35 0.32
Sum 0.00 0.00 0.00 0.00 0.00 0.00
As expected, the coordinates of the total are at the barycentre of the cloud.
The cloud's centre of gravity corresponds to the mean profile. This shows
that if the position of a Level-sex combination is close to the centre of the
cloud, this combination has the same discipline profile as that of the students
overall.
We can also construct the graph for dimensions 3 and 4:
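A plausible call:
> plot(res.ca, axes=3:4)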
To compare the texts of two authors, for example, we can list all the words used in the texts and then draw up a table with the texts in
the rows and the words in the columns. Within a cell, we specify the number
of times a word has been used in a text. Applying a CA to this kind of table
makes it possible to compare the two authors and to see which words are over-
or under-used by which texts.
10.3.2 Example
The dataset contains information from sixty-six clients who took out loans
from a credit company. The eleven qualitative variables and the associated
categories are as follows:
Loan: Renovation, Car, Scooter, Motorbike, Furnishings, Sidecar. This
variable represents the item for which clients took out a loan.
Deposit: yes, no. This variable indicates whether or not clients paid a
deposit before taking out the loan. A deposit represents a guarantee for
the loan organisation.
Unpaid: 0, 1 or 2, 3 and more. This variable indicates the number of
unpaid loan repayments for each client.
Debt load: 1 (low), 2, 3, 4 (high). This variable indicates the client's
debt load. The debt load is calculated as the ratio between costs (sum of
expenses) and income. The debt load is divided into four classes.
Insurance: No insurance, DI (Disability Insurance), TPD (Total Perma-
nent Disability), Senior (for people older than 60). This variable indicates
the type of insurance the client has taken out.
Family: Common-law, Married, Widower, Single, Divorcee.
Children: 0, 1, 2, 3, 4 and more.
Accommodation: Home owner, First-time buyers, Tenant, Lodged by fam-
ily, Lodged by employer.
10.3.3 Steps
1. Read the data.
2. Choose the active individuals and variables.
By examining the results yielded by the summary function, we remark that the
Age variable is not considered qualitative. It therefore needs to be converted:
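A minimal sketch, assuming the variable is named Age in the data-frame Credit:
> Credit$Age <- factor(Credit$Age)
> summary(Credit$Age)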
In MCA it is important to check that there are no rare categories, that is,
those with very small frequencies, as MCA attributes a lot of importance to
these categories. To do this, we recommend representing each variable:
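A sketch of one way to do this (plot applied to a factor draws a barplot of its categories; the grid layout is an assumption for the eleven variables):
> par(mfrow=c(3,4))
> for (i in 1:ncol(Credit)) plot(Credit[,i], main=colnames(Credit)[i])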
If certain variables admit categories with small frequencies, there are a number of ways to deal with them:
> library(FactoMineR)
> res.mca <- MCA(Credit, quali.sup=6:11, level.ventil=0)
The MCA is constructed from the first five variables (the active variables)
whereas the variables 6 to 11 are supplementary. The argument level.ventil
is 0 by default, which means that no ventilation is conducted. If the argument
has a value of 0.05, this means that the categories with frequency equal to or
less than 5% of the total number of individuals are ventilated automatically
by the MCA function prior to constructing the MCA. By default, results are
given for the first five dimensions. To specify another number of dimensions,
use the argument ncp. The list res.mca contains all the results.
> barplot(res.mca$eig[,2],names=paste("Dim",1:nrow(res.mca$eig)))
Figure 10.9
Percentage of variance associated with each dimension of the MCA.
> round(res.mca$eig[1:5,],2)
        eigenvalue  percentage of variance  cumulative percentage of variance
dim 1         0.40                   15.33                              15.33
dim 2         0.32                   12.38                              27.71
dim 3         0.29                   11.25                              38.96
dim 4         0.27                   10.47                              49.43
dim 5         0.24                    9.14                              58.57
The first two dimensions express 28% of the total variance. In other words,
28% of the information in the data table is summarised by the first two di-
mensions, which is relatively high in MCA. We will interpret only the first two
dimensions even though it may be interesting to analyse dimensions 3 and 4.
4. Analysing the results:
The MCA function yields a graph simultaneously representing individuals and
categories (active and illustrative) on the first two dimensions. This repre-
sentation quickly becomes saturated and it is therefore necessary to conduct
separate representations for the individuals and the categories of the vari-
ables. For this we use the invisible argument of the plot.MCA function. To
represent the individuals only, we use
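a command of the following form (a sketch, consistent with the call shown later for dimensions 3 and 4):
> plot(res.mca, invisible=c("var","quali.sup"))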
[Cloud of the 66 individuals on Dim 1 (15.33%) and Dim 2 (12.38%).]
Figure 10.10
MCA of credit data: representation of the individuals.
This graph of individuals (Figure 10.10) illustrates the general shape of the
scatterplot to see if, for example, it identifies particular groups of individuals.
This is not the case here. To make the results easier to interpret, it can
be helpful to colour the individuals according to a variable, with one colour
used for each category of the variable. We thus use the habillage argument
specifying either the name or the number of the qualitative variable (graph
not provided):
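For example, to colour the individuals according to the Deposit variable (the choice of variable here is only an illustration):
> plot(res.mca, invisible=c("var","quali.sup"), habillage="Deposit")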
The graph with all the categories of the active and illustrative qualitative
variables (Figure 10.11) can be used to get an idea of the overall tenden-
cies emanating from the analysis. To construct this graph and represent the
categories alone, we use
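a command of the following form (a sketch):
> plot(res.mca, invisible="ind")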
[Cloud of the active and illustrative categories on Dim 1 (15.33%) and Dim 2 (12.38%).]
Figure 10.11
MCA of credit data: representation of active and illustrative categories.
The first dimension of Figure 10.11 opposes a young profile (on the right)
with an old profile (on the left). We thus find older people, home-owners
who have taken out a loan to finance renovation work, opposing younger people
who have taken out a loan, for example, to buy a scooter. Young people
tend to have a relatively high debt load.
Dimension 2 mainly opposes people in great financial difficulty (below) with
the others (above). We can therefore identify people who have difficulty paying
back their loan (three or more payments missed) and those who have taken
out TPD insurance. This dimension can thus be qualified as the dimension of
financial difficulty.
Using a number of examples, we evoke the general rules for interpreting the proximity between categories of two variables, be they the same or different. For example, the category Senior of the variable Insurance is close to the categories 60 (variable Age) and Retired (variable Profession).
[Plot "Variables representation": the active and illustrative variables plotted according to their links with Dim 1 and Dim 2 (12.38%).]
Figure 10.12
MCA of credit data: representation of active and illustrative variables.
The variable which contributes the most to the creation of dimension 1 is the
Loan variable, and that which contributes the most to the creation of dimen-
sion 2 is the Unpaid variable (Figure 10.12). This information summarises
the overall influence of all the categories of each of the variables to the con-
struction of the dimensions. In this way, the categories of the Loan variable
contribute greatly to the creation of dimension 1 and the categories of the
Unpaid variable to the creation of dimension 2. We can then examine the
results concerning the supplementary qualitative variables contained within
the object res.mca$quali.sup.
It is possible to construct the graphs of individuals and variables on dimensions
3 and 4:
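A sketch using the axes argument:
> plot(res.mca, invisible=c("var","quali.sup"), axes=3:4)
> plot(res.mca, invisible="ind", axes=3:4)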
To save a graph in pdf format, for example, we use the argument new.plot=FALSE:
> pdf("mypath/mygraph.pdf")
> plot(res.mca,invisible=c("var","quali.sup"),axes=3:4,
new.plot=FALSE)
> dev.off()
5. Automatically describing the dimensions:
> dimdesc(res.mca)
$Dim 2
$Dim 2$quali
R2 p.value
Unpaid 0.4679498 2.330816e-09
Insurance 0.4620452 1.979222e-08
Loan 0.3895658 3.732164e-06
Profession 0.2440563 1.661355e-03
Debt.load 0.1865282 4.857269e-03
Deposit 0.1033137 8.498296e-03
Family 0.1945394 9.447136e-03
Age 0.1781788 1.618904e-02
Accommodation 0.1734846 1.883183e-02
Title 0.1101503 2.532061e-02
$Dim 2$category
Estimate p.value
Unp_1_ou_2 0.5997078 6.148708e-07
Senior 0.5149935 2.212144e-05
Retired 0.4477685 1.257211e-03
60 0.4556995 1.538945e-03
Debt_3 0.3550533 1.640259e-03
Renovation 0.3127094 4.448861e-03
Home owner 0.3432251 6.371855e-03
Deposit 0.1826945 8.498296e-03
Scooter 0.4031618 1.373824e-02
DI 0.1652700 4.212931e-02
Widower 0.3640249 4.252309e-02
Lodged by the family -0.4156119 3.553903e-02
Child_0 -0.2990457 2.172157e-02
Single -0.3322199 1.086364e-02
Debt_1 -0.2911376 1.009127e-02
no_deposit -0.1826945 8.498296e-03
Miss -0.4824068 8.440543e-03
Car -0.2954665 8.029752e-03
Manual Labourer -0.4605202 1.323161e-03
Furnishings -0.5179085 1.049530e-05
TPD -0.6985157 3.360963e-09
Unp_3 and + -0.6634535 7.988517e-10
This function becomes very useful when there are a lot of categories. Here
we find that the second dimension is mainly linked to the categories TPD and Unp_3 and +.
6. Going back to raw data in cross-tabulation:
It is interesting to go back to the raw data to analyse more closely the relationship between two variables and particularly the proximity between their categories.
1. Implementing a clustering on the principal components of the MCA:
The principal components of the MCA are quantitative variables. In this way, MCA can be seen as a pre-processing step that can convert qualitative variables into quantitative ones. To implement unsupervised classification on the principal components of the MCA, see Worked Example 11.1.
2. Characterising the categories of one specific qualitative variable:
It can also be interesting to use the catdes function (see Worked Example 11.1)
to describe one specific qualitative variable according to quantitative and/or
qualitative variables.
3. Constructing confidence ellipses around categories:
Confidence ellipses can also be drawn around the categories of a categorical
variable (i.e. around the barycentre of the individuals carrying that category).
These ellipses allow one to visualise whether two categories are significantly
different or not. It is possible to construct confidence ellipses for all the
categories for a number of categorical variables using the plotellipses function:
> plotellipses(res.mca,keepvar=c("Loan","Unpaid","Insurance",
"Profession"))
11.1.2 Example
Let us again examine the dataset of the athletes participating in the decathlon
(see Worked Example 10.1) and construct an ascending hierarchical cluster-
ing of the athletes' performance profiles. We are only interested in the ten
performance variables.
11.1.3 Steps
1. Read the data.
2. Standardise the data if necessary.
> library(cluster)
> res.ahc <- agnes(scale(decath[,1:10]), method= "ward")
> plot(res.ahc, which.plots=2, main="Dendrogram",
xlab="Individual")
[Dendrogram of the 41 athletes. Agglomerative Coefficient = 0.76.]
Figure 11.1
Hierarchical tree.
The height of the fortieth and final bar (Figure 11.2) gives an idea of how
difficult it would be to bring together the first individuals and form a clustering
of forty clusters. The height of the thirty-ninth bar also gives an idea of the
difficulty in changing from this forty-cluster clustering to a thirty-nine-cluster
clustering. This change is obtained either by merging two individuals, or by
aggregating an individual to an existing cluster. In the same way, the height
of the first bar gives an idea of the difficulty in aggregating the two remaining
clusters to group the n = 41 individuals together.
As we are looking for a small number of homogeneous clusters, we are inter-
ested in the heights of the first bars. The most marked jump separates the
fifth and sixth bars: it is therefore difficult to go from six to five clusters.
One possibility for the cut would therefore be to take six groups. Between
the fourth and the third bar there is another less pronounced jump. In order
to have enough individuals in each cluster, we chose to use this jump which
defines four clusters.
We now need to find out which individuals are in each of these four clusters.
Figure 11.2
Barplot of the heights as a help to choose the number of clusters.
To do this, we use the cutree function and indicate the cut, that is, the number
of clusters into which the individuals are divided (for example, k=4).
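A sketch of this step (the agnes result is first converted with as.hclust so that cutree can be applied):
> clusters.hac <- cutree(as.hclust(res.ahc), k=4)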
> library(FactoMineR)
> clusters.hac <- as.factor(clusters.hac)
> decath.comp <- cbind.data.frame(decath,clusters.hac)
> colnames(decath.comp)[14] <- "Cluster"
> catdes(decath.comp, num.var = 14)
$quanti
$quanti$2
Mean in Overall sd in Overall
v.test category mean category sd
100m -2.265855 10.89789 10.99805 0.1701572 0.259796
110m.H -2.397231 14.41579 14.60585 0.3097931 0.466000
400m -2.579590 49.11632 49.61634 0.5562394 1.139297
1500m -2.975997 273.18684 279.02488 5.6838942 11.530012
$quanti$3
Mean in Overall sd in Overall
v.test category mean category sd
1500m 3.899044 290.76364 279.02488 12.627465 11.530012
400m 2.753420 50.43546 49.61634 1.272588 1.139297
Long. -2.038672 7.09364 7.26000 0.283622 0.312519
The catdes function sorts the quantitative variables from the most to the least
characteristic as positives (variables for which the individuals of the cluster
carry values which are significantly higher than the mean for all the individu-
als), then from the least to the most characteristic as negatives (variables for
which the individuals of the cluster carry values which are significantly lower
than the mean for all the individuals). The variables are sorted according to
the v-tests defined as
$$\text{v-test} = \frac{\bar{x}_q - \bar{x}}{\sqrt{\frac{s^2}{I_q}\left(\frac{I - I_q}{I - 1}\right)}}$$
where $\bar{x}_q$ is the mean of the variable in the cluster, $\bar{x}$ the overall mean, $s^2$ the overall variance, $I_q$ the number of individuals in the cluster and $I$ the total number of individuals. A positive (or negative, respectively) sign for the v-test indicates that the mean of the cluster is superior (or inferior, respectively) to the general mean.
For the qualitative variables, it is the categories of the variables which are sorted. These categories are sorted from most to least characteristic when the
category is over-represented in the cluster (compared to other clusters) and
from least to most characteristic when the category is under-represented in
the cluster. In the example, none of the qualitative variables characterise the
clusters 2 and 3.
3. Constructing the tree from the principal components of a PCA:
> library(FactoMineR)
> res.pca <- PCA(decath[,1:10], graph=FALSE, ncp=8)
> res.ahc <- agnes(res.pca$ind$coord, method="ward")
4. Consolidating a clustering:
It can be interesting to consolidate the clustering with a partitioning method
such as k-means (see Worked Example 11.2), specifying the number of clusters
to be constructed. In order to do this, we initialise the centres of the clusters
with the barycentres of the constructed clusters using ascending hierarchical
clustering. We thus calculate the means for each of the variables (centred and
standardised as the AHC is conducted on centred and standardised data) for
each cluster using the aggregate function:
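A minimal sketch of this consolidation, assuming clusters.hac contains the clusters obtained above (kmeans accepts the matrix of initial centres through its centers argument):
> don <- scale(decath[,1:10])
> centres <- aggregate(don, by=list(clusters.hac), FUN=mean)[,-1]
> res.kmeans <- kmeans(don, centers=as.matrix(centres))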
The kmeans function takes as its argument the dataset and the number of
clusters or the centres of the clusters.
The HCPC function of the FactoMineR package combines hierarchical clustering and partitioning to better describe and visualise the data. It performs a hierarchical clustering (with the Euclidean distance and Ward's agglomerative criterion) on the principal components
obtained with principal component methods (PCA for quantitative variables
and MCA for qualitative ones). The first step consists of performing the prin-
cipal component method, here a PCA, and then applying the HCPC function
on the object res.pca:
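A sketch of these two steps (ncp=Inf keeps all the principal components, as discussed below):
> library(FactoMineR)
> res.pca <- PCA(decath[,1:10], ncp=Inf, graph=FALSE)
> res.hcpc <- HCPC(res.pca, consol=FALSE)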
If the argument ncp=Inf, it means that all the principal components are
preserved, which amounts to constructing the clustering from the (scaled)
raw data. When the argument ncp is less than the number of variables, some
principal components are removed, which amounts to denoising the data. The
consol=FALSE argument is used to specify that no consolidation (see item 4)
is done after pruning the tree. The HCPC function yields an interactive graph
(Figure 11.3) with the hierarchical tree from which it is possible to choose a
cut height by clicking on the tree. An optimal cut height is also suggested
by a horizontal line (here at a height of around 0.80). On the same device,
another graph is drawn illustrating the increase in within-cluster variability when moving from a partition with k + 1 clusters to a partition with k clusters.
Remark that the graphs provided by the agnes function and the HCPC func-
tion are different. Indeed, compared to the graph provided by the agnes func-
tion, the individuals are sorted according to their similarity (i.e. according
to their coordinates on the first principal component). This avoids positioning
very different individuals side-by-side on the hierarchical tree if they belong
to very different clusters. Furthermore, the individuals and the clusters are
grouped together according to variance, whereas with the agnes function they
are grouped according to the square root of the variance. Thus, the hierarchical
tree for the HCPC function tends to be flattened: it is thus more difficult
to distinguish between the clusters for the first groupings of individuals but
easier for the last groups of clusters (i.e. the top of the hierarchical tree) and
it is thus easier to choose the height of the cut.
After having clicked on the graph, the hierarchical tree is represented in three
dimensions on the first two principal components (Figure 11.4). Individuals
are coloured according to their cluster.
The clusters (obtained after the hierarchical tree has been pruned) are de-
scribed by the variables, the dimensions of the principal component method
and by the individuals using the following commands:
> res.hcpc$desc.var
> res.hcpc$desc.axes
> res.hcpc$desc.ind
[Interactive graph: at the top, the "Hierarchical Clustering" tree with the invitation "Click to cut the tree"; below, a barplot of the inertia gains.]
Figure 11.3
Hierarchical tree provided by HCPC.
We find here the same description of the clusters as the one given by the catdes
function. Moreover, the description by the individuals gives, in the object
res.hcpc$desc.ind$para, the paragon of each cluster (i.e. the individual
closest to the centre of each cluster) and its distance to the centre of the
cluster.
[Three-dimensional representation: the hierarchical tree standing on the plane Dim 1 (32.72%) x Dim 2 (17.37%), with the individuals coloured according to their cluster (clusters 1 to 4).]
Figure 11.4
Three-dimensional tree on the first two dimensions of the PCA provided by
HCPC.
11.2.2 Example
Let us again examine the dataset of the athletes participating in the decathlon
(see Worked Example 10.1) and construct a partition of the athletes' performance profiles. We are therefore only interested in the ten performance
variables.
11.2.3 Steps
1. Read the data.
2. Standardise the variables if necessary.
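3. Construct the partition. A plausible call (the number of clusters, four here, follows the choice made in Worked Example 11.1; since k-means depends on a random initialisation, reproducing the exact sizes below may require set.seed):
> results.kmeans <- kmeans(scale(decath[,1:10]), centers=4)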
> results.kmeans
K-means clustering with 4 clusters of sizes 11, 13, 5, 12
Cluster means:
X100m Long.jump Shot.put High.jump X400m ...
1 1.058138862 -0.8935708 -0.18280501 -0.27095958 1.1885477 ...
2 -0.419569671 0.4278885 -0.04683446 -0.29297229 -0.2955972 ...
3 -1.232016906 1.4285640 1.37419773 1.47464827 -0.9747632 ...
4 -0.002086435 -0.2396743 -0.35427380 -0.04867052 -0.3631204 ...
Clustering vector:
Sebrle Clay Karpov Macey Warners ...
3 3 3 3 2 ...
In output, for each cluster and each variable, the function yields the means
of the individuals of the cluster, a vector featuring the cluster number of each
individual, and the within-cluster variability.
> library(FactoMineR)
> decath.comp <- cbind.data.frame(decath,
factor(results.kmeans$cluster))
> colnames(decath.comp)[14] <- "Cluster"
> catdes(decath.comp, num.var = 14)
$quanti
$quanti$2
Mean in Overall sd in Overall
v.test category mean category sd
Pole.vault 4.452686 5.046154 4.76244 0.176354 0.274589
$quanti$4
Mean in Overall sd in Overall
v.test category mean category sd
1500m -3.247660 269.82083 279.02488 5.941016 11.530012
Pole.vault -3.320412 4.538333 4.76244 0.158000 0.274589
The catdes function sorts the quantitative variables from the most to the least
characteristic as positives (variables for which the individuals of the cluster
carry values which are significantly higher than the mean for all the individu-
als), then from the least to the most characteristic as negatives (variables for
which the individuals of the cluster carry values which are significantly lower
than the mean for all the individuals). The variables are sorted according to
the v-tests defined as
$$\text{v-test} = \frac{\bar{x}_q - \bar{x}}{\sqrt{\frac{s^2}{I_q}\left(\frac{I - I_q}{I - 1}\right)}}$$
where $\bar{x}_q$ is the mean of the variable in the cluster, $\bar{x}$ the overall mean, $s^2$ the overall variance, $I_q$ the number of individuals in the cluster and $I$ the total number of individuals. A positive (or negative, respectively) sign for the v-test indicates that the mean of the cluster is superior (or inferior, respectively) to the general mean.
In the example, the results are given for the description of clusters 2 and 4.
The individuals in cluster 2 jump high in the pole vault. For the individuals
in this cluster, the average height is 5.05 m compared with 4.76 m for all the
individuals (including those from cluster 2). The individuals in cluster 4 are
characterised by the fact that they do not jump as high as the others in the pole vault (v-test less than -2) and that they run the 1500 m faster (in a
shorter time) than the others.
For the qualitative variables, it is the categories of the variables which are
sorted. These categories are sorted from most to least characteristic when
the category is over-represented in the cluster (compared to other clusters)
and least to most characteristic when the category is under-represented in
the cluster. In the example, none of the qualitative variables characterise the
clusters 2 and 4.
Function Description
print Writes the results (either all results or an extract)
plot Constructs a graph
summary Summarises the results of model fitting functions
There are many functions which begin with print, plot or summary, for
example, print.lm, print.PCA, print.rpart, etc. They can all be called using
the generic instruction print rather than by print.lm, print.PCA, print.rpart,
etc. However, to obtain help with a function which writes an rpart object
for example, we must write help("print.rpart").
Function Description
abs(x) Absolute value
sqrt(x) Square root
ceiling(x) Gives the smallest integer greater than or equal to x: ceiling(5.24)= 6, ceiling(5)= 5, ceiling(-5.24)= -5
floor(x) Gives the largest integer less than or equal to x: floor(5.24)= 5, floor(5)= 5, floor(-5.24)= -6
trunc(x) Truncates the value of x to 0 decimal digits: trunc(-5.24)= -5
round(x, Rounds to n decimal digits: round(5.236,
digits=n) digits=2)= 5.24
signif(x, Rounds to n total digits: signif(5.236,
digits=n) digits=2)= 5.2
cos(x), sin(x), Trigonometric functions
tan(x), acos(x),
cosh(x), acosh(x),
etc.
log(x) Natural logarithm
log10(x) Base 10 logarithm
exp(x) Exponential
Function Description
c Concatenates within a vector
cbind Concatenates tables one next to the other (jux-
taposition in columns), see Section 2.5
cbind.data.frame Juxtaposes data-frames into columns, see Sec-
tion 2.5
rbind Juxtaposes tables in rows (Caution! This func-
tion puts one row on top of another without tak-
ing column names into account), see Section 2.5
rbind.data.frame Juxtaposes data-frames in rows; the column
names of the data-frames must be the same (the
columns are sorted in the same order for all the
tables in order to link the variables prior to con-
catenation), see Section 2.5
merge Merges the tables according to a key, see Sec-
tion 2.5
sort Sorts vectors in ascending order (or descending
order if decreasing = TRUE)
order Sorts a table according to one or more columns
(or rows): x[order(x[,3], -x[,6]), ] ranks
the table x depending on (ascending) the third column of x then, in the case of equality in the
third column of x, depending on (descending) the
sixth column of x
by(data, Applies the FUN function to each level of the vec-
INDICES, FUN) tor INDICES in the data table
dimnames Yields the names of the dimensions of an object
(list, matrix, data-frame, etc.)
rownames Yields the row names of a matrix
row.names Yields the row names of a data-frame
colnames Yields the column names of a matrix
col.names Yields the column names of a data-frame
names Yields the names of an object (list, matrix, data-
frame, etc.)
dim Yields the dimensions of an object
nrow or NROW Yields the number of rows in a table (in capitals,
yields a response even if the object is a vector)
ncol or NCOL Yields the number of columns in a table (in capi-
tals, yields a response even if the object is a vec-
tor)
factor Defines a vector as a factor (if ordered=TRUE the
levels of the factors are taken to be ordinal)
levels Yields the levels of a factor
nlevels Yields the number of levels of a factor
as.data.frame(x) Converts x into a data-frame
as.matrix(x) Converts x into a matrix
as.list(x) Converts x into a list
as.vector(x) Converts x into a vector
is.data.frame(x) Tests whether x is a data-frame
is.matrix(x) Tests whether x is a matrix
is.vector(x) Tests whether x is a vector
is.list(x) Tests whether x is a list
class(x) Yields the class of object x (matrix, data-frame,
list, etc.)
mode(x) Yields the mode of the object x (numeric, logic,
etc.)
as.character(x) Converts x into a character
as.numeric(x) Converts x into a numeric
as.integer(x) Converts x into an integer
as.logical(x) Converts x into a Boolean
is.character(x) Tests whether x is a chain of characters
is.numeric(x) Tests whether x is a numeric
is.integer(x) Tests whether x is an integer
is.logical(x) Tests whether x is a Boolean
which Yields the positions of the true values of
a vector or a logic table: the param-
eter arr.ind=TRUE yields the numbers of
the rows and columns in the table (Sec-
tion 2.4.1, p. 41): which(c(1,4,3,2,5,3) ==
3) yields 3 6; which(matrix(1:12,nrow=4)
==3,arr.ind=TRUE) yields (row 3, column 1)
which.min Yields the index of the minimum of a vector
which.max Yields the index of the maximum of a vector
is.na Tests whether the piece of data is missing
is.null Tests whether the piece of data is null
length Yields the length of a list or a vector
any Tests whether at least one value of a logic vector
is true: any(is.na(x)) yields TRUE if at least one
piece of data is missing in x
split(x,fac) Divides the table x according to the levels of fac
Function Description
pnorm(q) Yields the probability P(X ≤ q) for X ~ N(0,1): pnorm(1.96) = 0.975. To calculate P(X > q) we use the argument lower.tail = FALSE
qnorm(p) Yields the quantile of order p of the N(0,1) distribution: qnorm(0.975) = 1.96
pbinom(q, size, prob) Yields the probability P(X ≤ q) for the binomial distribution B(size, prob): pbinom(5, 10, .5) = 0.623
ppois(q, lambda) Yields the probability P(X ≤ q) for X following a Poisson distribution with parameter lambda
punif(q, min, max) Yields the probability P(X ≤ q) for X following a uniform distribution on [min, max]
pchisq(q, df) Yields the probability P(X ≤ q) when X follows a chi-square distribution with df degrees of freedom
pt(q, df) Yields the probability P(X ≤ q) when X follows a Student's t distribution with df degrees of freedom
pf(q, df1, df2) Yields the probability P(X ≤ q) when X follows a Fisher distribution with df1 and df2 degrees of freedom
sample(x, size, This function is used to select a sample of size el-
replace = FALSE) ements of the vector x without replacement (with
replacement if replace=TRUE)
set.seed(n) Is used to choose a seed for the random number
generator; n must be an integer
Function Description
mean(x, na.rm=TRUE) Mean of x calculated from the non-missing data
sd(x) Standard deviation of x
var(x) Variance of x if x is a vector, or variance-
covariance matrix if x is a matrix
cor(x) Matrix of the correlations of x
median(x) Median of x
quantile(x, Quantiles of x for the given probabilities probs
probs)
range(x) Range of x
sum(x) Sum of the elements of x
min(x) Minimum of x
max(x) Maximum of x
sign(x) Yields the sign of x (positive or negative)
scale(x, Centres (center=TRUE) and standardises
center=TRUE, (scale=TRUE) x
scale=TRUE)
colMeans(x) Calculates the mean for each column in table x
rowMeans(x) Calculates the mean for each row in table x
apply(x,MARGIN, Applies the function FUN to the rows or columns
FUN) in the table x: apply(x, 2, mean) calculates the
mean of each column of x; apply(x, 1, sum) cal-
culates the sum for each row of x
aggregate(x,by, Applies the function FUN to x according to the
FUN) list of factors offered in by: aggregate(x, by
=list(vec),mean) calculates the means of x for
each level of vec
Function Description
t.test Constructs a confidence interval for a mean, tests
the equality of a mean to a given value or con-
structs a test to compare the means of two sub-
sets; t.test(x) constructs a confidence interval
for the mean of x and tests the equality of the
mean of x to the value mu (by default mu=0);
t.test(x~fac) constructs the test of equality of
means in two given sub-populations defined by
the factor fac (which must have two categories).
By default, the test is constructed with unequal
variances var.equal = FALSE; the test is by de-
fault bilateral. We can construct a unilateral test
using alternative = "less" or alternative =
"greater"; see Worked Examples 6.1 and 6.3
var.test Constructs a test of comparison of variances; see
Worked Example 6.3
chisq.test Constructs a chi-square test; see Worked Exam-
ple 6.2
prop.test Constructs a test of equality of proportions; see
Worked Examples 6.4 and 6.5
lm(formula) Constructs a linear model, that is, a multiple re-
gression, an analysis of variance or covariance de-
pending on the nature of the explanatory vari-
ables; see Worked Examples 7.1, 7.2, 8.1, 8.2,
and 8.3
anova Yields the analysis of variance table
aov(formula) Constructs the analysis of variance model defined
by formula; if formula =y~x1 + x2 + x1:x2,
constructs a model with the main effects x1 and
x2 and with the interaction of x1 with x2; see
Worked Examples 8.1 and 8.2
glm Constructs a generalised linear model; see
Worked Example 9.2
dist(x) Constructs a matrix of distances between the
rows of the matrix x
PCA Function of the FactoMineR package which con-
structs a principal component analysis with the
possibility of adding supplementary individuals
and quantitative and/or qualitative variables; see
Worked Example 10.1
CA Function of the FactoMineR package which con-
structs a correspondence analysis; see Worked
Example 10.2
MCA Function of the FactoMineR package which con-
structs a multiple correspondence analysis with
the possibility of adding supplementary individu-
als and quantitative and/or qualitative variables;
see Worked Example 10.3
dimdesc Function of the FactoMineR package which de-
scribes the dimensions of principal component
methods
catdes Function of the FactoMineR package which de-
scribes a qualitative variable according to quan-
titative and/or qualitative variables; see Worked
Examples 10.3 and 11.1
agnes Function of the cluster package which constructs an ascending hierarchical clustering; see Worked Example 11.1
kmeans(x, Constructs a K-means classification from the
centers) data table x; centers corresponds to the number
of classes or the centres of classes used to initiate
the algorithm; see Worked Example 11.2
HCPC Constructs a classification from the results of a principal component method; see Section 11.1.6
lda Function of the MASS package used to conduct
a linear discriminant analysis, see Worked Exam-
ple 9.1
rpart Function of the rpart package which constructs
a regression (or segmentation) tree, see Worked
Example 9.3
Function Description
barplot Yields a bar chart
hist Yields a histogram
boxplot Yields a boxplot; boxplot(y~fac) yields a graph
with a boxplot for each category of the factor fac
pie Yields a pie chart
points Plots points on a preexisting graph
lines Plots lines on a preexisting graph
curve Draws the curve for a function
abline(a,b) Draws the line with slope b and intercept a on a
preexisting graph
legend Adds a legend to a preexisting graph; leg-
end("topleft", legend = ... ) draws the
legend at the top left
scatterplot(y~x) Constructs a scatterplot y according to x; by
default, a regression line (reg.line=TRUE) and
a non-parametric adjustment curve (by default,
smooth=TRUE) are drawn. Boxplots are also pro-
vided for x and y (by default, boxplots="xy").
scatterplot(y~x|z) constructs a scatterplot for
each category of z
pairs Constructs scatterplots for each pair of variables
of a table
persp Draws graphs in perspective, or response surfaces
image Constructs three-dimensional response surfaces
locator Reads the position of the cursor on the graph
identify Searches for the individuals with the coordinates
closest to the position of the cursor
colors() Provides the list of the 657 colours defined by
default in R
graphics.off() Closes all the graphs which are open
X11() Creates a new empty graph window
pdf, postscript, jpeg, png, bmp Used to save a graph in pdf, postscript, jpeg, png or bmp format; all the functions are used in the same way: pdf("mygraph.pdf"); plotting commands; dev.off(); see Section 3.1.5
Function Description
read.table Reads a file in table format and creates a data-
frame
read.csv Reads a file with a csv extension containing a
table and creates a data-frame
scan Reads the data originating in a vector or list from
a console or file
write Writes the data in a file
write.table Writes a table in a file
save Saves R objects in a file .Rdata
load Loads objects saved using the save function
history Retrieves the most recent command lines
savehistory Saves the command lines in a .Rhistory file
loadhistory Reads the command lines saved in a .Rhistory
file
Function Description
substr(x, Extracts or replaces a sub-chain of characters:
start=n1, substr("abcdef", 2, 4) yields bcd
stop=n2)
grep(pattern, x, Gives the indices for the elements of list x for
fixed=FALSE) which the sub-chain pattern is present. If
fixed=FALSE then pattern is a regular expression, if fixed=TRUE then pattern is a plain chain of characters
sub(pattern, Finds, within the chain x, the sub-chain pattern
replacement, x, and replaces it with replacement: sub("man","R
fixed=FALSE) useR","Hello man") returns Hello R useR.
Replacement only occurs once. See also gsub for
multiple replacement.
strsplit(x, split) Cuts a chain of characters into multiple sub-chains according to the character split: strsplit("abedtedr","e") returns "ab" "dt" "dr"
paste(..., Concatenates multiple chains of characters by
sep="") separating them with sep
toupper(x) Writes x in uppercase characters
tolower(X) Writes X in lowercase letters
apropos(what) Yields objects containing the chain of characters
what in their name
Function Description
seq(from, to, by) Generates a sequence: seq(1,9,2) gives 1 3 5 7 9
rep(x, ntimes) Repeats sequence x a given number of times:
rep(4:6,2) gives 4 5 6 4 5 6. It is also possible to
repeat each term of the object: rep(4:6,each=2)
gives 4 4 5 5 6 6
cut(x, n) Divides a continuous variable into a qualitative
variable with n levels
solve Inverts a matrix or solves a linear system; see Section 1.4.5
eigen Gives the eigenvalues and eigenvectors of a ma-
trix; see Section 1.4.5
svd Computes the singular value decomposition of a rectangular matrix
Y ~ x1+x2: model with the variable Y explained by the variables x1, x2;
equivalent to Y ~ 1+x1+x2
Y ~ -1+x1+x2: model with the variable Y explained by the variables x1,
x2 without the constant (-1 eliminates the constant)
Y ~ x1+x2 + x1:x2: model with the variable Y explained by the variables
x1, x2 and the interaction between x1 and x2
Y ~ x1*x2: equivalent to the previous model
Y ~ x1*x2*x3: is equivalent to the model with all the main effects and
the interactions between x1, x2 and x3, thus Y ~ x1+x2+x3 + x1:x2 +
x1:x3 + x2:x3 + x1:x2:x3
Y ~ (x1+x2):x3: is equivalent to the model Y ~ x1:x3+x2:x3
Y ~ x1*x2*x3 - x1:x2:x3: is equivalent to the model Y ~ x1+x2+x3 +
x1:x2 + x1:x3 + x2:x3
Y ~ I(x1^2): model with the variable Y explained by x1^2; I(.) protects the expression x1^2, else x1^2 is interpreted as x1*x1 = x1+x1+x1:x1 = x1
Y ~ I(x1+x2): model where the variable Y is explained by the constant
and the variable resulting from the sum (individual by individual) of the
variables x1 and x2; I(.) protects the expression x1+x2, otherwise x1+x2
is interpreted as two explanatory variables
Figure A.5
Main window of Rcmdr.
To change the active dataset, simply click on the menu Data. If we modify
the active dataset (for example by converting a variable), the dataset must be
refreshed using
Data → Active data set → Refresh active data set
The output window features the code lines in red and the results in blue.
The graphs are drawn in R. At the end of an Rcmdr session, the script window,
and therefore all the instructions, can be saved along with the output file, that is, all the results. R and Rcmdr can be closed simultaneously using File → Exit → From Commander and R.
Remarks
Writing in the script window of Rcmdr or directly in the R console amounts to the same thing.
Rcmdr windows may not open correctly, or may be hidden behind other
open windows. In this case, if using Windows, right-click on the R icon or
on the shortcut used to launch R, then click on Properties, and change
the target by adding --sdi after the file access path Rgui.exe, which,
for example, gives
The package only needs to be installed once (see Section 1.6, p. 24). The
package is then loaded using
> library(FactoMineR)
A website entirely dedicated to the FactoMineR package can be found at http://factominer.free.fr. The package can also be installed from this site:
> source("http://factominer.free.fr/install-facto.r")
3. The position of the letter q is first calculated. Then the vector of letters
and the vector of indices are pasted:
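A sketch of such a solution (assuming the exercise asks to paste each letter from a to q with its rank):
> pos <- which(letters=="q")
> paste(letters[1:pos], 1:pos, sep="")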
> set.seed(007)
> vec1 <- runif(100,0,7)
> mean(vec1)
[1] 3.564676
> var(vec1)
[1] 3.94103
> mean(vec2)
[1] NA
> mean(vec2,na.rm=T)
[1] 3.539239
> var(vec2)
[1] NA
> var(vec2,na.rm=T)
[1] 3.879225
4. We delete the missing values and find again the mean and variance previ-
ously calculated with the argument na.rm=TRUE:
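A sketch (vec2 is vec1 with some values replaced by NA earlier in the exercise):
> vec3 <- vec2[!is.na(vec2)]
> mean(vec3)
> var(vec3)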
5. If the missing values are replaced by the mean of the variable, then the
mean is the same as before but the variance is under-estimated:
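A sketch:
> vec4 <- vec2
> vec4[is.na(vec4)] <- mean(vec2, na.rm=TRUE)
> mean(vec4)   # unchanged
> var(vec4)    # under-estimated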
6. The missing values are replaced by values drawn from a normal distribution
with mean and standard deviation of the variable:
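A sketch:
> vec5 <- vec2
> miss <- is.na(vec5)
> vec5[miss] <- rnorm(sum(miss), mean(vec2,na.rm=TRUE), sd(vec2,na.rm=TRUE))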
7. The missing values are replaced by values drawn from a Uniform distribu-
tion from the minimum to the maximum of the observed values:
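A sketch, with miss as above:
> vec6 <- vec2
> vec6[miss] <- runif(sum(miss), min(vec2,na.rm=TRUE), max(vec2,na.rm=TRUE))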
8. The missing values are replaced by values randomly drawn from the ob-
served values:
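A sketch, with miss as above:
> vec7 <- vec2
> vec7[miss] <- sample(vec2[!miss], sum(miss), replace=TRUE)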
5. To calculate the determinant and invert the matrix, simply use the func-
tions det and solve:
> det(mat)
[1] 60
> solve(mat)
row-1 row-2 row-3 row-4
column 1 0.5 -0.5 0.1666667 -5.551115e-17
column 2 -0.6 0.4 -0.4666667 5.000000e-01
column 3 0.7 -0.3 0.4333333 -5.000000e-01
column 4 -1.2 0.8 -0.2666667 5.000000e-01
Exercise 1.4 (Selecting and Sorting in a Data-Frame)
1. The iris data is loaded and a new dataset is created by selecting only the
rows carrying the value "versicolor" for the Species variable:
> data(iris)
> iris2 <- iris[iris[,"Species"]=="versicolor", ]
2. The dataset is sorted according to the first variable using the order func-
tion:
> iris2[order(iris2[,1],decreasing=TRUE),]
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
51 7.0 3.2 4.7 1.4 versicolor
53 6.9 3.1 4.9 1.5 versicolor
77 6.8 2.8 4.8 1.4 versicolor
...
61 5.0 2.0 3.5 1.0 versicolor
94 5.0 2.3 3.3 1.0 versicolor
58 4.9 2.4 3.3 1.0 versicolor
Exercise 1.5 (Using the apply Function)
1. To calculate the benchmark statistics, simply use the summary function.
2. To calculate the quartiles of each variable, use the apply function:
> apply(X=ethanol,MARGIN=2,FUN=quantile)
NOx C E
0% 0.3700 7.500 0.53500
25% 0.9530 8.625 0.76175
50% 1.7545 12.000 0.93200
75% 3.0030 15.000 1.10975
100% 4.0280 18.000 1.23200
3. The instruction for the previous question by default yields the quartiles.
Indeed, since we have not specified the argument probs for the quantile function, the argument used by default is probs=seq(0, 1, 0.25) (see the help section for the quantile function). To obtain deciles, we have to specify probs=seq(0,1,by=0.1) as the argument. The help section for the apply
function indicates the optional arguments via ...: optional arguments
to FUN. It is therefore possible to pass probs=seq(0,1,by=0.1) as an
argument to the FUN=quantile function:
> apply(ethanol,2,quantile,probs=seq(0,1,by=0.1))
NOx C E
0% 0.3700 7.5 0.5350
10% 0.6000 7.5 0.6496
20% 0.8030 7.5 0.7206
30% 1.0138 9.0 0.7977
40% 1.4146 9.0 0.8636
50% 1.7545 12.0 0.9320
60% 2.0994 12.6 1.0104
70% 2.7232 15.0 1.0709
80% 3.3326 15.0 1.1404
90% 3.6329 18.0 1.1920
100% 4.0280 18.0 1.2320
2. Because there is only one row which does not contain 0, we have to use drop=FALSE so that the output is a matrix and not a vector:
T.categ age
hs :2465 Min. : 0.00
blood : 94 1st Qu.:30.00
hsid : 72 Median :37.00
other : 70 Mean :37.41
id : 48 3rd Qu.:43.00
haem : 46 Max. :82.00
(Other): 48
2. The function is.numeric returns a boolean: TRUE when the object on which
it is applied is numeric. We have to apply this function to each column of the
data-frame Aids2 and take the negation (operator !). As the data-frame is a
list, applying a function to each column is (usually) equivalent to applying a
function to each component of the list; this is the purpose of the lapply function:
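A sketch of this step (the Aids2 data come from the MASS package):
> library(MASS)
> Aids2.qual <- Aids2[ , !unlist(lapply(Aids2, is.numeric))]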
> lapply(Aids2.qual,levels)
3. The summary indicates that the levels are still the same, M and F, but no individual takes the category F.
> summary(res)
state sex diag death status
NSW :1726 F: 0 Min. : 8302 Min. : 8469 A: 947
Other: 0 M:2518 1st Qu.:10155 1st Qu.:10671 D:1571
QLD : 217 Median :10662 Median :11220
VIC : 575 Mean :10583 Mean :10987
3rd Qu.:11104 3rd Qu.:11504
Max. :11503 Max. :11504
T.categ age
hs :2260 Min. : 0.00
hsid : 68 1st Qu.:30.00
other : 55 Median :37.00
blood : 54 Mean :37.36
haem : 40 3rd Qu.:43.00
id : 21 Max. :82.00
(Other): 20
> attributes(res[,"sex"])
$levels
[1] "F" "M"
$class
[1] "factor"
5. We transform the sex variable into a character object and print the at-
tributes of the resulting object:
T.categ age
hs :2260 Min. : 0.00
hsid : 68 1st Qu.:30.00
other : 55 Median :37.00
blood : 54 Mean :37.36
haem : 40 3rd Qu.:43.00
id : 21 Max. :82.00
(Other): 20
The first three variables are factors and must therefore be converted to nu-
merics (Section 2.3.1, p. 35):
> summary(test3)
CLONE B IN HT19
1-105 : 18 Min. :1.000 Min. :1 Min. : 0.89
1-41 : 18 1st Qu.:1.000 1st Qu.:3 1st Qu.: 7.35
18-428 : 18 Median :1.000 Median :5 Median : 9.32
18-429 : 18 Mean :1.234 Mean :5 Mean : 10.47
18-430 : 18 3rd Qu.:1.000 3rd Qu.:7 3rd Qu.: 10.50
18-438 : 18 Max. :2.000 Max. :9 Max. :1142.00
(Other):891                            NA's  :15
C19 HT29
Min. : 3.00 Min. : 1.96
1st Qu.:17.00 1st Qu.:10.75
Median :23.00 Median :12.88
Mean :22.08 Mean :12.20
3rd Qu.:28.00 3rd Qu.:14.00
Max. :37.00 Max. :17.50
NA's  :20        NA's  :15
2. Use the format Date for the last variable. The format POSIXct could be
interesting if the time was specified.
> ski2<-read.table("test4.csv",sep="|",skip=2,header=TRUE,
row.names=1,colClasses=c("character","numeric",
"factor","Date"))
> summary(ski2)
age gender first.time.skiing
Min. :24.00 0:6 Min. :1980-05-01
1st Qu.:28.00 1:2 1st Qu.:1989-07-20
Median :32.00 Median :2004-03-06
Mean :30.38 Mean :1998-06-01
3rd Qu.:33.00 3rd Qu.:2006-12-02
Max. :33.00 Max. :2009-03-06
Exercise 2.4 (Reading Data from File and Merging)
1. Read the datasets:
2. Merge by key (common variable region for state1 and state3, then com-
mon variable state for state2 and the previous table):
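A sketch (the data-frame and key names follow the wording of the question):
> state13 <- merge(state1, state3, by="region")
> state123 <- merge(state13, state2, by="state")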
2. Retain two columns for each table and create the data-frame:
3. As long as there is a category with a frequency of less than 5%, it must be ventilated (and deleted from the list of levels): this is the purpose of the while loop.
> while(any((tabl/sum(tabl))<p)) {
+   ## take the first category whose frequency is too low
+   j <- which(((tabl/sum(tabl))<p))[1]
+   K <- length(lev)   # number of categories, updated at each iteration
+   ## merge with the next category (or with the previous one for the last)
+   if (j<K) {
+     if ((j>1)&(j<K-1)) {
+       levels(Xfac) <- c(lev[1:(j-1)],paste(lev[j],lev[j+1],sep="."),
+         paste(lev[j],lev[j+1],sep="."),lev[(j+2):K]) }
+     if (j==1) {
+       levels(Xfac) <- c(paste(lev[j],lev[j+1],sep="."),
+         paste(lev[j],lev[j+1],sep="."),lev[(j+2):K]) }
+     if (j==(K-1)) {
+       levels(Xfac) <- c(lev[1:(j-1)],paste(lev[j],lev[j+1],sep="."),
+         paste(lev[j],lev[j+1],sep=".")) }
+   } else {
+     levels(Xfac) <- c(lev[1:(j-2)],paste(lev[j-1],lev[j],sep="."),
+       paste(lev[j-1],lev[j],sep="."))
+   }
+   tabl <- table(Xfac)   ## table updated and ...
+   lev <- levels(Xfac)   ## ... categories updated
+ }
> tabl
Xfac
0-10.11-20.21-30 31-40 41-50.51-60.61-70
9 20 5
71-80 + 80
31 20
Exercise 2.8 (Cross-Tabulation Data Table)
1. A contingency table can simply be constructed by creating the following
matrix:
2. The tabl matrix is not of table type: some attributes are missing. Also, the reverse operation (using as.data.frame) is not directly possible. One solution is to add the attributes to this matrix. Instead, more simply, we construct this table by hand:
5. In the first two rows, we loop over all the rows of the tabframe table, keeping only those with a non-null sample size. In the new table, we then repeat the allocation of categories as many times as required by the sample size (hence we use all but the last column of tabmat, which contains these sample sizes). The row index for this table is managed by the iter counter.
2. To add the title, we use the title function (or we could also directly use
the main argument within the plot function):
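A sketch, using the wording of the figure caption as the title:
> title("Plot of the sine function")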
Figure 3.6
Plot of the sine function.
> plot(dnorm,-4,4)
> abline(h=0)
> segments(0,0,0,dnorm(0),lty=2)
2. To draw new curves we use the curve function with the argument add=TRUE.
To differentiate between the curves, use a different colour for each distribution.
> curve(dt(x,5),add=TRUE,col=2)
> curve(dt(x,30),add=TRUE,col=3)
3. Simply use the legend function and position it at the top left:
> legend("topleft",legend=c("normal","Student(5)","Student(30)"),
col=1:3,lty=1)
> set.seed(123)
> X <- rbinom(1000, size=1, prob=0.6)
> set.seed(123)
> p <- 0.5
> N <- 10
3. Before drawing the curve for the standard normal distribution, create a
grid for x varying between -4 and 4. Then, divide the graph window into one
row and three columns, plot a histogram and overlay the curve for the normal
distribution.
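A sketch under assumptions (it reuses the p defined above, takes three hypothetical sample sizes, and plots the standardised mean of binomial draws):
> grid <- seq(-4, 4, length=1000)
> par(mfrow=c(1,3))
> for (N in c(10, 100, 1000)) {
+   xbar <- rbinom(1000, size=N, prob=p)/N       # means of N Bernoulli draws
+   z <- (xbar - p)/sqrt(p*(1-p)/N)              # standardised means
+   hist(z, freq=FALSE, main=paste("N =", N))
+   lines(grid, dnorm(grid))
+ }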
> spot<-read.table("sunspots.csv",sep=",",header=T)
> summary(spot)
nb_spot year month day
Min. : 0.00 Min. :1749 Min. : 1.00 Min. :1
1st Qu.: 15.70 1st Qu.:1807 1st Qu.: 3.75 1st Qu.:1
Median : 42.00 Median :1866 Median : 6.50 Median :1
Mean : 51.27 Mean :1866 Mean : 6.50 Mean :1
3rd Qu.: 74.92 3rd Qu.:1925 3rd Qu.: 9.25 3rd Qu.:1
Max. :253.80 Max. :1983 Max. :12.00 Max. :1
3. Check that the colours mentioned do indeed feature in the colour palette.
> color<-c("yellow","magenta","orange","cyan","grey","red",
"green","blue")
> all(color%in%colors())
[1] TRUE
4. To draw the chronological series of Figure 3.34, we first construct the graph
without a curve and without points (argument type="n"). This makes it
possible to define the variation ranges of x and y along with the axes labels.
We thus draw parts of the curve one by one, changing the colour for each
category of thirty:
> palette(color)
> coordx <- seq(along=spot[,1])
> plot(coordx,spot[,1],xlab="Time",ylab="Number of sunspots",
type="n")
> for (i in levels(thirty)) {
+   select <- thirty==i
+   lines(coordx[select],spot[select,1],col=i)
+ }
> abline(h=0)
> par(mar=c(2.3,2,0.5,0.3))
> layout(matrix(c(1,1,2,3), 2, 2, byrow = TRUE))
> plot(1:10,10:1,pch=0)
> plot(rep(1,4),type="l")
> plot(c(2,3,-1,0),type="b")
2. The argument widths of layout is used to define the width of each column:
> par(mar=c(2.3,2,0.5,0.3))
> layout(matrix(c(1,1,2,3), 2, 2, byrow = TRUE),widths=c(4,1))
> plot(1:10,10:1,pch=0)
> plot(rep(1,4),type="l")
> plot(c(2,3,-1,0),type="b")
Exercise 3.9 (Plotting Points and Rows)
1. Read data from a file and check reading:
2. Draw the graph using the xyplot function of the lattice package:
> library(lattice)
> xyplot(maxO3~T12,data=ozone)
3. Draw the graph and connect the points using the argument type=c("p","l")
(the graph looks disorganised as the data is not sorted):
> xyplot(maxO3~T12,data=ozone,type=c("p","l"))
4. Draw the graph using the argument type="a" which sorts the abscissa:
> xyplot(maxO3~T12,data=ozone,type="a")
The curve does not go through all the points, as the function smoothes the
curve before drawing it.
Exercise 3.10 (Panel)
1. Read the data and construct a data-frame (denoted data) with the first
three rows of maxO3 and T12:
3. Construct the ordinal factor, specifying the order of the levels using levels:
4. Add this ordinal factor to the prov data: here, a data-frame must be
created as some variables are quantitative and others qualitative.
5. Reproduce the graph in Figure 3.10 using xyplot and the appropriate panel
function:
6. We leave the reader to reproduce this graph using the factor function rather
than the ordered function in question 3.
> my.factorial(4)
[1] 24
> my.factorial(4.2)
[1] 24
Warning message:
In my.factorial(4.2) : 4.2 rounded to 4
> my.factorial(-3)
Error in my.factorial(-3) : the integer must be positive
> my.factorial <- function(n){
+   if (n<0) stop("the integer must be positive")
+   if (floor(n)!=n){
+     warning(paste(n,"rounded to",floor(n)))
+     n <- floor(n)
+   }
+   result <- 1
+   for (i in 1:n) result <- result*i
+   return(result)
+ }
2. We apply the function from the previous question to each column of the
table which is a factor:
2. We apply the function from the previous question to each column of the
table which is an ordinal factor:
R for Statistics
Although there are currently a wide variety of software packages suitable for
the modern statistician, R has the triple advantage of being comprehensive,
widespread, and free. Published in 2008, the second edition of Statistiques
avec R enjoyed great success as an R guidebook in the French-speaking world.
Translated and updated, R for Statistics includes a number of expanded and
additional worked examples.
Organized into two sections, the book focuses first on the R software, then on the
implementation of traditional statistical methods with R.
The second section of the book presents R methods for a wide range of traditional statistical data processing techniques.
After a short presentation of the method, the book explicitly details the R
command lines and gives commented results. Accessible to novices and
experts alike, R for Statistics is a clear and enjoyable resource for any
scientist.
Datasets and all the results described in this book are available on the book's webpage at http://www.agrocampus-ouest.fr/math/RforStat