
Investigating ICT-literacy assessment tools: Developing and validating
a new assessment instrument for trainee teachers in Malaysia

A thesis submitted in fulfilment of the requirements for the degree of
Doctor of Philosophy

Jessnor Elmy Mat Jizat
MSc. IMS (Monash)
BSc. (Hons) Computer (UTM)

School of Business Information Technology and Logistics
College of Business
RMIT University

August 2012
Declaration

I certify that except where due acknowledgement has been made, the work is that of the
author alone; the work has not been submitted previously, in whole or in part, to qualify
for any other academic award; the content of the thesis is the result of work which has
been carried out since the official commencement date of the approved research
programme; any editorial work, paid or unpaid, carried out by a third party is
acknowledged; and, ethics procedures and guidelines have been followed.

……………………….
Jessnor Elmy Mat Jizat
31 August 2012

Publications

Mat-jizat, JE 2009, 'E-learning initiatives in Malaysia', in E McKay (ed.), The E-learning Toolkit, Ark Group, North Sydney.

Mat-jizat, JE & McKay, E 2009, 'Exploring trainee teachers' Information and Communications Technology (ICT) literacy levels: implementation of a smart school model', paper presented to IADIS Multiconference on Computer Science and Information Systems, Algarve, Portugal, 21-23 June.

Mat-jizat, JE & McKay, E 2010, 'Developing an Instrument of Assessment for ICT-literacy for
Trainee Teachers: The Preliminary Findings', paper presented to IADIS
Multiconference on Computer Science and Information Systems, Freiburg, Germany,
29-31 July

Mat-jizat, JE & McKay, E 2011a, 'Developing an Instrument of Assessment for ICT-literacy for
Trainee Teachers: Preliminary Findings', International Journal of Computer
Information Systems and Industrial Management Applications, vol. 3, pp. 552-9.

Mat-jizat, JE & McKay, E 2011b, 'Validating an ICT-literacy Assessment Tool for Trainee
Teachers: Preliminary Findings', paper presented to Global Learn Asia Pacific 2011,
Melbourne, Australia, 28 March-1 April 2011, <http://www.editlib.org/p/37245>.

Acknowledgement

I would like to express my utmost gratitude and appreciation to the following people for their
support and encouragement during my preparation of this thesis:

My primary supervisor, Associate Professor Elspeth McKay. For her ability to motivate and for
her belief in my abilities. For providing invaluable feedback. For being honest and constructive,
and helping me complete the study.

My secondary supervisor, Dr Martin Dick. For his support throughout the thesis.

My sponsors, the Ministry of Higher Education Malaysia and the Sultan Idris Education
University. For the financial support and the opportunity given to me to pursue my doctorate
degree.

My father, Dr Mat Jizat and mother, Norimah. My siblings, Jessnor Hafiz, Jessnor Eina, Jessnor
Arif and Jessnor Ezrin. My sister-in-law, Azlina and my nephew, Shazwan Syahmi for their
love, prayers and encouragement during my academic journey. For the long distance phone calls
of reassurance and encouragement.

My friends, both in Australia and Malaysia. For being there when I needed someone to talk to
and keeping me sane.

The staff in the School of Business IT and Logistics, RMIT University and the Faculty of
Business and Economics, UPSI. For ongoing general support and for giving useful feedback.

To all those who were either directly or indirectly involved in this study in any way; it was all
appreciated.

My warmest thanks.

Table of Contents

Declaration ................................................................................................................................. i
Publications ............................................................................................................................... ii
Acknowledgement .................................................................................................................... iii
Table of Contents ..................................................................................................................... iv
List of tables ............................................................................................................................ vii
List of figures ......................................................................................................................... viii
List of acronyms ....................................................................................................................... ix
Abstract .................................................................................................................................... xi

Chapter 1: Introduction
1.1. Overview ....................................................................................................................... 1
1.2. Introduction ................................................................................................................... 2
1.3. Research Motivation ...................................................................................................... 4
1.4. Research Aim ................................................................................................................ 6
1.5. Research Objectives ...................................................................................................... 6
1.6. Research Questions ....................................................................................................... 7
1.7. Structure of the Thesis ................................................................................................... 7
1.8. Chapter-1 Summary....................................................................................................... 9

Chapter 2 : Conceptual Framework


2.1. Overview ..................................................................................................................... 11
2.2. Part-1: Existing Research and Standards for ICT-literacy ......................................... 12
2.3. Part-2: ICT-literacy Assessment and the Malaysian Smart School Project................ 13
2.3.1. ICT-literacy assessment for teachers ................................................................... 17
2.3.2. Why trainee teachers in Malaysia? ...................................................................... 18
2.3.3. Issues with ICT-literacy assessment .................................................................... 18
2.3.4. The need for a different instrument to assess trainee teachers’ ICT-literacy ....... 20
2.3.5. Cognitive and non-cognitive proficiencies in ICT-literacy ................................. 23
2.4. Part-3: Self-efficacy Versus Task-based Assessment.................................................. 27
2.4.1. Self-efficacy ........................................................................................................ 27
2.4.2. Task-based assessment ........................................................................................ 28
2.5. The Development Theory: Item Response Theory (IRT)............................................ 29
2.5.1. Why item response theory (IRT)? ....................................................................... 32
2.5.2. Issues with classical test theory (CTT) ................................................................ 35
2.6. The Conceptual Framework ........................................................................................ 36
2.7. Chapter-2 Summary..................................................................................................... 38

Chapter 3 : Review of the Literature


3.1. Overview ..................................................................................................................... 39
3.2. Part-1: Existing Research and Standards for ICT-literacy ......................................... 40
3.2.1. ICT-literacy and the knowledge society .............................................................. 41
3.2.2. ICT-literacy and schools ...................................................................................... 46
3.2.3. ICT-literacy standards.......................................................................................... 50
3.2.4. ICT-literacy and the learning theories ................................................................. 53
3.3. Part-2: ICT-literacy Assessment and the Malaysian Smart School ............................ 58
3.3.1. Assessing ICT-literacy ......................................................................................... 59
3.3.2. ICT-literacy in Malaysia ...................................................................................... 64
3.3.3. Commercialised ICT-literacy assessment tools ................................................... 66
3.4. Part-3: Task-based Assessment ................................................................................... 72
3.4.1. Task-based assessment issues .............................................................................. 72
3.4.2. Task-based test design ......................................................................................... 74
3.5. Chapter-3 Summary..................................................................................................... 75

Chapter 4 : Design and Methodology
4.1 Overview ..................................................................................................................... 77
4.2 The Choice of Methods: Research Techniques ........................................................... 78
4.3 The Research Design ................................................................................................... 79
4.3.1 Phase-1: Preliminary review ............................................................................... 79
4.3.2 Phase-2: Expert judgement on ICT-literacy indicators........................................ 80
4.3.3 Phase-3: Pilot testing, validation and final instrument testing ............................ 80
4.4 Data Analysis Technique............................................................................................. 81
4.4.1 Qualitative data analysis ...................................................................................... 81
4.4.2 Quantitative data analysis .................................................................................... 82
4.5 Participants .................................................................................................................. 87
4.5.1 Qualitative study participants – PoE members .................................................... 87
4.5.2 Quantitative study participants – Malaysian trainee teachers ............................. 88
4.6 Data Collection ............................................................................................................ 88
4.6.1 Phase-1: Preliminary review ............................................................................... 89
4.6.2 Phase-2: Expert judgement on ICT-literacy indicators........................................ 89
4.6.3 Phase-3: Instrument validation and testing.......................................................... 93
4.7 Validity and Reliability ............................................................................................... 94
4.7.1 Validity ................................................................................................................ 94
4.7.2 Reliability ............................................................................................................ 97
4.8 Ethical Issues ............................................................................................................... 99
4.9 Chapter-4 Summary .................................................................................................. 100

Chapter 5 : Data Analysis and Findings - Phase-2 Expert Judgement on ICT Indicators
5.1 Overview ................................................................................................................... 101
5.2 PoE Members Data.................................................................................................... 102
5.2.1 Step-1: Selecting invited members for the PoE ................................................. 102
5.2.2 Step-2 to Step-5: Delphi-1 ................................................................................. 103
5.2.3 Delphi-1 conclusions ......................................................................................... 113
5.2.4 Step-6 to Step-9: Delphi-2 ................................................................................. 116
5.3 Chapter-5 Summary .................................................................................................. 120

Chapter 6 : Data Analysis and Findings - Phase-3 Instrument Validation and Testing
6.1 Overview ................................................................................................................... 121
6.2 Designing the TBA Instrument ................................................................................. 122
6.3 Instrument Terms and Terminologies ....................................................................... 124
6.4 Pilot Testing-1 ........................................................................................................... 125
6.4.1 Pilot testing-1: Preparation ................................................................................ 125
6.4.2 Pilot testing-1: Preamble ................................................................................... 126
6.4.3 Pilot testing-1: Observation ............................................................................... 126
6.4.4 Pilot testing-1: Outcome .................................................................................... 127
6.4.5 Pilot testing-1: Instrument review ..................................................................... 134
6.4.6 Pilot testing-1 (repeated): New TBA evaluation form ...................................... 136
6.5 Pilot Testing-2 ........................................................................................................... 143
6.5.1 Pilot testing-2: Round-1 .................................................................................... 145
6.5.2 Pilot testing-2: Round-2 .................................................................................... 152
6.6 Final Instrument Trial Process .................................................................................. 156
6.6.1 Final instrument trial: Preamble ........................................................................ 159
6.6.2 Final instrument trial process: Observation....................................................... 160
6.6.3 Final instrument trial process: Findings ............................................................ 160
6.7 Trainee Teacher’s ICT-literacy Data Diagnostic ..................................................... 161
6.8 Chapter-6 Summary .................................................................................................. 170

Chapter 7 : Discussion of the Results
7.1 Overview ................................................................................................................... 171
7.2 Answering the Research Questions ........................................................................... 172
7.2.1 What are the suitable indicators for trainee teachers’ ICT-literacy assessment? 172
7.2.2 How can the proposed TBA instrument evaluate the level of ICT-literacy?......174
7.3 Comparison with Existing Instruments ..................................................................... 175
7.3.1 Comparing the approach of the TBA instrument with existing instrument...... 175
7.3.2 Comparing the contents of the TBA instrument with existing instrument........ 177
7.4 Chapter-7 Summary................................................................................................... 178

Chapter 8 : Conclusions
8.1 Overview .................................................................................................................. 179
8.2 The Need for a New ICT-literacy Instrument........................................................... 180
8.3 Existing Research and ICT-literacy Standards ......................................................... 180
8.4 Expert Judgements on ICT-literacy Indicators ......................................................... 184
8.5 ICT-literacy TBA Instrument Validation and Testing.............................................. 186
8.6 The ICT-literacy TBA Instrument: Concluding Thoughts ....................................... 187
8.7 The ICT-literacy TBA Instrument: Points to Consider ............................................ 188
8.8 Limitations of the Study ........................................................................................... 189
8.9 Unexpected Findings ................................................................................................ 189
8.10 Suggestions for Future Research .............................................................................. 190
8.11 Chapter-8 Summary.................................................................................................. 190
Reference lists ....................................................................................................................... 191
Glossary of terms................................................................................................................... 205
Appendix A ............................................................................................................................ 207
Appendix B............................................................................................................................ 212
Appendix C............................................................................................................................ 214
Appendix D ........................................................................................................................... 218
Appendix E ............................................................................................................................ 220
Appendix F ............................................................................................................................ 223
Appendix G ........................................................................................................................... 227
Appendix H ........................................................................................................................... 231
Appendix I(1) ........................................................................................................................ 233
Appendix I(2) ........................................................................................................................ 234
Appendix I(3) ........................................................................................................................ 235
Appendix I(4) ........................................................................................................................ 236

List of tables

Table 2.1. Structure of the original taxonomy of the cognitive domain ...................................... 24
Table 2.2. Anderson and Krathwohl’s revised taxonomy table .................................................. 25
Table 2.3. Rules of measurement ................................................................................................ 36
Table 3.1. Digital competence in a knowledge society ............................................................... 44
Table 3.2. Key themes in the 21st Century Skills’ report ........................................................... 45
Table 3.3. Approaches to ICT development in schools............................................................... 49
Table 3.4. Similarity of ICT components for ICT-literacy .......................................................... 53
Table 3.5. Four stages of ICT uptake in the ADL model ............................................................ 57
Table 3.6. Test instrument development matrix .......................................................................... 58
Table 3.7. Constructs and structures ........................................................................................... 60
Table 3.8. Barriers to teachers uptaking ICT .............................................................................. 62
Table 4.1. Quest output files ....................................................................................................... 85
Table 4.2. Example of a test instrument specification matrix ..................................................... 91
Table 4.3. Proposed validity test for educational and psychological measurement .................... 95
Table 4.4. Intended and unintended consequences of the ICT-literacy TBA instrument ............ 97
Table 5.1. List of identified ICT-literacy indicators .................................................................. 103
Table 5.2. List of reviewed ICT-literacy indicators................................................................... 104
Table 5.3. List of refined ICT-literacy indicators ...................................................................... 104
Table 5.4. List of ICT-literacy indicators and their activities .................................................... 105
Table 5.5. Mean score for relevance of indicators .................................................................... 107
Table 5.6. Test instrument specification matrix – draft TBA instrument.................................. 115
Table 5.7. Mean score for each task of the draft TBA instrument ............................................ 117
Table 6.1. Tasks and subtasks for draft TBA instrument .......................................................... 123
Table 6.2. The (PoE suggested) draft TBA instrument’s new arrangement.............................. 124
Table 6.3. Instrument terms and terminologies ......................................................................... 124
Table 6.4. List of test-items used in the draft TBA instrument ................................................. 127
Table 6.5. List of test-items that include partial credit format .................................................. 137
Table 6.6. List of finalised test-items included in the ICT-literacy TBA instrument ................ 156
Table 6.7. Total UPSI students by faculty/program/semester (for year 2010) .......................... 157
Table 6.8. Participant distribution by faculty/gender ................................................................ 159
Table 6.9. Test-item descriptor: ICT-literacy TBA instrument ................................................. 164
Table 6.10. Descriptors of Candidate-4’s unexpected incorrect test-items ............................... 166
Table 6.11. Descriptors of Candidate-4’s expected incorrect test-items ................................... 166
Table 6.12. Descriptors of Candidate-8’s expected incorrect test-items ................................... 168
Table 6.13. Descriptors of Candidate-8’s unexpected correct test-items................................... 168
Table 6.14. Descriptors of Candidate-40’s incorrect test-items ................................................ 170
Table 7.1. Example of commercially developed ICT-literacy tests........................................... 177

List of figures

Figure 2.1. The Malaysian Smart School milestone (four waves) .............................................. 15
Figure 2.2. Bloom’s taxonomy and the Gagne five learned capabilities ..................................... 26
Figure 2.3. Item analysis theories ................................................................................................ 30
Figure 2.4. Item characteristic curve (ICC) ................................................................................. 31
Figure 2.5. Item location and discrimination estimates in the ICC ............................................. 35
Figure 2.6. Conceptual research framework ................................................................................ 37
Figure 3.1. Part-1 of the conceptual research framework ............................................................ 40
Figure 3.2. The nine information literacy standards by ALA & AECT ...................................... 52
Figure 3.3. Part-2 of the research conceptual framework............................................................ 58
Figure 3.4. The higher education ICT proficiency model ........................................................... 59
Figure 3.5. Relationships between confidence barrier and other barriers ................................... 62
Figure 3.6. Example of the iSkillsTM assessment scenario-based question ................................. 71
Figure 3.7. Part-3 of the research conceptual framework............................................................ 72
Figure 4.1. Research design......................................................................................................... 79
Figure 4.2. Item characteristic curve (ICC) ................................................................................. 84
Figure 4.3. Case (person) and test-item distribution on a single scale......................................... 86
Figure 4.4. Test-item fit map ....................................................................................................... 87
Figure 5.1. Phase-2 of the research design ................................................................................ 101
Figure 6.1. Phase-3 of the research design ................................................................................ 121
Figure 6.2. Test-item fit map ..................................................................................................... 129
Figure 6.3. Test-item fit map (after test-item 15 was deleted) ................................................... 130
Figure 6.4. Quest variable map.................................................................................................. 132
Figure 6.5. Summary of test-item estimates and fit statistics .................................................... 133
Figure 6.6. Example of a partial credit format ‘steps’ and scores ............................................. 136
Figure 6.7. Test-item fit map (re-tested) .................................................................................... 138
Figure 6.8. Test-item fit map (after test-item-8 was deleted)..................................................... 139
Figure 6.9. Test-item analysis results for observed responses (test-item-2) .............................. 139
Figure 6.10. Test-item analysis results for observed responses (test-item-11) .......................... 140
Figure 6.11. Summary of test-item estimates and fit statistics .................................................. 141
Figure 6.12. Quest variable map (re-testing pilot test-1) ........................................................... 142
Figure 6.13. Test-item fit map (Pilot test-2) .............................................................................. 146
Figure 6.14. Test-item analysis results for observed responses (test-item-4) ............................ 147
Figure 6.15. Test-item analysis results for observed responses (test-item-10) .......................... 148
Figure 6.16. Test-item analysis results for observed responses (test-item-11) .......................... 148
Figure 6.17. Test-item analysis results for observed responses (test-item-15) .......................... 149
Figure 6.18. Test-item analysis results for observed responses (test-item-16) .......................... 149
Figure 6.19. Quest variable map (Pilot test_2) .......................................................................... 150
Figure 6.20. Summary of test-item estimates and fit statistics .................................................. 151
Figure 6.21. Test-item fit map (Pilot test-2 round-2)................................................................. 152
Figure 6.22. Quest variable map (Pilot test-2 round-2) ............................................................. 153
Figure 6.23. Summary of test-item estimates and fit statistics (Pilot testing-2 round-2)........... 153
Figure 6.24. Summary of test-item estimates and fit statistics (instrument trial) ...................... 160
Figure 6.25. Kidmap – showing an individual’s performance .................................................. 162
Figure 6.26. Interpreting the Quest Kidmap .............................................................................. 163
Figure 6.27. Kidmap for Candidate-4 ........................................................................................ 165
Figure 6.28. Kidmap for Candidate-8 ........................................................................................ 167
Figure 6.29. Kidmap for Candidate-40 ...................................................................................... 169
Figure 7.1. Proposed ICT-literacy assessment framework for trainee teachers......................... 173
Figure 7.2. The higher education proficiency model................................................................. 173
Figure 8.1. Conceptual research framework .............................................................................. 181
Figure 8.2. Phases in the research design .................................................................................. 186

List of acronyms

Abbreviation Meaning
1PL One parameter logistic
ACRL Association of College and Research Libraries
ADL model Autonomy, dependence, and learning model
AECT Association for Educational Communications and Technology
ALA American Library Association
ANOVA Analysis of variance
ANZIIL Australian and New Zealand Institute for Information Literacy
ASCII American Standard Code for Information Interchange
BECTA British Educational Communications and Technology Agency
CA computer anxiety
CAD Computer aided design
CC Carbon copy
CPD continuous professional development
CSE Computer self-efficacy scale
CTL control keyboard key
CTP Certified training professional
CTT classical test theory
DCA Digital competence assessment
DER Digital education revolution
DEST Department of Education, Science and Training
ECDL/ICDL European/International Computer Driving Licence
EDUCTRA European Commission Concerted Action
ETS Educational testing service
EU European Union
EUT experience with the use of technology
HCI human-computer interaction
HIS health information systems
ICDL International Computer Driving Licence
ICT Information and Communications Technology
ICTIF Information and Communication Technology Innovation Fund
IE Internet Explorer
INFIT MNSQ Infit mean-square
IPSI Sultan Idris Teachers Institute
IRT item response theory
IS information systems
ISTE International Society for Technology in Education
IT information technology
IU intention to use
LI Literacy indicator
MCQ multiple choice questions
MSS Malaysian smart school
NAE National academy of engineering
NCLB No child left behind

NCREL North Central Regional Educational Laboratory
NGO Non-governmental organizations
NRC National research council
OSCE objective structured clinical examinations
PC personal computer
PCCT PC competency test
PCM Partial credit model
PDA Personal digital assistant
PoE panel of experts
SD standard deviation
SSMS Smart school management system
TAC Teachers’ attitude toward computers
TAIT Train & Assess IT
TBA Task-based assessment
UPSI Sultan Idris Education University
UTM1013 Introduction to Information Technology & Communication
VCR Video cassette recorder

Abstract

The central concern of this study is to develop an ICT-literacy task-based assessment instrument
that may be used to evaluate trainee teachers’ level of ICT-literacy. The current literature
acknowledges the need for a measurement instrument that evaluates ICT-literacy levels. This
type of measurement instrument is used as an entry-level testing tool for university and job
placements. However, existing ICT-literacy assessment instruments are either too expensive to implement or too rigid in their expected answers; moreover, they are not tailored to a teacher's individual needs. The existing instruments rely either on self-efficacy techniques or on step-by-step tasks and instructions that allow no flexibility or creativity in completing the task.

In contrast, a task-based assessment method allows the participants the freedom to complete the task in any way they wish, as long as the task requirement is fulfilled. For example, if the task asks for an appropriate learning aid to be created that includes an image and a video, the participant is free to use whatever computer applications they feel comfortable with to edit the pictures, create the video and produce other digital learning aids. As long as the task requirement is fulfilled, the task is considered complete. Task-based assessment also allows the participants to show what they know, instead of merely telling what they think they know. It is considered the best method for this new ICT-literacy assessment instrument as it reveals the participant's actual ICT ability.

This study was conducted in three phases: Phase-1 preliminary review; Phase-2 expert
judgement on ICT-literacy indicators; and Phase-3 instrument validation and testing. In
Phase-1, a review of the literature was conducted that involved drawing on the existing
literature on ICT-literacy standards; existing ICT-literacy assessment instruments and the
Malaysian Smart School (MSS) requirements. Twelve ICT-literacy indicators were identified in
this first research phase. In Phase-2, the identified ICT-literacy indicators were evaluated by a
specially chosen panel of experts (PoE). Two Delphi interactions were then conducted where the
first was to evaluate the ICT indicators, and the second was to validate the draft ICT-literacy
instrument. In Phase-3, the draft ICT-literacy instrument was validated and tested through two
pilot tests, and finally the instrument was tested on a larger number of participants for its final
instrument trial.

The validation and testing process showed that the ICT-literacy TBA instrument is valid and reliable when tested on its intended participants, and that the instrument is ready for use. The instrument provides information on each participant's areas of weakness in ICT.

This instrument can become an important tool for schools and teacher training institutions as it identifies teachers' and trainee teachers' strengths and weaknesses in ICT knowledge and skills. Schools and teacher training institutions may use this knowledge to tailor their curricula to address these ICT strengths and weaknesses, ensuring that their teachers and trainee teachers possess the necessary ICT knowledge and skills.

Chapter 1: Introduction

1.1. Overview

This thesis investigates an alternative computer skills assessment instrument to evaluate the
information and communications technology (ICT) literacy levels of trainee teachers in
Malaysia. This study employs a ‘task-based’ method as suggested by the International ICT-
literacy Panel (2002), instead of relying on the more common 'pen-and-paper based' self-efficacy questionnaires that many researchers currently use. The study is based in Malaysia
within the context of the 2010 nationwide ‘Smart School’ project.

This chapter is organised into the following sections:


• Introduction;
• Research motivation;
• Research aim and objectives;
• Research questions;
• Structure of the thesis; and
• Chapter-1 summary.

1.2. Introduction

Since the advent of the microprocessor, computers have become ubiquitous in the workplace
and society (McKay 2005). As a result, there have been observable changes in computer and
ICT tool usage in the workplace that involve the relationship between work, private and public
life (Bradley 2006). According to Weiser (1999), in many areas of our daily lives ICT has
become increasingly prevalent, for example, logging trip mileage in our cars, cooking meals in
microwave ovens, managing the temperature in refrigerators, and selecting the right brew in
coffee-making machines. Previously, it was thought that computers were used exclusively for
manipulating data; however, for the younger generation, particularly those who were born in the
1990s, ICT tools have become part of their social life.

In one of his most widely debated articles, Prensky (2001a, p. 1) strongly suggests that today's generations are changing. Prensky states that these new generations 'think and process information fundamentally differently from their predecessors'. He refers to them as digital natives; others describe them as the net generation (Tapscott 1998) or generation-Y (Holley 2008). These new generations are assumed to be techno-savvy, possessing knowledge and skills with new media that older generations have difficulty coping with. For these digital natives, a new ICT gadget poses no problem because they are able to work it out in a matter of minutes. This tendency is attributed to their ability to 'assimilate' technology, whereas older generations need to 'accommodate' new technology (Tapscott 1998). The newer generations were 'born' with the new technology; to them, 'digital technology is no more intimidating than a VCR or a toaster' (Tapscott 1998, p. 1).

The implementation of ICT tools in an educational environment has been widely investigated. In the literature there are many research studies that concentrate on aspects of ICT in education and training. For instance, Albion (1996, 2001, 2003a, 2003b) conducted studies on trainee teachers' computer use and their self-efficacy beliefs in using ICT for their teaching. One of his studies showed that trainee teachers do have a positive attitude towards the use of computers in teaching and learning activities; however, a lack of confidence in their own knowledge remains a hindrance. In another study, Albion suggested that self-efficacy in using a computer increases the more the computer is used: the more experience trainee teachers have with using a computer, the more confident they will be in applying ICT tools to their teaching and learning activities. Examples and support from supervising teachers during their practical experience in classrooms also play an important role in increasing trainee teachers' computer skills in the classroom.

In Malaysia, Zainudin (2008) observed a similar situation with trainee teachers. In his study, Zainudin set out to determine the ICT skill levels of trainee teachers in Malaysian public institutes of higher learning (PIHL), based on six aspects: knowledge; skill; interest; attitude; self-efficacy/confidence; and accessibility. The study showed that trainee teachers' ICT skills varied among the 11 PIHL that participated. In general, the findings showed that the majority of the trainee teachers were competent in ICT. More than 50% of the trainee teachers understood the knowledge and skills needed to implement ICT in their teaching and learning activities, although the majority had problems with the knowledge and skills needed for computer programming and developing multimedia courseware. The trainee teachers' interest, attitude and self-efficacy/confidence were high, yet accessibility to facilities remained a problem: some PIHL had difficulty providing enough computers and other related materials to their students (Zainudin 2008). According to Albion (2003a), a lack of accessibility could affect trainee teachers' confidence to practise the use of ICT later in their classrooms. This notion also corresponds with Cuckle and Clarke's (2002) study of trainee teachers' views, practices and access to ICT tools and how well they are mentored. Cuckle and Clarke state that better access to equipment (ICT tools), as well as active support and encouragement from supervising teachers, would increase trainee teachers' use of ICT during their practical experience in classrooms.

In 1996 Christensen and Knezek developed and refined an instrument to measure teachers' attitudes towards computers, known as the Teachers' Attitude Toward Computers (TAC) questionnaire (Christensen & Knezek 1996). This questionnaire was subsequently used in Christensen's study of the effects of technology integration education on the attitudes of teachers and students (Christensen 2002). Training appeared to have a positive impact not only on the teachers' confidence to use computers in classrooms, but also on helping the teachers overcome their anxiety over their students' more advanced ICT skill levels. Christensen's study postulated that funding ongoing technology integration education for teachers would help and positively support them in integrating technology into their teaching and learning activities.

However, there are other areas that need further research and development; one of them is ICT-literacy. Katz and Macklin (2007) argued that the problem faced by most tertiary-level students today is their inability to navigate, evaluate and use the plethora of online information now available. ICT-literacy calls for computer-based abilities that are not restricted to technical skills: it also requires critical abilities to select, interpret and evaluate source materials of different kinds, as well as cognitive and information-processing abilities (Culp, Hawkins & Honey 1999).

Punie and Cabrera (2005) argue that ICT-literacy describes not only basic computer literacy, but rather higher-order skills such as knowing where to search for certain information; how to process and evaluate information; how to assess the reliability and trustworthiness of websites and other online sources; and many others. This is what is lacking in the digital natives' ICT skills. From the researchers' observations, while the digital natives are very proficient in using the Internet, many of them have difficulty when asked to perform a specific Internet search or to evaluate the credibility of Internet resources.

1.3. Research Motivation

It has been argued that the computer-based skills that digital natives possess today have significantly influenced their skills and interests in education (Bennett, Maton & Kervin 2008). According to Prensky's observation, many of today's 'tradition-bound educational systems' seem to ignore their eyes, ears and intuition, and pretend that this issue does not exist (Prensky 2001b). There is a substantial disparity between the technological skills and interests that these digital-native teacher trainees possess and the limited technology-based or blended teaching strategies available (Levin & Arafeh 2002; Prensky 2005). Therefore, to educate these digital-native teacher trainees, this thesis proposes that teachers require new pedagogical ICT skill development.

Instead of a one-size-fits-all curriculum, schools need to encourage individualised learning. Instead of producing the best exam-based students, schools should encourage students to collaborate and set the stage for 'lifelong learning' (Tapscott 2009). In Malaysia, the current educational system is exam-based, with a centrally controlled, nationalised curriculum from primary school to secondary school. This means that primary and secondary school students across the whole country learn the same topics and sit standardised examinations, the outcomes of which determine the students' entry into colleges or universities. A report by the World Bank (2003) described these drawbacks as ill-suited to providing people with appropriate skills and knowledge. The report further argues that rote learning, exam-based schooling and the high cost of private education have been a policy concern in some Asian countries for quite some time. On the other hand, in order to participate effectively in 21st-century society, an individual needs to be better informed, have greater thinking and problem-solving abilities, be more self-motivated, have a larger capacity for cooperative interaction, possess more varied and more specialised skills, and be more resourceful and adaptable than ever before (Field 2006). In some countries these changing views have prompted the abandonment of the 'traditional view' of education, where the schooling
years are the time in which students would learn all the skills and knowledge that a productive individual would require in a lifetime. Instead, this has been replaced by a new view of education in which students are prepared during their schooling years with the skills and knowledge necessary to participate effectively in 21st-century society. Hence the Malaysian Smart School (MSS) concept, proposed in 1997, was perceived as the catalyst for changing the 'traditional view' of how Malaysian school systems operate.

In 1991, as part of Malaysia's effort to become a fully developed nation, Tun Dr Mahathir Mohamad, former prime minister of Malaysia (1981–2003), presented a working paper outlining his 30-year vision of a fully developed Malaysia, known as 'Vision 2020'. He identified nine challenges that Malaysians need to overcome in order for the country to become fully developed (see Mahathir 1991). One of the nine challenges is to become a knowledge-based society, and creating an ICT-literate society is the central platform in achieving that transformation. The MSS project was regarded by the former prime minister as a specific response to Malaysia's need to make this critical transformation. As such, in July 1997 he launched the MSS implementation plan, which aimed to achieve a unified and stabilised use of technology as the key enabler for teaching and learning by 2020 (see Chapter-2 section 2.3 for more detail).

Since then, ICT-literacy has been actively promoted in Malaysian schools by various agencies of the Malaysian Ministry of Education. The Ministry has also made it compulsory for all trainee teachers to be exposed to ICT tools and, by implication, to the use of ICT-literacy in their pedagogical strategies (Chan 2002b), which has curricular implications.

This thesis proposes that ICT tools have the potential to change the role of
teachers: teachers will no longer be the source of knowledge and skills, but will
instead work together with students to explore new knowledge and skills.

In spite of this, ICT tools must never be mistaken for the mechanism that we learn from, but rather seen as the tool that we learn with. ICT tools such as computers, for example, should be regarded as a learning aid rather than a learning point.

Yet in looking at the current Malaysian school scenario, particularly the MSS, Chan (2002a)
identifies a serious gap between what is being understood by the teachers and what they actually
practise.

Therefore, this thesis proposes that there is a need for a more reliable,
task-based instrument to assess teachers' ICT-literacy levels, rather than a
simplified paper-based instrument.

There is also a distinct lack of suitable ICT-literacy assessment instrumentation (Calvani, Cartelli, Fini & Ranieri 2008; Dakich 2008). Calvani and his colleagues discovered that instruments previously developed to assess ICT-literacy were not adequate in an educational setting: many instruments focus on the mastery of specific technical skills, with little emphasis on competences useful for teachers or school children. In their study, they developed three ICT-based tests: instant digital competence assessment (DCA), situated DCA, and projective DCA. These tests can last from one to four hours and are intended for students aged 15 to 16 years. Instead of testing the students' basic ICT skills, these tests focused on the students' ability to adapt to new ICT tools and their ability to resolve common setbacks when using ICT tools.

1.4. Research Aim

This research aims to develop and validate an enhanced task-based assessment (TBA)
instrument to evaluate ICT-literacy levels for Malaysian trainee teachers. Instead of using ‘pen
and paper-based' self-efficacy questionnaires, as many researchers have previously done, this
study uses a ‘task-based’ method suggested by the International ICT-literacy Panel (2002). The
panel was established in January 2001, when the Educational Testing Service (ETS) assembled
experts from education, government, non-governmental organisation (NGO) participants, and
the private sector from Australia, Brazil, Canada, France, and the United States. The main focus
of this panel was to study the growing importance of existing and emerging ICT tools and their
relationship to computer/ICT-literacy.

1.5. Research Objectives

To ensure the abovementioned aims are met, the primary objective of this research is to validate
the task-based ICT-literacy indicators. The proposed ICT-literacy indicators are based on:
previous studies of ICT-literacy and ICT-literacy assessments; ICT-literacy standards;
previously developed assessment instruments; and the MSS requirements.

Based on these task-based ICT-literacy indicators, an enhanced framework
for ICT-literacy assessment for trainee teachers will be proposed.

Using relevant research design, methodology and analysis, this study continues with the
development and validation of the proposed TBA instrument. Finally, the TBA instrument will
be tested on real participants and the outcomes will be reported.

Thus the research objectives for this study are to:


1. develop a TBA instrument to evaluate ICT-literacy levels of trainee teachers in
Malaysia;
2. validate the TBA instrument; and
3. propose a suitable ICT-literacy assessment framework to increase the ICT-
literacy levels of trainee teachers in Malaysia.

1.6. Research Questions

The major research questions for this study are:

1. What are the suitable ICT-literacy indicators for trainee teachers’ ICT-literacy
assessment?
2. How can the proposed task-based ICT-literacy assessment evaluate trainee
teachers’ ICT-literacy levels?

1.7. Structure of the Thesis

Chapter-1 Introduction: sets the stage for this thesis. This chapter provides an overview of
the research and starts with an introduction, which involves the increasingly prevalent use of
ICT tools and the subsequent need to understand whether this increased usage translates into
improved mastery when employing ICT in a teacher training setting. This is followed by a brief
introduction to the MSS project that motivates this study. The aims of the study, the research
objectives and the thesis questions are then proposed.

Chapter-2 Conceptual research framework: addresses the conceptual framework of this research. The chapter introduces the significant body of existing work that serves as the theoretical foundation for this thesis. The chapter is divided into three main sections concerning the conceptual research framework: existing research and standards for ICT-literacy; ICT-literacy assessment and the MSS; and task-based assessment.

Chapter-3 Review of the literature: provides the theoretical base and ideology for the
subject content of the study. Based on the structure of the conceptual framework, this chapter
begins with a discussion of the definition of ICT-literacy itself and why it must include both technical computer/ICT-literacy and information literacy. This is also
the beginning of Phase-1 of this study. The use of the higher education ICT proficiency model
that was developed by the Educational Testing Service (ETS) is further elaborated in this
chapter. The current situation of ICT-literacy in Malaysia and the MSS project are also
explained further. The chapter describes how this study relates to learning theories and how
cognitive learning theory is implemented. The TBA instrument design is further explained, and
the pre-identified ICT-literacy indicators from the literature are also listed.

Chapter-4 Design and methodology: justifies the use of the mixed method design. The
chapter elaborates on the three phases of the research design (Phase-1: Preliminary review;
Phase-2: Expert judgement on ICT-literacy indicators; and Phase-3: Instrument validation and
testing). There is further discussion of the Delphi technique and the Rasch item response theory (IRT) model, which were applied to the qualitative and quantitative parts respectively. The chapter continues with a detailed explanation of how the data were collected, adhering to the proposed methodology, while the validity, reliability and ethical aspects of this study are also discussed.
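
As brief background for the quantitative analysis mentioned above (the exact model specification used in this study is the one set out in Chapter-4), the Rasch, or one-parameter logistic (1PL), model expresses the probability that person n answers test-item i correctly in terms of the person's ability \theta_n and the test-item's difficulty b_i on a common logit scale:

    P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

Because persons and test-items share this single scale, they can be displayed together, as in the case/test-item distributions and Quest variable maps reported in Chapter-4 and Chapter-6.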

Chapter-5 Data analysis and findings (expert judgement on ICT-literacy indicators):


constitutes Phase-2 of this study. This chapter concentrates on the expert judgement process.
The Delphi technique was implemented for this phase. The phase was divided into two parts: 1) validating the ICT-literacy indicators; and 2) validating the TBA instrument. Each part was conducted in two rounds. Consensus was achieved among the experts after two rounds, so there was no need for a third round, and the draft TBA instrument was then ready to be tested on its intended participants.

Chapter-6 Data analysis and findings (instrument validation and testing): constitutes
Phase-3 of this study. The chapter describes the TBA instrument testing process. The pilot
testing and final instrument testing process are explained in detail here.

Chapter-7 Discussion of the results: brings the thesis back to the research questions. How
the research questions were answered by this thesis is discussed in detail. The significance of
this study to the research community and studies in ICT-literacy and how it benefits trainee
teachers in Malaysia are explained. The difference in approach and content of the proposed
TBA instrument from its predecessor is also examined.

Chapter-8 Conclusion: revisits the conceptual research framework. The outcome for each
of the three parts is discussed. Limitations of this study, unexpected findings, and suggestions for future research are also presented.

1.8. Chapter-1 Summary

The chapter introduced the background and justification for this thesis. There is a significant
need in Malaysia’s educational and information communication technology research institutions
for a reliable instrument that can evaluate and identify the strengths and weaknesses of current
teacher training with reference to the ‘Smart School’ project. Findings from other studies also
suggest there is a need for a sound pedagogical embedded instrument that evaluates trainee
teachers’ level of ICT-literacy. In the next chapter, the topic of ICT-literacy and its assessment
is further elaborated and a conceptual framework for this thesis is proposed.

Chapter 2
Conceptual Research Framework

2.1. Overview

This chapter discusses and justifies the research design used, reveals unexplored avenues of
research in the ICT-literacy literature, and elaborates on three vital areas: 1) existing research and
standards for ICT-literacy; 2) ICT-literacy assessment and the Malaysian Smart School (MSS)
project; and 3) self-efficacy assessments versus performance-based assessments. Part-1 of this
chapter sets the stage for this study where existing research on ICT-literacy and the ICT-literacy
standards are introduced. Part-2 establishes the need for an enhanced ICT-literacy assessment tool
for trainee teachers, with justifications on why a new instrument is needed. Part-3 argues the need
for a performance-based assessment tool and the reasons for the unsuitability of self-efficacy
assessment to assess ICT knowledge and computer skills. The proposed instrument development
theory is justified, and the conceptual framework for this thesis is then presented.

This chapter is divided into the following sections:


• Part-1: Existing research and standards for ICT-literacy;
• Part-2: ICT-literacy assessment and the Malaysian Smart School project;
• Part-3: Self-efficacy versus task-based assessment;
• The development theory: item response theory (IRT);
• The conceptual research framework; and
• Chapter-2 summary.

2.2. Part-1: Existing Research and Standards for ICT-literacy

The literature that discusses ICT-literacy frequently uses the terms computer fluency,
information literacy, and digital competency synonymously (International ICT literacy Panel
2002; Bunz 2004; Williamson, Katz & Kirsch 2005; Markauskaite 2007; Calvani, Cartelli, Fini
& Ranieri 2008; Pernia 2008; Istance & Kools 2013; Kim & Lee 2013). Although the definitions
offered by researchers differ, there is an overarching theme: ICT-literacy describes not only a technical ability in using a computer, but also other intellectual competencies, such as problem solving and critical thinking, which a person must possess in order to live comfortably in a knowledge-based society. Istance and Kools (2013) propose that digital literacy includes information handling skills, and the capacity to judge the relevance
and reliability of web-based information. It has also been suggested that ICT-literacy will be the
catalyst for changing the way that education and training are conducted. Essential components of
ICT-literacy will influence the necessary skills and knowledge that improve the quality of
education for the future workforce (International ICT Literacy Panel 2002).

Research into ICT-literacy is divided into two paradigms: technical literacy and information
literacy. Technical literacy involves the participants’ ability to properly utilise the ICT tools and
applications. It could comprise participants’ expertise in using such tools as: computer
applications; digital still/video camera; scanner; social networking tools, etc. (Markauskaite
2007). Information literacy involves the acquisition of skills and knowledge in using ICT tools
to search, evaluate and judge online information (Livingstone 2004). It incorporates the
knowledge of responsible and ethical use of the online information. Studies in both areas focus
on the participants’ confidence levels, perception of their skills and ability, or dealing with the
digital divide.

Following the inauguration of the International ICT Literacy Panel in 2000, the International
Society for Technology in Education (ISTE) in 2008 proposed that in order to accurately assess
the level of ICT-literacy, such an assessment must include both technical and information
literacy (International ICT Literacy Panel 2002; International Society for Technology in
Education 2008). Two ICT standards, the Information Literacy Competency Standards for Higher Education developed by the Association of College and Research Libraries (ACRL) and the Australian and New Zealand Information Literacy (ANZIIL) framework, also echo the views of the 2000 International ICT Literacy Panel and the 2008 ISTE standards (Association of College and Research Libraries 2000; ANZIIL 2008).

One framework frequently adopted in subsequent studies of ICT-literacy assessment is the higher education ICT proficiency model developed by the International ICT Literacy Panel in collaboration with the Educational Testing Service (ETS) (Williamson, Katz & Kirsch 2005). The Panel listed seven critical digital skill development abilities for ICT-literacy,
which include:
• define: ability to use ICT tools to identify and appropriately represent information
needed;
• access: know about and know how to collect and/or retrieve information in digital
environments;
• manage: apply an existing organisational or classification scheme for digital information;
• integrate: interpret and represent information. It involves summarising, comparing and
contrasting information from multiple digital sources;
• evaluate: making judgements about the quality, relevance, usefulness, or efficiency of
digital information;
• create: generating information by adapting, applying, designing, inventing, or authoring
information in ICT environments; and
• communicate: communicate information properly in the context of ICT environments.

These seven critical skill development abilities serve as the backbone for this thesis. Along with
other identified digital skills that emerge from currently available assessment tools and MSS
requirements, they are later evaluated for their suitability for trainee teachers in Malaysia.
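Because these abilities are treated here as candidate indicators to be mapped onto assessment tasks, it can help to hold them in a machine-readable form so that task coverage can be checked. The sketch below (in Python) is an illustrative representation only; the draft task names and their indicator mappings are hypothetical assumptions, not part of the instrument developed in this thesis.

# Candidate ICT-literacy indicators from the ETS higher education proficiency model
# listed above. The draft task names and their indicator mappings below are hypothetical
# illustrations, not items from the TBA instrument developed in this thesis.
ETS_INDICATORS = {
    "define": "use ICT tools to identify and represent the information needed",
    "access": "collect and/or retrieve information in digital environments",
    "manage": "apply an organisational or classification scheme for digital information",
    "integrate": "summarise, compare and contrast information from multiple digital sources",
    "evaluate": "judge the quality, relevance, usefulness or efficiency of digital information",
    "create": "generate information by adapting, designing or authoring in ICT environments",
    "communicate": "convey information appropriately within ICT environments",
}

def indicators_covered(task_indicator_map):
    """Return the set of ETS indicators touched by a draft set of assessment tasks."""
    covered = set()
    for indicators in task_indicator_map.values():
        covered.update(i for i in indicators if i in ETS_INDICATORS)
    return covered

# Hypothetical mapping of draft tasks to indicators, used to check coverage of the model.
draft_tasks = {
    "Task-A: prepare a lesson handout": ["create", "communicate"],
    "Task-B: locate and appraise a web resource": ["access", "evaluate"],
}
missing = sorted(set(ETS_INDICATORS) - indicators_covered(draft_tasks))
print(missing)  # indicators not yet covered by the draft tasks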

2.3. Part-2: ICT-literacy Assessment and the Malaysian Smart School Project

ICT is a powerful enabler for a country’s development goals. Research suggests that ICT plays a
significant role in the overall national development strategies worldwide. For example, in
Timor-Leste the costs of mobile and fixed phone services were too expensive for the average
citizen, affecting most of the country’s communication and development. In 2010 the Village
Telco Project was put forward for implementation in Dili, one of the country’s largest cities.
The program was a collaborative initiative to build a low-cost, community telephone network
able to be set up in minutes from anywhere in the world. Today, many Timorese use this
technology with significant demand for more nodes (network connection points).

This technology has also aided small businesses to grow and improve (Cadena 2010). The 2008
Australian labour force survey (Australian Trade Commission 2011) reports that approximately
400,000 Australians or 1.83% of the population are employed in ICT occupations or other

specific ICT-based industries; there are about 30,000 such businesses in Australia. The
Australian ICT market is worth nearly AU$100 billion, the fifth largest market in the Asia-
Pacific region. Between 2001 and 2008 the growth rate of Australia’s ICT market was estimated
at nearly 14%, faster than Japan, South Korea, Hong Kong, and Taiwan. These examples show that both Timor-Leste and Australia have been influenced significantly by the ICT economic sector, which has been a driver of each country's development.

Despite positive outcomes in the development of ICT tools, the human dimensions of human-computer interaction (HCI) cannot be ignored (McKay 2008). With regard to new technologies
that are continually being developed and implemented, the question of how to better educate
children and young adults at school, to prepare them for the information age, is raised. One of
the most commonly cited arguments for using ICT in education is to better prepare individuals
for a workplace where ICT tools, especially computers and the Internet, are ubiquitous.
Technological literacy, which is the ability to use ICT efficiently and effectively, represents a
competitive edge in growing globalised job markets.

Many countries include ICT tools in teaching and learning strategies. The Australian
Government committed AU$2.2 billion over 6 years in its 2008 budget announcement of the
digital education revolution (DER), the purpose of which is to contribute to sustainable and
meaningful change in teaching and learning in Australian schools that prepare students for
higher education, training, and living and working in a digital world (DEEWR 2010).

The DER seeks the following outcomes for digital teaching and learning in Australia: 1) a
national, consistent approach to e-learning and ICT that enables collaboration between schools,
systems, and sectors; 2) effectively integrated e-learning in national curricula, assessment, and
reporting arrangements for schools; 3) teachers capable, confident, and effective at integrating e-
learning in the classroom; 4) e-learning and ICT arrangements that are sustainable and capable
of capitalising on the educational value of emerging technologies; and 5) high quality digital
learning resources readily discovered, accessed, used, and shared by schools.

Meanwhile, in Malaysia there are three core visions for the MSS project: 1) changing the
teachers’ role in an electronic classroom from being information providers to counsellors to help
students develop knowhow and judgement to select information sources; 2) enhancing students’
abilities to make the right judgement given an overwhelming array of choices; and 3) creating a
curriculum where people learn how to develop lifelong learning strategies (Multimedia
Development Corporation 2007a).

Between 2006 and 2010 Malaysian teachers underwent a continuous professional development
(CPD) program in which they trained to enhance competency in integrating ICT in their
teaching and learning processes (Multimedia Development Corporation 2005).

To prepare for the nationwide implementation of the MSS project, schools were actively
encouraged to use their own initiative by using their own financial resources and expertise.
Teachers were considered the primary variable in the success of the project: if the teachers were
not trained well, the entire infrastructure – including the money invested – was in danger of
remaining idle.

Figure 2.1. The Malaysian Smart School milestone (four waves). Source: (Multimedia Development Corporation 2007b)

In 2003 the MSS steering committee decided that the MSS project must be implemented in the
rest of the country. The flagship coordination committee was convened and resolved to: 1) take
note of the completion of the MSS pilot project; 2) affirm the MSS project as the basis for all
technology initiatives in education; 3) agree to the rollout of the MSS project; and 4) agree that
the Ministry of Education’s MSS steering committee would develop and recommend an optimal
rollout model with a phased implementation approach to the MSC Malaysia Implementation
Council chaired by the Prime Minister and the Cabinet (Multimedia Development Corporation
2005). However, the MSS project was halted at wave 2 (see Figure 2.1) due to the economic
downturn and political and policy changes that occurred. Although the pilot schools were
successful and the outcomes positive, the project’s national rollout failed to launch on schedule.

This, however, does not mean that the project was a failure. In fact, lessons learned from the
pilot project were valuable and the delay offered Malaysia time to reflect on the weaknesses and

allow discovery of newer, more suitable technologies. A consultative report based on feedback
from the pilot project was written. The feedback came from both the Ministry of
Education/Telekom Smart School team, which conducted the technological and infrastructure
review of the pilot project, and a group of experts from the local universities, commissioned to
evaluate the project’s human aspects.

In the report, the most important recommendations that emerged were in the areas of technical
maintenance and in the need for more supportive monitoring of schools. The report also
highlighted seven areas concerning human aspects that included: 1) teaching-learning materials;
2) teacher training; 3) response to change; 4) technology infrastructure; 5) help desk; 6) the
Smart School Management System (SSMS); and 7) student/parent feedback (Multimedia
Development Corporation 2005). Some of the concerns related to the limited use of the
teaching-learning materials, since some materials could not accommodate the students’ needs
and did not reflect the complete curricula. Almost half of the teachers surveyed agreed that in-
house training provided by schools was only moderately successful in achieving their
objectives. Training was also lacking on how to teach the smart way for newly trained teachers
who transferred to the MSS. In addition, the SSMS was reported to have problems with three of
its 31 SSMS components and only 16 of 31 components were being used by principals and
heads of schools. Parents were also not well informed about the unique features of their
children’s ‘Smart School’, though they knew that their children attended such a school.

The most significant feedback relates to teachers, since they were the crucial factor in the
project and had direct contact with students. Limited use of ICT-based teaching and learning
materials, ineffective in-house training, and lack of training for new teachers, were among the
difficulties suggested by the report. It was felt that the best way to resolve these problems was to
treat the problems at the root cause, specifically during a teacher’s university training years.

This thesis proposes that empowering teachers with appropriate ICT skills and ICT knowledge acquisition boosts their confidence and ability to use ICT tools effectively in their later classroom teaching and learning strategies.

Current trainee teachers are not confident in their own abilities, though most agree that it is important for teachers to be ICT literate. Some are ill prepared and insufficiently skilled because the integration of ICT skills into teaching and learning instructional strategies was not modelled sufficiently for them (Wilson 1990; Zhang & Martinovic 2008).
Albion (2003a) suggests that experience contributes to the development of enhanced skills and

attitudes towards ICT tools, thereby increasing the possibility of trainee teachers applying those
acquired skills in the future.

This thesis proposes an assessment tool that evaluates teacher ICT-literacy levels
and identifies areas of weakness. Since in-house training proves to be
inadequate, this study investigates ICT-literacy assessment tools for trainee
teachers. It is proposed that, with the assessment administered during their study, trainee teachers will have a positive attitude and the appropriate ICT knowledge and skills necessary to teach in a ‘Smart School’ environment by the time they are posted to schools.

2.3.1. ICT-literacy assessment for teachers

In 2004 a group from the National Academy of Engineering (NAE) and the National Research
Council (NRC) conducted a study to determine the most viable approach to assessing
technological literacy in the USA for K-12 students, K-12 teachers, and out-of-school adults.
The report found that there was very little information available on the technological literacy of
teachers (NAE & NRC 2006). Although many school children have sophisticated technological
capabilities, they cannot be fully technologically literate unless their teachers are.

There is an urgent need for an in-depth study on this topic and development of a
suitable task-based (technological capability) assessment instrument.

The need for a comprehensive study is vital in the MSS project. The project includes a need for
integration of knowledge, skills, values, and attitudes suitable for a modern technological
society (Smart School Project Team 1997). The project suggests that ICT-literacy will be
emphasised to prepare students for their future. In the 1997 conceptual blueprint for the MSS
project, the project team listed the abilities expected of students that include competencies to
use ICT tools and sources to: 1) collect, analyse, process, and present information; 2) support
meaningful learning in various contexts; and 3) prepare students for employment (Smart School
Project Team 1997). The competencies listed by the team coincide with the definition of
technology literacy described by NAE and NRC; it called for an understanding of technology at
a level that enables effective functioning in a modern technological society (NAE & NRC
2006). In focusing on improving school children’s ICT-literacy, it is possible that current
Malaysian trainee teachers may not be adequately prepared to teach under this new approach. In
the USA, one of the limiting factors for technological studies in K-12 is inadequate preparation
of teachers to teach technology:

Schools of education spend virtually no time developing technological literacy in those who will eventually stand in front of the classroom. ... without teachers trained to carry out this integration, however, technology is likely to remain an afterthought in American education (NAE & NRC 2002, p. 55).

2.3.2. Why trainee teachers in Malaysia?

Teachers must be well prepared and must attain an appropriate level of ICT-literacy before they are capable of teaching this new generation of school children effectively. Developing individuality, creativity, and initiative among these school children is vital in the MSS project. In the MSS project, ICT tools are essential in making teaching and learning processes easier, more fun, and more effective. They also make communication and management more efficient. In fact,
the MSS conceptual blueprint associates ICT with enabling technology for teaching and
learning, thus placing ICT as the facilitating tool (Smart School Project Team 1997).
Technology, a large amount of which includes ICT, is to be implemented in all parts of the
school, including MSS administration. The SSMS includes nine primary functions, namely:
school governance; student affairs; educational resources; external resources; facilities; human
resources; financial management; technology; and security. The SSMS facilitates everything
from day-to-day management and operation of the school to technology management and
security of school assets and data (Smart School Project Team 1997). SSMS is a comprehensive
software system developed by the Malaysian Ministry of Education to facilitate resource
management and administration. Teachers use the SSMS for classroom administration such as:
writing reports; taking attendance; setting timetables; and preparing lesson plans. Skills
acquisition at an appropriate ICT-literacy level is expected of trainee teachers who will soon
teach at an MSS.

Therefore, ensuring that the people involved have the appropriate ICT skill development is imperative for the successful implementation of the MSS project.

This thesis believes that it is important for teachers to be trained appropriately in the knowledge and skills necessary so that they can fulfil their roles in an ICT-based classroom setting.

Technology, particularly ICT, acts as a catalyst in the process of transforming traditional schools into smart schools (Zain, Atan & Idrus 2004).

2.3.3. Issues with ICT-literacy assessment

Studies assessing teacher competency when using ICT tools use both quantitative and
qualitative research designs with questionnaires and interviews being the most common
methods (Becker & Ravitz 2001; Kurbanoglu, Buket & Aysun 2006; Markauskaite 2007). In
developing competency assessment instruments, many researchers adopt the theory of self-

efficacy, which refers to Bandura’s Social Cognitive Theory (Wood & Bandura 1989; Bandura
1991).

Self-efficacy is defined as one’s belief in their own ability to execute a certain action (Bandura
1997). Bandura espouses the idea that people’s abilities can be predicted through their level of
self-belief. Unless the person believes that they can accomplish an expected outcome, they have
little motivation to pursue or complete a given task.

The literature uses self-efficacy assessment extensively as a technique to assess computer knowledge and skills (for example, Markauskaite 2007). However, self-efficacy assessment does not fully explain actual performance (Thompson 1990). It has been shown that self-efficacy assessment can predict attitudes and feelings (readiness, confidence, and
preparedness); however, its accuracy in foretelling a person’s ability (cognitive, meta-cognitive,
and practical) is more complicated (Braddlee & Matthews-DeNatale 2006; Ballantine, McCourt
Larres & Oyelere 2007; Hilberg & Meiselwitz 2008). A number of studies report that there is a
propensity for people to either over or underrate themselves (Boud & Falchikov 1989; Larres,
Ballantine & Whittington 2003). This discrepancy is more apparent among high achievers
(those with more experience) and low achievers (those with less experience); high achievers
tend to underrate themselves while low achievers overrate.

Another example is a study by Forster, Dawson, and Reid (2005) who proposed to develop an
assessment tool to measure Australian teachers’ preparedness to teach secondary school science
using ICT. One of the challenges the research team faced was finding a single Likert scale to
represent computer ICT-literacy skills and knowledge. They combined two scales, one for ICT
skills and one for ICT knowledge acquisition, and acknowledge that a limitation of their study
was that self-efficacy questionnaires only measure a respondent’s perception of their skills and
knowledge. Self-efficacy questionnaires do not explain the extent to which respondents
demonstrate knowledge and competencies (Forster, Dawson & Reid 2005).

However, as previously mentioned, the International ICT Literacy Panel, which conducted a study on ICT tools and their relationship to ICT-literacy, published its report in 2002 and suggested seven critical components for ICT-literacy, which formed the higher education ICT proficiency model (International ICT Literacy Panel 2002; Williamson, Katz &
Kirsch 2005). The report also suggested that a richer method of collecting ICT-literacy
capability data is to use a series of computer-based simulative tasks that integrate both the
cognitive and technical domains since ‘valuable information will be lost if it is not conducted in
real-world settings’ (International ICT Literacy Panel 2002, p. 21).

The purpose of this thesis is to develop a TBA tool as recommended by the International ICT Literacy Panel (2002).

By discovering trainee teachers’ weaknesses in ICT-literacy, teacher training programs can then effectively target the development of these skills in their own program (Caplan & Graham 2008).

2.3.4. The need for a different instrument to assess trainee teachers’ ICT-literacy

Several studies have been conducted on students (Katz 2007; Russell & Finger 2007), trainee
teachers, and in-service teachers (Graham & Glen 1997; Dawes 2000; Luke 2001; Knezek &
Christensen 2002; Jamieson-Proctor, Burnett, Finger & Watson 2006; Shattuck et al. 2011).
Most involve participants’ perceptions and attitudes on their preparedness to integrate ICT as
tools, or teaching ICT as the class subject. Findings have been contradictory. For instance:
Albion (2003b, 2003a) found that compared to their predecessors, trainee teachers are prepared
and willing to use and integrate ICT to enhance their instructional strategies. Markauskaite
(2007) suggests that trainee teachers are between quite confident and moderately confident with
their basic and advanced technical computer skills. There is also a suggestion that trainee
teacher reluctance to using ICT in their teaching practice is the result of insufficient ICT
pedagogical training in teacher training institutions. Cuckle and Clarke (2002) found that trainee
teachers do have good ICT skills for personal academic use, yet when it comes to implementing
these skills in a classroom environment, they are ill prepared.

Many barriers inhibit the professional practice of ICT instruction for in-service teachers.
Teachers’ confidence levels are an important factor: teachers with low or no confidence avoid
using ICT (Dawes 2000; Jamieson-Proctor, Burnett, Finger & Watson 2006; Shahadat, Hasan &
Clement 2012; Tsai & Chai 2012). Technical support from the school, training quantity, and
training quality, also correlate with teachers’ competencies and anxieties (Graham & Glen 1997;
Tsai & Chai 2012). In addition, resistance to change, particularly among older teachers, and the way ICT is used at school present further dilemmas. Most teachers prefer to use ICT to
enhance rather than transform their current curriculum (Jamieson-Proctor, Burnett, Finger &
Watson 2006). Rather than changing their old teaching module to include ICT in their teaching
activities, most teachers felt that it would be easier to use ICT to accommodate and enhance
their existing/older teaching modules. For example: instead of using the overhead projector
(OHP), the teachers would input their teaching notes into a digital presentation application (for
example, MS PowerPoint); or instead of manually creating a class timetable, the teachers are
now using spreadsheets. It is proposed that these teachers may not have the initiative or time to

explore other aspects of ICT that they could use to improve their teaching activities (BECTA
June 2004).

In addition, Tsai and Chai (2012) proposed another barrier that should be discussed: teachers’ design thinking. Even if teachers receive sufficient support to overcome both their intrinsic and external barriers, this still cannot guarantee that technology integration will happen in their teaching and learning activities. Teachers must also have the ability to reorganise or create learning materials and activities, adapting to the instructional needs of different contexts or varying groups of students.

The quality of the research design in ICT-literacy studies is also an issue because too many
research studies applied the self-assessment methodologies (see Compeau & Higgins 1995;
Torkzadeh & van Dyke 2001; Durndell & Haag 2002; Jamieson-Proctor, Burnett, Finger &
Watson 2006; Markauskaite 2007; Ball & Levy 2008). Computer self-efficacy involves a belief
in one’s capability to use a computer. Markauskaite’s (2007) study, for example, which was
based on two products of social-cognitive theory – the self-efficacy theory and the theory of
planned behaviour – employs the ICT-literacy model proposed by the International ICT
Literacy Panel (2002). Markauskaite utilised the self-efficacy theory in her questionnaire
design. Each test-item in her questionnaire started with the phrase ‘I believe I have the
capability …’ and was measured using a six-point Likert scale. Murphy, Coover and Owen
(1989) included a 32-test item computer self-efficacy scale (CSE) to measure perceptions of
capability pertaining to specific computer-related knowledge and skills. Since then, the ICT
self-efficacy scale has been refined and modified according to current information technology
(IT) needs.
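To make the contrast concrete, the sketch below (a hypothetical illustration in Python; the item wording and scoring values are assumptions, not items from any published instrument) shows how a six-point Likert self-efficacy item differs in form from the kind of task-based item this thesis argues for: the former records a belief, the latter scores observed performance.

# Hypothetical illustration of the two item formats discussed above.
# Self-efficacy item: the respondent rates a belief on a six-point Likert scale.
self_efficacy_item = {
    "prompt": "I believe I have the capability to calculate student grades in a spreadsheet",
    "scale": list(range(1, 7)),  # 1 = strongly disagree ... 6 = strongly agree
}

# Task-based item: the participant performs the task and the observed outcome is scored.
task_based_item = {
    "prompt": "Using the spreadsheet provided, calculate each student's final grade",
    "scoring": {"not_completed": 0, "completed_manually": 1, "completed_with_formula": 2},
}

def score_self_efficacy(rating):
    """A self-efficacy response is simply the rating itself: a perception, not a performance."""
    return rating

def score_task(outcome):
    """A task-based response is scored from what the participant actually did."""
    return task_based_item["scoring"].get(outcome, 0)

print(score_self_efficacy(5), score_task("completed_manually"))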

Other studies applied a combination of self-efficacy methodology and a more hands-on evaluation of the participants’ ICT-literacy. One such example is Wong’s (2002) study, in which she developed an IT preparedness assessment instrument that measured teachers’ preparedness in using ICT through three different measures: a self-efficacy instrument to evaluate their attitudes towards using ICT; a hands-on instrument to assess their ICT skills; and an exam-based ICT knowledge test consisting of 25 multiple choice questions on ICT and computers.

Nonetheless, the hands-on instrument lacks flexibility and does not encourage critical and analytical thinking, as it gives the teachers step-by-step instructions on what to perform, and the exam-based questions may not represent the teachers’ actual ICT knowledge
as the multiple choice questions may allow guessing.

This thesis proposes that a TBA instrument is necessary because it allows participants to complete a given task independently, without being told how it is done.

To cope with today’s technological demands, people need to acquire more than just the basic
ICT skills and knowledge. They need to know how to use their acquired knowledge and skills
by: thinking critically; applying knowledge to new situations; analysing information; generating
new ideas; communicating; collaborating; solving problems; and making decisions. These skills
can provide both flexibility and security. People who can learn new information are able to use
software programs and conceive new ways of doing things, and have much better prospects than
those who cannot (Partnership for the 21st Century Skills 2002).

The lack of ability to think critically and analytically, and also to make decisions, is apparent in
Malaysian school students. Earlier studies have shown that students’ critical and analytical
thinking abilities in Malaysia were between below satisfactory and fair (Zaharah 1995; Razali
1999). In a study of mathematics, Razali (1999) found that students performed excellently with
questions that required lower level thinking skills. Yet when comparing, contrasting, and
interpreting skills were involved, students performed less than satisfactorily. Zaharah (1995)
discovered that the content in Islamic studies textbooks for upper secondary students does not
encourage decision-making skills.

It is further noted that Malaysian teachers may also lack the ability to teach these skills or are
less prepared to teach by incorporating these skills in teaching and learning activities (Rajendran
2001; Rosnani 2002). Teachers may understand the importance of teaching critical thinking to
students, yet some appear to not have the necessary instructional strategies to teach it (Rosnani
2002). Rosnani found a correlation between perceptions of teaching critical thinking and teacher
practice. Experienced teachers, or those with more exposure to the theories and skills of critical
and creative thinking, respond more positively to change.

These issues corroborate this study’s need for an enhanced TBA instrument that
is flexible; not based on self-assessment; and includes test-items that test
cognitive skills. The tasks for the TBA instrument should focus on familiar/
normal, computer-based activities for a classroom environment that teachers
usually find in their schools.

2.3.5. Cognitive and non-cognitive proficiencies in ICT-literacy

Cognitive learning emerges as a common theme in existing ICT-literacy literature. Well-known and respected psychology researchers like Bruner, Gardner, and Piaget (Bruner 2006) champion
the fundamental importance of cognitive abilities (such as perception, thought, personality,
creativity, intuition, language, symbol, and motivation) in teaching and learning. Bruner
describes the current methods used for teaching and learning activities in most schools as a
"passive process and depriving our students from thinking" (Bruner 2006, p. 26).

Cognitive learning was first mentioned in 1956 by Benjamin Bloom who contributed to the
classification of educational objectives by organising them according to cognitive complexity
(Bloom 1956; Atherton 2005). Bloom led a group of colleagues in a study and later introduced a framework known as Bloom’s taxonomy, which identifies three learning objective domains (Bloom 1956; Atherton 2005): 1) the cognitive domain, which refers to knowledge structures; 2) the affective domain, which refers to attitude structures; and 3) the psychomotor domain, which refers to physical skill development. The committee explains each domain (except psychomotor) by listing its categories, starting with the simplest and moving to the most complex (Clark 2004). For the cognitive domain, Bloom’s taxonomy lists six major categories: knowledge; comprehension; application; analysis; synthesis; and evaluation (Krathwohl 2002).
Each of these categories was broken into subcategories as shown in Table 2.1.

Table 2.1. Structure of the original taxonomy of the cognitive domain

1.0 Knowledge
1.1 Knowledge of specifics
1.11 Knowledge of terminology
1.12 Knowledge of specific facts
1.2 Knowledge of ways and means of dealing with specifics
1.21 Knowledge of conventions
1.22 Knowledge of trends and sequences
1.23 Knowledge of classifications and categories
1.24 Knowledge of criteria
1.25 Knowledge of methodology
1.3 Knowledge of universals and abstractions in a field
1.31 Knowledge of principles and generalisations
1.32 Knowledge of theories and structures
2.0 Comprehension
2.1 Translation
2.2 Interpretation
2.3 Extrapolation
3.0 Application
4.0 Analysis
4.1 Analysis of elements
4.2 Analysis of relationships
4.3 Analysis of organisational principles
5.0 Synthesis
5.1 Production of a unique communication
5.2 Production of a plan, or proposed set of operations
5.3 Derivation of a set of abstract relations
6.0 Evaluation
6.1 Evaluation in terms of internal evidence
6.2 Judgements in terms of external criteria
Source: (Atherton 2005)

Later, Bloom’s taxonomy of the cognitive domain was revised by Anderson et al. (2001) and
Krathwohl (2002). In their revised taxonomy, instead of a one-dimensional framework, the
taxonomy was separated into two dimensions. This was due to the fact that the first category
(knowledge) embodied both noun and verb aspects (Krathwohl 2002). The noun became the basis for the knowledge dimension, while the verb became the basis for the cognitive process dimension. The subcategories under the knowledge category were grouped by similar function and re-named factual knowledge, conceptual knowledge and procedural knowledge. Another category was added, this being meta-cognitive knowledge
(Krathwohl 2002). This new category involves knowledge about cognition in general and also
knowledge about one’s own cognition. For the second dimension, that is, the cognitive process
dimension, the six original categories were retained. However, three categories were re-named
and two were interchanged (Krathwohl 2002). Also, the fact that the revised taxonomy could
represent any learning objectives in two dimensions suggested the possibility of a two-
dimensional table (Table 2.2). The table can be used to examine and align the curriculum and
also establish educational opportunities that have been missed. It can help teachers decide where
and how to improve their curriculum plan and delivery of instruction (Krathwohl 2002).

Table 2.2. Anderson and Krathwohl’s revised taxonomy table

                         Cognitive Process Dimensions
Knowledge Dimensions     1. Remember   2. Understand   3. Apply   4. Analyse   5. Evaluate   6. Create
Factual
Conceptual
Procedural
Meta-cognitive
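As a minimal sketch of how such a table might be used during instrument design, each draft test-item can be tagged with the taxonomy cell it targets so that uncovered cells become visible; the item descriptions and tags below are hypothetical illustrations in Python, not the final TBA test-items.

# Hypothetical tagging of draft test-items against the revised taxonomy
# (knowledge dimension x cognitive process dimension).
KNOWLEDGE = ("factual", "conceptual", "procedural", "meta-cognitive")
PROCESS = ("remember", "understand", "apply", "analyse", "evaluate", "create")

draft_items = {
    "Calculate student grades in a spreadsheet": ("procedural", "apply"),
    "Judge the reliability of a web source": ("conceptual", "evaluate"),
}

def taxonomy_gaps(items):
    """List the taxonomy cells not yet addressed by any draft test-item."""
    used = set(items.values())
    return [(k, p) for k in KNOWLEDGE for p in PROCESS if (k, p) not in used]

print(len(taxonomy_gaps(draft_items)), "of", len(KNOWLEDGE) * len(PROCESS), "cells uncovered")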

Based on a similar idea, Gagne (1985) also improved Bloom’s taxonomy of the cognitive
domain by dividing the cognitive domain into three parts, which view different aspects of
cognitive ability: intellectual skills (a learner’s ability to interact with environments using skills
such as discrimination, rule-using, problem-solving, or concrete concepts); cognitive skills
(internal process by which a learner controls ways of thinking and learning); and verbal
information (a learner’s ability to state or recall previously learned material) (Figure 2.2).

Gagne was greatly influenced by the theorists who preceded him. A major contribution of
Gagne has been his views regarding the varying categories of learning outcomes and their
relevance for instruction. He calls these categories the domains of learning and has identified
different principles for designing instruction for each domain. He refers to these principles as
the conditions of learning (Gagne 2000). The conditions of learning are important as:

1. they are needed to distinguish the parts of a content area that are subject to different
instructional treatments;
2. they are needed to relate the instructional procedures of one subject to those of
another, as similar parts of instructional procedures can be found among different
content areas; and
3. different domains of learning require different techniques of assessment of learning
outcomes. One cannot use a single way of measuring what has been learned.

[Figure 2.2 maps Bloom’s domains to Gagne’s learned capabilities: the cognitive domain maps to intellectual skills (problem-solving, rule-using, concrete concepts, discrimination), cognitive strategy, and verbal information (labels, facts and bodies of knowledge); the affective domain maps to attitude; and the psychomotor domain maps to motor skills.]

Figure 2.2. Bloom’s taxonomy and the Gagne five learned capabilities (Bloom 1956; Gagne 1985)

Gagne was also the first to suggest that aside from external learning, internal learning
conditions must be met for the acquisition of each learned capability. The internal learning
conditions are associated with previously learned capabilities of the learner, while external
learning conditions relate to the stimuli that are presented externally to the learner (Gagne
2000).

Based on previous ICT-literacy studies and on cognitive learning theories, this thesis included both Gagne’s and Krathwohl’s cognitive learning dimensions in the design process of the TBA instrument, to ensure that every aspect of cognitive learning was included.

Thus this thesis included the three cognitive domains of learning from Gagne (verbal
information, cognitive strategy and intellectual skills), and added the meta-cognitive dimension
as suggested by Krathwohl, to guide the researcher in the process of designing the TBA
instrument.

A few ICT-literacy studies already suggest a model or framework for specific disciplines or domains of specialty (see Williamson, Katz & Kirsch 2005; Katz & Macklin 2007; Markauskaite 2007; Ball & Levy 2008; Calvani, Cartelli, Fini & Ranieri 2008; Cartelli 2008); however, none offers a sound pedagogical direction from its findings.

For the assessment to remain relevant and different from other ICT-literacy
assessments, the assessment instrument proposed here will relate to teachers’
teaching and learning activities.

2.4. Part-3: Self-efficacy versus task-based assessment

Considering the discussion in section 2.3.3 of this chapter, it is clear that this study seeks a
different method of assessing ICT-literacy that is suitable and appropriate for a trainee teacher
environment, particularly for Malaysia. The following sub-sections examine the differences
between task-based assessments versus self-efficacy assessment.

2.4.1. Self-efficacy

Beginning with his seminal 1977 paper on self-efficacy, Bandura argues that:

Expectations of personal efficacy determine whether coping behaviour will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences (Bandura 1977, p. 191).

He suggested that human functioning is predicted through inner self-belief; unless people
believe they can accomplish an expected outcome, they have little motivation to pursue or
complete a task. Even if a person understands the actions necessary to achieve an outcome,
entertaining doubts about performing the actions precludes influences on behaviour (Bandura
1977). Bandura proposed that the most significant determinant of whether people would engage
in any feared behaviour is the extent to which they perceive themselves as competent to carry
out a particular task (in Bandura 1982). Self-efficacy therefore influences human functioning in
many ways including: the choices people make; their thought patterns; emotional reactions;
effort; perseverance; and resilience (Pajares 2002).

Self-efficacy or efficacy expectations develop from four sources: mastery experience; vicarious
experience; social persuasion; and somatic and emotional states (Bandura 1977, 1994; Pajares
2002). Self-beliefs are based on actions and past experiences, observations and comparisons
with others who have similar qualities, influences from effective persuaders, and emotional
reactions to capabilities. Researchers established that self-efficacy is the best predictor of
behavioural outcomes compared to other motivational constructs (Pajares 2002).

By contrast, Eastman and Marzillier (1984) argue that the theoretical construct of Bandura’s
self-efficacy is ambiguous and ill defined; and it contains a number of methodological

deficiencies. They contested that efficacy expectations and outcome expectations are not distinct
as claimed by Bandura.

It is impossible to exclude considerations of outcome from any assessment of personal self-efficacy. Predicted outcomes influence self-efficacy. If the expected outcome is negative (e.g.
something feared), efficacy expectations are expected to be lower despite mastery experience or
social persuasion. Eastman and Marzillier argue that in Bandura’s snake phobic experiment,
outcome expectations of being bitten by the snake surely affect participant efficacy
expectations. As a snake phobic, the participant not only thinks about their ability to "hold a
reptile without any risk of being bitten by gripping it firmly behind the head" (in Eastman &
Marzillier 1984, p. 218); they also have the snake to consider.

Methodologically, Eastman and Marzillier (1984) question the scale used by Bandura. The
instrument uses a 100-point probability scale representing the probability that participants
believe they are able to perform a specific task. Yet the scale does not start at zero and the
lowest scale is 10 points, which corresponds to a judgement of quite uncertain. The scale is also
imbalanced, with the intermediate judgement placed at 50 points. There is insufficient support
for the claim that there is a relationship between predicting how participants will behave on a
specific task with actual performance (Eastman & Marzillier 1984).

2.4.2. Task-based assessment

Task-based assessments require a person to perform an activity that simulates the behaviour expected outside the test (Robinson & Ross 1996). The idea is to gather a
demonstration of the scope of knowledge that a subject has acquired rather than simply testing
the accuracy of responses on a selection of questions.

As such, task-based assessments can be divided into performance-referenced task-based tests and system-referenced task-based tests.

In a strictly performance-referenced, task-based test, measurement of success or failure is based on the ability to perform a given task. Performance-referenced, task-based tests have been used
in the healthcare profession for many decades. The four common methods used here are: written
clinical simulations (patient management problems); computer-based clinical simulations; oral
examinations; and standardised patients (live simulations) (Swanson, Norman & Linn 1995).
There is no fixed definition for this type of testing, yet a common theme is to emphasise testing
complexity and higher order knowledge and skills in real-world contexts, accompanied by open-
ended tasks that require significant time to complete (Swanson, Norman & Linn 1995).

Task-based tests can also be system-referenced if the task is used to obtain samples of
participants’ linguistic knowledge or generalised verbal ability (Robinson & Ross 1996). Since
the late 1980s a number of researchers have proposed changes in language teaching that include
task-based instruction (Prabhu 1987; Long & Crookes 1992; Skehan 1996). It was not
anticipated that task-based instruction would lead to higher language competency. The
expectation was that reasoning activity in task-based instruction supports continuous
engagement; this engagement is a favourable condition in developing grammatical competencies
(Prabhu 1987).

Task-based tests allow the participant to demonstrate acquired knowledge capacity and an
ability to correlate tasks with the theories or concepts learned previously. Instead of judging
knowledge acquisition through a series of multiple choice selections or self-evaluation, task-
based assessment forces participants to place knowledge into a context that can be understood
and explained (Teachnology Inc 2011). Despite these advantages, task-based testing is difficult
to implement in large settings in comparison to standard multiple question formats of self-
assessment surveys. Larger populations make the timing and cost of task-based testing difficult,
though the overall benefit to students often outweighs those concerns (Teachnology Inc 2011).

The next section explores the theory that underpins the task-based ICT assessment instrument.
The concept of item response theory (IRT) and the justification for its use in this study is
discussed.

2.5. The Development Theory: Item Response Theory (IRT)

The IRT has been the focus of intense research and development activity in educational,
psychological measurement and health-related research during the past decade (Jones &
Hambleton 1992; Nunnally & Bernstein 1994; Masters & Keeves 1999; Chen, Lee & Chen
2005; Betz & Turner 2011). IRT is based on the notion that the probability of a person answering a test-item correctly depends on the ability being measured by the assessment instrument.
As such, it may be assumed that a person with higher intelligence may be more likely to provide
a correct response to a given test-item on an intelligence test. The relationship between these
test-items and the person’s ability is assumed to be direct, and the test-items are assumed to be
conditionally independent. In other words, responses to the test-items depend entirely on the
participant’s ability, while any covariance among the test-items is due to their common
dependence on the assumed ability (Cyr & Davies 2005). Additionally, the main purpose of IRT
is to provide an evaluation that identifies how well an assessment tool works, and how well an
individual test-item works. The most common application of IRT is in the education setting for:

developing and refining exams; maintaining banks of test-items for exams; and comparisons
between exam results over time (Mason, Moulton, Russell & Wilmot 2009; Obinne 2011).

IRT exists in two formats (Figure 2.3): 1) dichotomous and 2) polytomous. IRT consists of a
number of mathematical models. These models are based on different sets of assumptions and,
therefore, are likely to fit the observed data somewhat differently. Selection of one IRT model
over another, therefore, should depend in part on the ‘goodness-of-fit’.

[Figure 2.3 shows item analysis approaches branching into classical test theory (CTT) and item response theory (IRT); IRT divides into a dichotomous format (1-parameter logistic/Rasch model, 2-parameter logistic, 3-parameter logistic) and a polytomous format (nominal model, partial credit model/Rasch, rating scale model/Rasch).]

Figure 2.3. Item analysis theories

One of the most commonly utilised models of IRT is the Rasch model. When the Rasch IRT
model is employed, the objective is to obtain data that ‘fits’ the model. The rationale for this
perspective is that the Rasch IRT model embodies requirements that must be met in order to
obtain measurement. Misfitting test-items for the data need to be discarded or adjusted. The
theoretical underpinning of the Rasch IRT model is that a test analysis is only worthwhile if it is individual-centred, with separate parameters for the test-items and participants. This requirement creates a transition from ‘population-based’ classical test theory
(CTT), with its emphasis on standardisation and randomisation, to IRT with its probabilistic
modelling of the interaction between an individual test-item and an individual participant’s
performance (Van der Linden & Hambleton 1997).

The Rasch IRT model (see Figure 2.4) is based on the 1-parameter logistic (1PL), while the
partial credit model and rating scale model are an extension of the Rasch IRT model’s

dichotomous format. In the Rasch dichotomous model, if a given test-item is successfully completed, the person will score one on the test-item. If it is not completed, then the score is
zero. No credits are given to almost correct or partially completed test-items. Central to the idea
of the Rasch IRT model is the probability principle.

A person’s response to a particular test-item is never certain. It is always influenced by human error. Thus a probabilistic approach to cognitive assessment must be employed. In the Rasch
IRT model, probabilities are introduced through consideration of the odds that a person would
give a correct response to a test-item. The Rasch dichotomous model equation can be written as:

Øni1 = exp(βn – δi1) / (1 + exp(βn – δi1))

where Øni1 is person n’s probability of scoring one rather than zero on item i, βn is the ability of person n, and δi1 is the difficulty of the one step in item i. This relationship is illustrated in
Figure 2.4, and is also known as the item characteristic curve (ICC). During a test-item
calibration test, both the test-item's and the person’s performance must conform to the ICC.
Non-conforming test-items or a person’s performance will be rejected or re-evaluated.
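To illustrate the behaviour of this equation numerically, the short sketch below (in Python, with arbitrary illustrative values for ability and difficulty) computes the probability of a correct response under the dichotomous Rasch model; when the ability equals the item difficulty the probability is 0.5, the point marked on the item characteristic curve.

import math

def rasch_probability(beta, delta):
    """Probability of scoring one rather than zero on an item under the dichotomous Rasch model."""
    return math.exp(beta - delta) / (1 + math.exp(beta - delta))

# Arbitrary illustrative values: the probability is 0.5 when ability equals difficulty,
# and rises as ability exceeds difficulty (the item characteristic curve in Figure 2.4).
for beta in (-2.0, 0.0, 1.0, 3.0):
    print(f"beta = {beta:+.1f}, delta = +1.0 -> P(correct) = {rasch_probability(beta, 1.0):.2f}")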

[Figure 2.4 plots the probability Øni1 (vertical axis, from 0.0 to 1.0) against person ability β (horizontal axis); the curve passes through a probability of 0.5 where β equals the item difficulty δi1.]

Figure 2.4. Item characteristic curve (ICC)

This basic dichotomous Rasch IRT model, which involves the parameter for a person’s ability
(β) and item difficulty (δ), can also be extended to include partial credit scales (τ). A partial
credit model (PCM) is used in a situation when a person’s attempt at completing a test-item can
be grouped into several ordered responses. PCM represents a person’s ability as "… a location
on a continuum of increasing competence" (Masters 1999, p. 101). The ordered responses can be
defined in many different ways. The most common methods are by: 1) levels of partial
understanding; and 2) multistep problems.

With levels of partial understanding, the ordered categories reflect an examinee’s degree of understanding of the test-item. A set of categories for the test-item is built upon the responses given by the examinee
(Masters 1999). In this thesis, a few of the categories for the test-item were developed based on
this method. For example in Task-2, the participants were required to calculate a list of students’
grades. The list of students’ marks was created in a spreadsheet file. Though the participants
were allowed to use any calculating method that they felt comfortable to use, it was initially
anticipated that the participants would either use the basic spreadsheet formula or the advanced
spreadsheet formula in order to calculate the grade. However, after the pilot test, it was
discovered that none of the participants used the advanced spreadsheet formula, whilst some
participants used a calculator to manually count each grade. Hence the categories for this test-
item were changed to include this alternative calculating method. The scoring value of 1 was
given for the use of a calculator and the scoring value of 2 was given for using a spreadsheet
formula.

Multistep problems present a complex problem that requires the completion of a number of steps (Masters 1999). Credit is given for the number of task-related steps that the examinee manages to complete. In the instrument developed for this thesis, a number of test-items were based on this method, in addition to those based on levels of partial understanding.

For example, in Task-4 (a) of this study the participants were asked to register with an online discussion forum and post a reply to a pre-identified thread. Originally, two steps were identified for this task and credit was given for each step achieved: 1) register a new account; and 2) post a reply. However, a number of participants posted a reply to the wrong thread. Considering the response, it was not a completely wrong answer: the participant did post a reply, but to the wrong thread. Thus a third step was added to the task: 3) reply to the correct thread. A scoring value of 1 was given for registering a new discussion forum account, and the scoring value increased with each further step correctly completed.
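
As a rough illustration of how such ordered, multistep credit could be recorded, the following Python sketch encodes the three steps described above; the function name and boolean step flags are illustrative assumptions rather than the scoring routine actually used in this study:

    def score_forum_task(registered, posted_reply, correct_thread):
        # One point for each successive step achieved; a later step only counts
        # if the earlier steps were also completed
        score = 0
        for step_completed in (registered, posted_reply, correct_thread):
            if not step_completed:
                break
            score += 1
        return score

    print(score_forum_task(True, True, False))   # 2: replied, but to the wrong thread
    print(score_forum_task(True, True, True))    # 3: full credit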

2.5.1. Why item response theory (IRT)?

The name IRT reflects its focus on the behaviour of individual test-items, as opposed to the whole-test focus of classical test theory: participants' responses to each test-item are modelled as a function of their ability. The term item is used because many test questions are not actually questions.
They might be multiple choice questions (MCQ) that have both incorrect and correct responses;
they can also be common statements on questionnaires that allow respondents to indicate a level


of agreement (a rating or Likert scale); or they can be patient symptoms scored as present/
absent (dichotomous value) (Fan 1998).

When developing a test instrument there are basically two issues that test developers may be
concerned about: 1) the quality of the test instrument; and 2) how examinees will respond to it.
In order to determine the validity and reliability of a test instrument, two approaches (classical test theory and item response theory) are often used to analyse the test data. Both theories provide measures of validity and reliability, and both are able to predict performance outcomes of tests by identifying parameters of test-item difficulty and examinee ability. There are no critical problems with CTT as such; however, it has a few shortcomings that motivate the use of an alternative theory to analyse the data for this thesis.

CTT has been used for educational and psychological measurement for a long time. CTT introduces three basic measurement concepts: 1) test score or observed score; 2) true score; and 3) error score. CTT analysis suggests that the observed test score (X) is composed of a true score (T) and an error score (E): X = T + E, where the true score and the error score are independent. CTT assumes that each individual has a true score that would be obtained if there were no errors in measurement; the difference between the true score and the observed score results from measurement error. Error is often assumed to be a random variable having a normal distribution. In theory, the standard deviation of the distribution of random errors for each individual indicates the degree of measurement error, and it is usually assumed that this distribution is the same for all individuals. CTT uses the standard deviation of errors as the basic measure of error, usually called the standard error of measurement. The larger the standard error of measurement, the less certain is the accuracy with which an attribute is measured (Magno 2009). Conversely, a small standard error of measurement indicates that an individual's score is probably close to the true score.
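
A minimal Python sketch of these CTT quantities is given below; the score values and reliability figure are invented for illustration, and the formula used is the conventional estimate SEM = SD x sqrt(1 - reliability):

    import statistics

    def standard_error_of_measurement(observed_scores, reliability):
        # Conventional CTT estimate: SEM = SD(observed scores) * sqrt(1 - reliability)
        return statistics.stdev(observed_scores) * (1 - reliability) ** 0.5

    scores = [12, 15, 18, 20, 22, 25, 27, 30]   # illustrative observed scores (X)
    print(round(standard_error_of_measurement(scores, reliability=0.85), 2))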

CTT is sample dependent and utilises traditional sample-dependent statistics. These include: test-item difficulty (p-value) and test-item discrimination estimates; distractor analyses; test-item intercorrelations; etc. Test-item difficulty is based on the frequency of correct responses: an item appears more difficult when the examinee sample is of low or average knowledge, and less difficult when the sample is of above-average knowledge. In terms of discrimination estimates, higher values tend to be obtained from varied examinee samples, and lower values are associated with homogeneous


samples. Such sample dependency relationships reduce the overall utility of these statistics
(Hambleton & Jones 1993; Magno 2009).

Meanwhile, IRT is not dependent on the sample used to generate the parameters; the parameters are assumed to be invariant across different groups within a research population and across populations (Hambleton & Murphy 1991; Swaminathan 1999). One of the features of IRT is that test-item attributes and examinee attributes are directly comparable: the values from these two measures are expressed on a common scale, the 'logit'. The logit is the logarithm of the odds of a correct response, transforming those odds onto a linear scale (Andrich 1999; Wright 1999):

logit = log [probability/(1 – probability)]

This transformation frees IRT from dependency on the examinees' ability or knowledge (Wright 1999). This means that the difficulties of test-items can be compared even when examinees come from different levels of ability and knowledge acquisition. Trait ability or proficiency level parameters are independent of the set of test-items administered to the examinees, and the trait estimates (and their standard errors of measurement) can be determined at each trait level (Swaminathan 1999). IRT assumes that it is possible to describe mathematically the relationship between a person's trait ability and performance on a test-item, based on the probability principle (Stocking 1999). This mathematical description is illustrated as the ICC. The ICC allows both trait ability and performance on a test-item to be represented on the same logit scale. It can predict, for example, how a low achiever, an average person and a high achiever would perform on a certain test-item. If a person's ability is known, their performance on a test-item can be predicted without administering the test-item to that person (Wu & Adams 2007).

The ICC also provides a way of measuring the quality of test-items, by confirming their suitability for the examinees and how well they measure the examinees' ability. A test-item's location and discrimination in the ICC describe, respectively, the ability (β) a person needs in order to pass the test-item and how strongly the person's ability (β) is related to success on the test-item. For test-item location, the higher the item's location, the higher the person's ability (β) must be to have a given probability of passing the test-item. For test-item discrimination, test-items with high discrimination estimates are better at differentiating between examinees around the location point: small changes in the person's ability (β) lead to large changes in the probability of success (Figure 2.5).


[Figure: an item characteristic curve plotting probability (from 0.0 to 1.0) against person's ability (β); the item's location is marked where the probability equals 0.5, and the discrimination corresponds to the steepness of the curve at that point.]
Figure 2.5. Item location and discrimination estimates in the ICC
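
To illustrate the location and discrimination idea, the Python sketch below uses the two-parameter logistic form, in which a slope parameter a is added to the Rasch expression (the Rasch model itself fixes a = 1); the numbers are illustrative only, not estimates from this study:

    import math

    def icc(beta, delta, a=1.0):
        # Item characteristic curve with discrimination a; a = 1 gives the Rasch (1PL) curve
        return 1 / (1 + math.exp(-a * (beta - delta)))

    # Near the item location (delta = 0), a steeper (more discriminating) curve separates
    # examinees of ability -0.2 and +0.2 more sharply
    for a in (0.5, 1.0, 2.0):
        print(a, round(icc(0.2, 0.0, a), 3), round(icc(-0.2, 0.0, a), 3))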

2.5.2. Issues with classical test theory (CTT)

CTT is the most widely used form of test-item analysis. CTT is typically performed on the test as a whole rather than on each test-item; therefore, the outcome of the analysis applies only to that group of participants and that collection of items. Reliability is seen as a characteristic of the test-items and of the variance of the trait they measure. Test-items are treated as random replicates of each other, and their characteristics are expressed as correlations with the total test score or as factor loadings on the supposed underlying variables of interest; the properties of individual test-items are not analysed in detail (Revelle 2011).

Some psychometricians, such as Frederic Lord, proposed that measurement practices would be enhanced if test-item and test statistics could be made sample independent too; for example, biserial correlations were preferred over point-biserial correlations for estimating test-item discrimination because the former are more invariant across participant samples. Basically, CTT item statistics such as test-item difficulty and test-item discrimination, and test statistics such as test reliability, depend on the participant sample from which they are obtained. This is not necessarily a problem, and thousands of excellent tests have been constructed in this way, although special emphasis must be placed on obtaining suitable participant samples for estimating test-item and test statistics and on producing statistically parallel tests (Hambleton & Jones 1993). Parallel tests are defined as tests that measure the same content, for which participants have the same true score, and for which the sizes of the errors of measurement across forms are equal.


CTT is also most useful when the participant samples are similar to the participant population
for whom the test is being developed. The utility of the test-item statistics may decline in the
case where the sample differs in some unknown way from the population, and this could easily
happen in a field test. Embretson and Reise (2000), in their book on IRT for psychologists, discuss the old and new rules for developing a test measurement (Table 2.3). The new rules were derived from IRT, and they suggest that many of the old rules must in fact be revised, generalised or abandoned altogether.

Table 2.3. Rules of measurement

Rule 1. Old rule: The standard error of measurement applies to all scores in a particular population. New rule: The standard error of measurement differs across scores (or response patterns), but generalises across populations.
Rule 2. Old rule: Longer tests are more reliable than shorter tests. New rule: Shorter tests can be more reliable than longer tests.
Rule 3. Old rule: Comparing test scores across multiple forms is optimal when the forms are parallel. New rule: Comparing test scores across multiple forms is optimal when test difficulty levels vary between persons.
Rule 4. Old rule: Unbiased estimates of item properties depend on having representative samples. New rule: Unbiased estimates of item properties may be obtained from unrepresentative samples.
Rule 5. Old rule: Test scores obtain meaning by comparing their position in a norm group. New rule: Test scores have meaning when they are compared for distance from items.
Rule 6. Old rule: Interval scale properties are achieved by obtaining normal score distributions. New rule: Interval scale properties are achieved by applying justifiable measurement models.
Rule 7. Old rule: Mixed item formats lead to unbalanced impact on test total scores. New rule: Mixed item formats can yield optimal test scores.
Rule 8. Old rule: Change scores cannot be meaningfully compared when initial score levels differ. New rule: Change scores can be meaningfully compared when initial score levels differ.
Rule 9. Old rule: Factor analysis on binary items produces artefacts rather than factors. New rule: Factor analysis on raw item data yields a full information factor analysis.
Rule 10. Old rule: Item stimulus features are unimportant compared to psychometric properties. New rule: Item stimulus features can be directly related to psychometric properties.
Source: (Embretson & Reise 2000)

2.6. The Conceptual Framework

Thesis sections 2.2, 2.3 and 2.4 (Part-1, Part-2 and Part-3) of this chapter discussed the past and
current states of ICT-literacy literature. These sections also assist in outlining the conceptual
research framework of this study (Figure 2.6).


[Figure: four stacked layers, from the bottom up. Part-1: Existing research + standards for ICT-literacy. Part-2: ICT-literacy assessment tool + Malaysian Smart School Standard. Part-3: Task-based assessment (TBA) tool, covering skills and knowledge. Part-4: Trainee teachers' ICT-literacy assessment (final instrument testing process).]
Figure 2.6. Conceptual research framework

The foundation for this new TBA ICT-literacy instrument derives from:
1. findings from existing research in ICT-literacy;
2. ICT-literacy standards from other countries and organisations;
3. current ICT-literacy assessment tools; and
4. the MSS standards.

Drawing on Bloom's taxonomy of the cognitive domain, Krathwohl and Anderson et al.'s revised taxonomy of the cognitive domain, and Gagne's conditions of learning, the tasks in the new TBA tool test the participants' ICT skills and knowledge performance. Instead of telling the researcher whether they have the appropriate ICT skills and knowledge, the participants have to demonstrate the ICT skills and knowledge that they have acquired, through a series of ICT-
based tasks. This new instrument identifies areas in which a trainee teacher’s knowledge of ICT
is weak; using the results of this test, a university can further enhance the individual trainee
teacher’s level of ICT-literacy.


2.7. Chapter-2 Summary

This chapter described the thesis topic. Overviews and justifications for each area of study were
presented. The components in the higher education ICT proficiency model were chosen as the
main structure upon which the TBA instrument development is based. Justifications for the
approach to the TBA instrument development and the selected theory for instrument
development and testing were also explained. The next chapter will further discuss each of the
three parts (see Figure 2.6) of the conceptual framework in more detail.

Chapter 3
Review of the Literature
Phase-1: Preliminary review process

3.1. Overview

The previous chapter discussed and justified the research design and revealed unexplored
avenues of research in the ICT literature. Having outlined the conceptual framework for this
thesis, this chapter reviews the connection between ICT-literacy and the building of a
knowledge society in Malaysia. It is also important to understand how schools have become the
new learning ground for this knowledge society.

This chapter further elaborates on learning theories, specifically that of cognitivism. Findings
from other studies on ICT-literacy assessment; standards developed for ICT-literacy; and ICT-
literacy in Malaysia are explored. These findings form the ICT-literacy indicators identified for
this study. The third part of this chapter examines the concept of task-based assessments and the
justifications for using them in this study.

This chapter is divided into the following sections:


• Part-1: Existing research and standards for ICT-literacy;
• Part-2: ICT-literacy assessment and the MSS;
• Part-3: Self-efficacy versus task-based assessment; and
• Chapter-3 Summary.


3.2. Part-1: Existing Research and Standards for ICT-literacy

[Figure: the layered conceptual research framework of Figure 2.6, with Part-1 (existing research + standards for ICT-literacy) shown as the foundation layer beneath Part-2, Part-3 and Part-4.]
Figure 3.1. Part-1 of the conceptual research framework

This section of the chapter concentrates on Part-1 of the conceptual research framework (Figure 3.1).

ICT-literacy has been identified as having two elements: 1) technical and cognitive ability for
using ICT tools effectively; and 2) the capability of using those tools to function in a knowledge
society effectively (International ICT Literacy Panel 2002; Markauskaite 2007). A knowledge
society is formed when a considerable effort is invested in producing and sharing new
knowledge in a society (Anderson 2008). In our digital age young people in particular need to
be highly literate in ICT for life-long learning. Being ICT literate means being able to choose
responsibly and use ICT ethically to support critical and creative thinking about information and
communication as citizens of a knowledge society (Bradley 2006). For the purpose of this
research, ICT includes: computers and their peripherals (printers, scanners, fax machines, etc.);
mobile devices (phones, iPads); computer software; online learning systems; multimedia
applications; and the Internet. The continual and rapid advances in ICT tools development have
fundamentally changed the way we communicate, interact and learn (Northern Territory
Government 2009).

The rationale for being ICT literate in such a technological knowledge society and the correlation between ICT-literacy and schools are discussed in the next two sub-sections, 3.2.1 and


3.2.2. Sub-section 3.2.3 explores the standards available for ICT-literacy, while sub-
section 3.2.4 examines the cognitive learning theory.

3.2.1. ICT-literacy and the knowledge society

The world economy is changing. Instead of wealth being marked by ownership of land or financial capital, the global economy is now characterised by one's knowledge and one's ability to access and use relevant information. The knowledge society refers primarily to an economic system in which ideas or knowledge function as commodities. Drucker (1999, p. 135) describes this context as one of the 21st-century challenges that we must anticipate:

The most valuable assets of a 20th-century company were its production equipment. The most valuable asset of a 21st-century institution, whether business or non-business, will be its knowledge workers and their productivity.

Knowledge has become a key national resource and knowledge workers are the dominant group in the workforce. Drucker (2001) proposed three main characteristics of a knowledge society:
1. borderlessness: because information travels effortlessly;
2. upward mobility: available to everyone through easily acquired formal and
informal education; and
3. potential for failure as well as success: anyone can acquire the means of
production, but not everyone can win.

ICT plays an important role as an enabler, allowing information to travel easily and swiftly. This borderlessness means that digitally stored information is no longer confined within four walls. Knowledge can now be easily acquired as increasing numbers of people access digital information, which helps them apply that knowledge and function well in society (Drucker 1994). However, easy access to information also means that the potential for failure can be as high as the potential for success. Since the same information and skills can be accessed by everyone, the difference lies in each individual's ability to develop deep cognitive learning, creativity and ingenuity.

More frequently, the recognition of work is based on an individual’s performance and ability to
effectively use information to solve important problems within a globally competitive economy
(Leu, Kinzer, Coiro & Cammack 2004). This social context prompts many of the changes to the
perceptions that are held about ICT-literacy (Bradley 2006).


Consequently, making effective use of the Internet is a necessary skill development component
for the literacy curriculum. Traditional definitions of literacy are no longer sufficient if we are to
provide the younger generations with the knowledge rich future they deserve. In this so-called
‘information age’, it is essential to prepare the younger generations for enhanced levels of
digital information literacy that includes ICT-literacy, because this skill development and
knowledge acquisition are central to the usability of the digital information received and the
application of one’s newly acquired knowledge.

In these 'information age' organisations, workload is no longer distributed in a top-down approach (through line managers to their workers); instead, workload is spread horizontally. As a result, each organisational unit works in teams. Furthermore, teams at lower organisational levels are empowered to make (collaborative) decisions related to their functioning. These work teams are expected to be able to: identify problems; locate useful (digital) information in order to solve those problems; critically evaluate the information found; synthesise the gathered information to solve problems; and communicate the solution (Leu, Kinzer, Coiro & Cammack 2004).

It is proposed here that schools' curricula need to be updated to reflect the same types of demands as the 'information age' workload. Schools therefore need to prepare their students, ensuring that
they acquire the appropriate problem recognition and solution-finding skills and an ability to
work in a team. The students must also know how, when and where to locate useful information
from the Internet and online databases. They have to acquire effective browser/search engine
strategy skills. Students must learn how to discriminate between accurate/non-accurate and
biased/non-biased information. This type of critical perception skill is important because
‘anything and everything’ can now be published online. Having evaluated the screen-based
information, the students must also be able to synthesise the informational concepts in order to
find solutions to solve problems. Schools need to pay extra attention to ‘information synthesis’
skill acquisition in their curricula (Leu, Kinzer, Coiro & Cammack 2004). Information synthesis
is the process of discovering and integrating separate pieces of digital information to solve a
problem. Developing this type of procedural skill requires lots of practice (Goldschmidt 1986).
When all the pieces of the problem fit together, students will need to acquire effective
collaboration and communication skills. Since students are expected to work in teams, they need
to develop effective communication skills that support them in keeping others informed of any
changes or group-related findings. Moreover, it is proposed here that students should develop the
capability to effectively use computer-supported collaborative tools (Kotlarsky & Oshri 2005).


The European Union (EU) has proposed a framework of eight key competencies for a
knowledge-based society. Consequently, these competencies are deemed to be important in
order for individuals to live in a knowledge society successfully. These key competencies
include:
1. to communicate in the mother tongue;
2. to communicate in foreign languages;
3. mathematical competence and basic competencies in science and technology;
4. digital competence;
5. learning to learn;
6. social and civic competencies;
7. sense of initiative and entrepreneurship; and
8. cultural awareness and expression.

Competency in the basic knowledge and skill development that includes: language; literacy;
numeracy; and ICT tools, is deemed to be an essential foundation skill required for successful
learning, and learning to learn. Social and civic competencies cover all forms of behaviour that
equip individuals to effectively participate in the knowledge society and resolve potential
conflicts. Having this type of social and political awareness enables them to participate in civic
life (Bradley 2006). The sense of initiative and entrepreneurship refers to an individual’s ability
to turn ideas into action, including: creativity; innovation and risk-taking; as well as the ability to
plan and manage projects in order to achieve specified objectives. Apart from this, appreciation
of the creative expression of ideas and emotions is also an essential acquired skill, in order to
create a well-balanced individual in a knowledge society (European Commission 2007).

Each of the key competencies is described in terms of the knowledge, skills and attitudes appropriate to it; one of these key competencies, digital competence, is described in Table 3.1 below.


Table 3.1. Digital competence in a knowledge society

Source: (European Commission 2007)

Many other organisations have written about similar competency-based frameworks and
position papers that define and promote reforms that enable the education sector to create what
are known as the 21st-century skills (Anderson 2008), including: the Partnership for 21st
Century Skills, North Central Regional Educational Laboratory (NCREL) (Lemke 2002);
Edutopia, which is based on cooperative-based learning (Pearlman 2006); and the Australian
Department of Education, Science and Training (DEST) (2000). The key themes of the 21st
Century Skills’ report are summarised in Table 3.2 below.


Table 3.2. Key themes in the 21st Century Skills’ report

Theme                            Partnership for 21st Century Skills   Edutopia   NCREL   Australian DEST
Communication                    *                                     *          *       *
Creativity                       *                                     *          *       -
Collaboration                    *                                     *          *       *
Critical thinking                *                                     *          *       *
ICT-literacy                     *                                     *          *       *
Information and media literacy   *                                     -          *       -
High productivity                *                                     -          *       -
Life-long learning               *                                     -          -       *
Life skills                      *                                     *          *       *

Each report emphasises different themes. The Partnership for 21st Century Skills stresses
critical thinking and life skills, while the Edutopia report emphasises collaboration, the NCREL
report puts heavy weight on high student productivity, and the Australian DEST report
emphasises life skills, which it calls enterprise skills. In general, these reports reveal
considerable consensus and consistency. All four reports agree that: communication;
collaboration; critical thinking; ICT-literacy; and life skills are the important skills for 21st-
century citizens.

Based on these skill development frameworks, it may be concluded that the expected digital
skills for the knowledge society involve:
• understanding the main computer applications;
• ability to search, collect and evaluate electronic information;
• ability to use appropriate aids to produce, present or understand complex information;
• ability to access and search a website, and use internet-based services;
• ability to use ICT in critical thinking, creativity and innovation in different contexts;
• communication;
• collaboration;
• information and media literacy;
• high productivity;
• life-long learning; and
• life skills.

This thesis incorporates these themes into the TBA ICT-literacy instrument.


3.2.2. ICT-literacy and schools

Governments around the world are aware of the increasing importance of ICT tools and the
power they create for enhancing economic competition. As such, public policies have been
revised to include ICT-literacy achievement in order to better prepare their citizens for the
challenges that lie ahead. Similarly, new ICT initiatives have been introduced to schools in an
effort to prepare the younger generations for the future (Leu, Kinzer, Coiro & Cammack 2004).

The Australian federal government committed over AU$2.4 billion to support the effective
integration of ICT tools in Australian schools in line with its broader education initiatives
(DEEWR 2011). The ‘National Secondary School Computer Fund’ had been introduced to
assist schools and their educational system by providing new computers and other ICT
equipment (such as scanners, printers, etc.) for students in Years 9 to 12. Education authorities
across the country have installed more than 911,000 computers, exceeding the original target of
786,000 computers by the beginning of the 2012 school year (DEEWR 2011). Through this
taxation-based funding arrangement, the Australian federal government is providing funding of
AU$1000 per computer and up to AU$1500 for the installation and maintenance of such
devices (DEEWR 2011). At the same time, teachers in Australia were provided ample and
ongoing support in order to ensure the Australian Digital Education Revolution (DER) is a
success. Moreover, in recognition that teachers are vital to successful student learning, in 2010
the Australian Federal Minister for School Education, Early Childhood and Youth announced
that four projects worth more than $16 million would receive Australian federal government
funding under the ICT Innovation Fund (ICTIF). The ICTIF supports the implementation of the
Australian DER and the professional development of teachers for their use of ICT tools.

The USA has a long history of state and local control over educational policies. Prior to 2002
most of the public policy initiatives for raising literacy achievement took place at the state level.
Many States established standards or benchmarks, which were usually in conjunction with
new state-wide assessment instruments. Many States also initiated policies to infuse more
ICT-based activity in the classroom (Leu, Kinzer, Coiro & Cammack 2004). Meanwhile, at
the Federal level, other important initiatives have focused on literacy issues (which
include reading, mathematical skills and ICT-literacy). These initiatives produced legislation and actions such as the Reading Excellence Act, the appointment of a National Reading Panel, and the development of standards for the English language arts. Each of these initiatives was designed to improve reading achievement, and each was marked by substantial controversy. The controversy continued with the passing of the No Child Left Behind (NCLB) Act in 2002.


The NCLB Act endorses an extensive list of public policy initiatives, many of which are also
designed to increase student achievement in reading. These provisions include several
requirements: that all students are proficient in reading and mathematics within 12 years; that
assessment in both reading and mathematics be conducted annually for all students in grades
three to eight and be conducted at least once in grades 10 to 12; that reading programs be funded
only if they are based on scientifically based reading research; and that all teachers be highly
qualified, with state certification (Leu, Kinzer, Coiro & Cammack 2004). It expanded the
Federal role in education and took particular aim at improving educational achievement of
disadvantaged students. These measures were designed to support student achievement and to
hold States and schools more accountable for student progress. They represented significant
changes to the education landscape (No Child Left Behind 2004).

Similar to other nations, this major American policy initiative in reading also contains a
technology component. Title II, Section D, of the NCLB Act is devoted to technology. The
primary goal of this section is to improve student academic achievement through the use of
technology in elementary and secondary schools (No Child Left Behind 2004; ED.gov 2004).
This section was divided into two subparts:
1. to assist every student in crossing the digital divide by ensuring that every
student is technologically literate by the time the student finishes the eighth
grade, regardless of the student’s race, ethnicity, gender, family income,
geographic location, or disability; and
2. to encourage the effective integration of technology resources and systems with
teacher training and curriculum development to establish research-based
instructional methods that can be widely implemented as best practices by State
educational agencies and local educational agencies.

In order to promote the goals of this NCLB Act section (Title II, Section D), the American
Federal Government provided US$1 billion for the fiscal year 2002 and such sums as may be
necessary for the five succeeding fiscal years, most of which will provide for State and local
technology grants. This fund was to be allocated so that not less than 98% is made available to
carry out subpart 1, and not more than 2% is made available to carry out subpart 2 (ED.gov
2004).

Government initiatives and grants aside, studies on ICT-literacy have been conducted from
several different angles, with each focusing on different dimensions and definitions. McNaught
(2006, p. 33) describes ICT-literacy as the "ability to access, evaluate, manage and


communicate information". In her paper, Professor McNaught emphasised the need for
educational institutions to produce graduates with fundamental capabilities, who can effectively
function in a currently complex and ever-changing world. To make this possible, she suggested
ICT-literacy as the key to producing a well-designed curriculum for universities. This is because
an online environment should facilitate access and retrieval of digital information, as well as
afford communication with educators and/or other learners.

In schools, the role of ICT tools may be applied in their pedagogical activities, cultural, social
and professional roles, and administrative activities (Hepp, Hinostroza, Laval & Rehbein 2004).
As a pedagogical aid, ICT tools assist in making a classroom experience much more enjoyable,
where students can actively participate in the learning activity. Some examples of the ICT
pedagogical tools that are currently being used include: specially developed interactive software
packages; online collaboration with other students from other countries; and relevant web pages.
However, it is important to remember that a teacher's guidance is an important factor in the success of these activities. Teachers are essential for organising an effective and enjoyable learning space and for guiding/facilitating the students in achieving their learning objectives (Hepp, Hinostroza, Laval & Rehbein 2004).

ICT mobile communication tools such as the tablet, the personal computer (PC), the smartphone and the personal digital assistant (PDA) have been useful teaching aids. Additionally, ICT tools also help in the schools' cultural and social activities. Through online collaborations, aside from
doing educational projects, ICT tools also facilitate students’ understanding of other countries
and cultures. Projects such as SchoolNet and Worldlinks are responsible for delivering high
quality educational resources and online training for teachers (World Links 2010; Pearson
Education 2011). Employing these digital resources as training tools assists the teachers to
amplify the quality of class discussions, expands the students’ horizons and stimulates social
interactions (Hepp, Hinostroza, Laval & Rehbein 2004; Punie 2007).

ICT tools also enhance schools' administrative activities (Table 3.3). On all levels of school
administration (classroom, school and policy makers), ICT tools effectively support the
integration and flow of information such as: student information; curriculum; budgets and
school activities. Moreover, the social context of ICT tools affords a more open communication
with parents and the community (Hepp, Hinostroza, Laval & Rehbein 2004).


Table 3.3. Approaches to ICT development in schools

Emerging Applying Integrating Transforming


• Dominated by • Driven by ICT • Driven by subject • Leadership
individual interest specialists specialists • Acceptance by entire learning
Vision

• Limited • Discrete areas community


• Pragmatic • Network-centred community

• Teacher-centred • Factual • Learner-centred learning • Critical thinking and


• Didactic knowledge-based • Collaborative informed decision-making
pedagogy
Learning

learning • Whole learner, multisensory


• Teacher-centred preferred learning styles
• Didactic • Collaborative
• ICT a separate • Experiential
subject
• Non-existent • Limited • Individual subject plans • ICT is integral to overall
• Accidental • ICT resource-led include ICT school development plan
Development plans and

• Restrictive • Centralised • Permissive policies • All students


policies policies • Broadly-based funding, • All teachers
including teacher training
policies

• No planned • Hardware and • Inclusive policies


funding software funding • All aspects of ICT funding
• Automating integral to overall school
existing practices budget
• Integral professional
development

• Standalone • Computer lab or • Computer lab and/or • Whole school learning and
workstations for individual classroom computers ICT infrastructure and access
administration classrooms for • Networked classrooms, to technology resources and a
• Individual ICT-specific intranet and Internet wide range of current devices
classrooms outcomes • ICT and learning resource- • Emphasis on a diverse set of
computers and • Computers, rich learning centres learning environments
printers printers and • Range of devices • All of the above and web-
• Word processing, limited based learning spaces
Facilities and resources

including: digital cameras,


spreadsheets, peripherals scanners, video and audio • Brainstorming
databases, • Word processing, recorders, graphical • Conferencing and
presentation spreadsheets, calculators, portable collaboration
• School databases, computers, remote sensing • Distance education
administration presentation devices, video- • Web courseware
software • ICT software conferencing • Student self-management
• Games • Internet access • Word processing, software
spreadsheets, databases,
presentation software
• Range of subject-
orientated content
• Multimedia authoring,
video/audio production
• Range of subject-specific
software
• ICT-literacy • Applying • Integration with non-ICT • Virtual and real-time contexts,
Understanding of the

• Awareness of software within content new world modelling


software discrete subjects • Integrated learning • ICT is accepted as a
curriculum

• Responsibility of • Use of artificial systems pedagogical agent itself


individual and isolated • Authentic contexts • The curriculum is delivered by
teachers contexts • Problem-solving project the web as well as by staff
methodology
• Resources-based learning


Emerging Applying Integrating Transforming


• Individual • ICT applications • Subject-specific • Focus on learning and
development for

interest training • Professional skills management of learning


Professional

school staff

• Unplanned • Integrating subject areas • Self-managed, personal vision


• Personal ICT using ICT and plan, school-supported
skills • Evolving • Innovative and creative
• Integrated learning
community – students/
teachers co-learners
• Discrete • Seeking donations • Subject-based learning • Broad-based learning,
donations and grants community providing community actively involved,
• Problem-driven • Parental/ discrete, occasional parents and families, business,
• Accidental community assistance, by request industry, religious
Community

involvement in • Global and local organisations, universities,


ICT networked communities vocational schools, voluntary
organisations
• Global and local, real and
virtual
• School is a learning resource
for the community –
physically and virtually
• Equipment-based • Skills-based • Integrated • Continuous
• Budget-orientated • Teacher-centred • Portfolios • Holistic – the whole learner
• Discrete subjects • Subject-focused • Subject-oriented • Peer-mediated
• Didactic • Reporting levels • Learner-centred • Learner-centred
Assessment

• Paper and pencil • Moderated within • Student responsibility • Learning community


• Controlling subject areas • Multiple media choices to involvement
• Closed tasks demonstrate attainment • Open-ended
• Responsibility of • Moderated across subject • Project-based
individual teacher areas
• Social and ethical as well
as technical
Source: (Buettner et al. 2000)

The schemes implemented in school environments to promote greater acceptance of digital literacy have been delivered through:
• government funds, grants and special initiatives; and
• changes in curriculum to include ICT-literacy in pedagogical activities, schools’
cultural, social and professional roles, and schools’ administrative activities.

These schemes have four different methods of implementation: emerging; applying; integrating; or transforming. Each method focuses on different parts of the school, or different people, for ICT development, and the different implementation methods have influenced the way ICT is being integrated in schools.

3.2.3. ICT-literacy standards

In 2004 the Australian and New Zealand Institute for Information Literacy (ANZIIL) developed
a national standard or framework that provides the principles, standards and practices to support
information literacy education in all educational sectors. Known as the Australian and New
Zealand Information Literacy Framework, it was derived from the Association of College and


Research Libraries (ACRL) information literacy standards (see Appendix A), and was adapted to incorporate local and international information literacy needs. ACRL is a division of the American Library Association. It is a professional association of academic librarians, dedicated to enhancing the ability of academic library and information management professionals to serve the information needs of the higher education community and to improve learning, teaching and research. ACRL is the source the higher education community looks to for standards and guidelines on academic libraries (ACRL 2009). ACRL publishes standards and guidelines to help libraries and academic institutions; these standards, guidelines and model statements are reviewed and updated by ACRL members on a regular basis.

In defining 'information literate', ANZIIL (2008) describes information literate people as those who:

• are engaged in independent learning through constructing new meaning, understanding and knowledge;
• derive satisfaction and personal fulfilment from using information wisely;
• individually and collectively search for and use information for decision-making and
problem-solving in order to address personal, professional and societal issues; and
• demonstrate social responsibility through a commitment to lifelong learning and
community participation.

The ANZIIL framework is based on four principles involving six core standards. These ANZIIL
(2008) standards identify that an information literate person:
• recognises the need for information and determines the nature and extent of the
information that is needed;
• finds required information effectively and efficiently;
• critically evaluates information and the information-seeking process;
• manages information collected or generated;
• applies prior and new information to construct new concepts or create new
understandings; and
• uses information with understanding and acknowledges cultural, ethical,
economic, legal, and social issues surrounding the use of information.

Apart from that, the American Library Association (ALA) and the Association for Educational Communications and Technology (AECT) have also formulated nine information literacy standards for student learning (Figure 3.2). These standards reflect three areas, with three standards in each area, and echo Gagne's (1985) theory of learning, which describes internal (independent learning) and external (social responsibility) learning.

The nine standards are grouped into three areas: information literacy; independent learning; and social responsibility.

Information literacy
Standard 1: The person accesses information efficiently and effectively.
Standard 2: The person evaluates information critically and competently.
Standard 3: The person uses information accurately and creatively.

Independent learning
Standard 4: The person pursues information related to personal interests.
Standard 5: The person appreciates literature and other creative expressions of information.
Standard 6: The person strives for excellence in information seeking and knowledge generation.

Social responsibility
Standard 7: The person recognizes the importance of information to a just society.
Standard 8: The person practices ethical behaviour in regard to information and information technology.
Standard 9: The person participates effectively in groups to pursue and generate information.
Source: (McNaught 2006)

Figure 3.2. The nine information literacy standards by ALA & AECT

These standards provide an excellent foundation for developing a task-based ICT-literacy assessment tool. They represent a benchmark of what a person is expected to achieve in order to be recognised as ICT literate.


Table 3.4. Similarity of ICT components for ICT-literacy

Plan/Define
  ACRL (US): Able to determine the nature and extent of the information needed.
  ANZIIL (AUS): Recognises the need for information and determines the nature and extent of the information needed.
  ISTE (US): Using digital tools to identify and represent any information need.

Access
  ACRL (US): Accesses required information effectively and efficiently.
  ANZIIL (AUS): Finds required information effectively and efficiently.
  ISTE (US): Collecting and/or retrieving information in digital environments.

Integrate
  ACRL (US): Incorporates selected information into his or her knowledge base and value system.
  ISTE (US): Interpreting and representing information, such as by using digital tools to synthesise, summarise, compare, and contrast information from multiple sources.

Evaluate
  ACRL (US): Evaluates information and its sources critically.
  ANZIIL (AUS): Critically evaluates information and the information-seeking process.
  ISTE (US): Judging the degree to which digital information satisfies the needs of an information problem, including determining authority, bias, and timeliness of materials.

Manage
  ACRL (US): Uses information effectively to accomplish a specific purpose.
  ANZIIL (AUS): Manages information collected or generated.
  ISTE (US): Using digital tools to apply an existing organisational or classification scheme for information.

Create
  ANZIIL (AUS): Applies prior and new information to construct new concepts or create new understandings.
  ISTE (US): Adapting, applying, designing, or constructing information in digital environments.

Communicate/Collaborate
  ANZIIL (AUS): Demonstrates social responsibility through a commitment to life-long learning and community participation.
  ISTE (US): Disseminating information relevant to a particular audience in an effective digital format.

Reflect
  ACRL (US): Understands many of the economic, legal, and social issues surrounding the use of information and accesses and uses information ethically and legally.
  ANZIIL (AUS): Uses information with understanding and acknowledges cultural, ethical, economic, legal, and social issues surrounding the use of information.

3.2.4. ICT-literacy and the learning theories

Those who seek to study and improve education through methods of research are inevitably
concerned with the human activity of learning. Learning by definition is 'the act, process or
experience of gaining knowledge or skill’ or ‘behavioural modification especially through
experience or conditioning’ (Dictionary.com 2009). Building on this theme, learning theories
attempt to describe how people (and animals) learn, thereby helping us understand the complex
process of learning. Learning theories have two chief values according to Hill (2002).


One is in providing us with vocabulary and a conceptual framework for interpreting the
examples of observable learning. The other is in suggesting where to look for solutions to
practical problems. These theories do not give us solutions, yet they do direct our attention to
variables that may assist in finding solutions. For example: for teachers, these theories could
provide some guidance in making decisions about instructional strategies. Having students from
different social, economic and cultural backgrounds may be challenging for teachers (Darling-
Hammond et al. 2001). Consequently, to enable students to achieve their goals, the teachers
need to acknowledge these differences and build upon the students’ prior knowledge and
cultures. The learning theories provide the means of addressing this situation.

There are three main categories or philosophical paradigms under which learning theories fall:
behaviourism, cognitivism, and constructivism. Behaviourism focuses on the objectively
observable aspects of learning (Merrill, Li & Jones 1990; Reigeluth & Keller 2009). Cognitive
theories look beyond behaviour to explain mental-based learning. Cognitive science began
shifting from behaviouristic practices (Reigeluth 1983; Gagne 1985), which placed an emphasis
on external behaviour, to a concern with the internal mental processes of the mind and how they
could be utilised in promoting effective learning (Mergel 1998). The constructivist views
learning as a process in which the learner actively constructs or builds new ideas or concepts
(Jonassen 1991; Mayer 2009; Reigeluth & Keller 2009).

In the 1960s, based on behaviourist learning theory, instructional design was introduced as a way of developing instructional programs to identify students' levels of performance, based
on predetermined behaviourally defined objectives. The instructional design paradigm was seen
as an attempt to develop a single, ideal instructional theory that would specify teacher
characteristics, classification and evaluation procedures, and means to modify the learning
objectives being tested (Merrill, Tennyson & Posey 1992; Tennyson 2012). Important to this
instructional paradigm are the individual differences in what each student brings to the learning
task. Instructional design therefore concentrates on the methods of task analysis and the
development of behavioural objectives for learning, including to: 1) identify small, incremental
tasks or sub-skills which students need to acquire for successful completion of the instruction; 2)
prepare specific behavioural objectives that lead to the acquisition of those sub-skills; and 3)
sequence sub-skill acquisition to efficiently lead to successful student outcomes (Tennyson
2012).

It was in the late 1970s that cognitive science began to have its influence on instructional
design. The definition of instructional design at this point shifted to considerations of learning


theory and to the development of models linking those theories to the design of instruction
(Tennyson 2012). The result was an increase of instructional systems design models and
instructional design theories that cover a wide range of perspectives. Instructional design
researchers in the 1970s tried to establish a more complete picture of the conditions of learning
(Gagne 1985) that corresponded closely with a student’s individual cognitive growth (Tennyson
2012). Yet it appears that Benjamin Bloom remains as a principal cognitivism theorist. In 1956
he suggested the classification of educational objectives by organising them according to their
cognitive complexity and introduced his well-known framework called Bloom’s Taxonomy
(Atherton 2005).

Robert Gagne also suggested the need to identify the process of learning domains (Gagne 1985).
Gagne identified three reasons for this need: 1) to identify the different instructional treatment
required; 2) to identify similarity of instructional procedures; and 3) to identify the different
techniques of instructional outcome assessment. According to Gagne, there are numerous
educational content areas (for example: science, language, and mathematics) that exercise
different methods of instruction. As such, each part of the educational content areas is to be
distinguished and handled differently.

Gagne also recognised that similar instructional procedures can be observed through different
content areas. For example: clarifying definitions is one of the common questions asked in most
educational content areas, and is equally applicable either in mathematics, science or languages
(Gagne 2000). Thus it is necessary to correctly relate instructional procedures from one
educational content area to those of another. Subsequently, identifying domains of the process
of learning is important because each learning outcome requires different assessment
techniques. Consequently, we simply cannot use just one general assessment method to measure
whether or not there has been any learning.

Later, Krathwohl (2002) and Anderson et al. (2001) revised the original Bloom's Taxonomy. Instead of treating the cognitive domain as a single dimension, they agreed with Gagne that the cognitive domain is multidimensional, which distinguishes it from the other two taxonomic categories (the affective domain and the psychomotor domain). This anomaly was eliminated in the revised taxonomy by allowing the two aspects of the cognitive domain, a noun and a verb, to form separate dimensions: the noun provides the basis for the knowledge dimension and the verb forms the basis for the cognitive process dimension.


The new knowledge dimension contains four instead of three main categories. Three of them
include the substance of the subcategories of knowledge in the original Bloom’s Taxonomy. A
fourth and new category, meta-cognitive knowledge, provides a distinction that was not widely
recognised at the time the original taxonomy was developed. Meta-cognitive knowledge
involves knowledge about cognition in general, as well as acknowledging awareness of and
knowledge about one’s own cognition (Krathwohl 2002). It is of increasing significance as
researchers continue to demonstrate the importance of students being made aware of their meta-
cognitive activity, and then using this knowledge to adapt appropriately in the ways in which
they think and operate (Krathwohl 2002). Thus the four categories suggested by Krathwohl
(2002) are:
1. factual knowledge: the basic elements that students must know to be
acquainted with a discipline or solve a problem in it;
2. conceptual knowledge: the interrelationships among the basic elements
within a larger structure that enable them to function together;
3. procedural knowledge: how to do something; methods of inquiry, and criteria
for using skills, algorithms, techniques, and methods; and
4. meta-cognitive knowledge: knowledge of cognition in general as well as
awareness and knowledge of one’s own cognition.

Many studies of ICT adapt the instructional/learning domains identified by Bloom and Gagne.
In Ainley, Banks, and Fleming (2002), the revised version of Bloom’s taxonomy of the
cognitive domain by Krathwohl and Anderson et al. was used as the framework for a study on
the use of ICT in Australian schools. The ways in which five Australian schools used ICT tools in their teaching and learning processes were observed; those activities were later coded using the taxonomy and then compared. Ainley, Banks, and Fleming found that many of the schools focused on competencies in using ICT tools and on the importance of developing ICT-literacy skills. These schools also recognised the importance of developing students’ knowledge and cognitive processing capabilities through ICT tools. The study acknowledges that teacher capacity in implementing appropriate activities that actively engage students is also an important feature for the success of blending ICT tools into teaching and learning.

Similarly, a study by Clarkson and Oliver (2002) employed the revised version of Bloom’s
taxonomy of the cognitive domain to develop an instrument that identifies levels of ICT uptake.
The three domains for stages of teacher experiences and dispositions with ICT were closely
matched to Bloom’s three domains of learning (affective, cognitive, and psychomotor). Boud’s
study (in Clarkson & Oliver 2002), and its four stages of learning new material were also
adapted in their instrument that is known as the autonomy, dependence, and learning model
(ADL model) (Table 3.5).

This Clarkson and Oliver model maps teacher feelings, understandings, and behaviours toward
ICT uptake in a 4 x 3 matrix. The assessment instrument was administered to interested teachers
in two Western Australia metropolitan elementary schools. Findings suggest consistency
between what researchers predicted and how teachers perceived their abilities in the matrix
cells.
Table 3.5. Four stages of ICT uptake in the ADL model

Domain (rows): Feelings; Understandings; Behaviours
Stage (columns): Dependence; Counter-Dependence; Independence; Interdependence

Source: (Clarkson & Oliver 2002)

Gagne’s five domains of learning were employed by McKay (2000) in her study of the
interactive effects of cognitive preference and instructional strategies on performance outcomes.
McKay utilised Gagne’s domains of learning in a matrix to support the development of her
testing instrumentation. The matrix also allowed McKay to identify the knowledge performance
bands of her research participants. Since her study involved the ‘cognitive knowledge’, only the
first three of Gagne’s domains of learning were applied in her matrix: verbal information skill,
intellectual skill, and cognitive strategy.

This thesis adapted both Gagne’s and Anderson et al.’s theories as the cognitive instructional
learning structure used in the development of the TBA instrument. As in McKay’s study, the
first three of Gagne’s domains of learning were used. Moreover, the meta-cognitive skill
suggested by Krathwohl and Anderson et al. was also included as the evaluated instructional
objectives in the test instrument development matrix (Table 3.6) (see Chapter-4 sub-section
4.6.2 for the full description of this matrix).

Table 3.6. Test instrument development matrix

Instructional objectives: ICT-literacy (declarative, procedural and meta-cognitive)
Band-A: Verbal information skill (Gagne 1985)
Band-B: Intellectual skill (Gagne 1985)
Band-C: Intellectual skill (Gagne 1985)
Band-D: Cognitive strategy (Gagne 1985)
Band-E: Cognitive strategy (Gagne 1985)
Band-F: Meta-cognitive knowledge (Anderson et al. 2001; Krathwohl 2002)
ICT-literacy indicators: (from literature)

Adapted from: (McKay 2000)

3.3. Part-2: ICT-literacy Assessment and the Malaysian Smart School

The research conceptual framework comprises four parts, building from Part-1 (existing research and standards for ICT-literacy) to Part-2 (ICT-literacy assessment tool and the Malaysian Smart School Standard), Part-3 (the task-based assessment (TBA) tool, covering skills and knowledge) and Part-4 (trainee teachers’ ICT-literacy assessment – final instrument testing).

Figure 3.3. Part-2 of the research conceptual framework

This sub-section of the chapter concentrated on Part-2 of the conceptual research framework
(Figure 3.3).

Many studies that developed an ICT-literacy assessment instrument applied the higher education
ICT proficiency model (see Figure 3.4) that had been developed by the ICT Literacy Panel and
the Educational Testing Service (ETS). Hignite, Margavio and Margavio (2009) developed an
ICT assessment instrument where the examinees were required to complete 15 ICT-based tasks
designed to evaluate the examinees’ cognitive and/or critical thinking skills. In turn, these tasks
also captured the examinees’ ability to: define; access; evaluate; manage; integrate; create; and
communicate information. Calvani et al. (2008) adopted the ICT-literacy components from the
higher education ICT proficiency model (Figure 3.4) into their framework, known as the ‘digital
competence framework’.

The higher education ICT proficiency model depicts ICT-literacy as seven proficiencies – define, access, manage, integrate, evaluate, create, and communicate – spanning the cognitive, ethical and technical domains.

Source: (Williamson, Katz & Kirsch 2005)

Figure 3.4. The higher education ICT proficiency model

Based on this framework they developed an ICT assessment instrument known as the digital
competence assessment (DCA), which is separated into three sub-tests known as instant DCA,
situated DCA, and projective DCA.

3.3.1. Assessing ICT-literacy

In their report, the ICT Literacy Panel suggested a richer way of collecting ICT-literacy capability data through a series of computer-based simulation tasks that integrate both the cognitive and technical domains, arguing that valuable information would be lost if the assessment were not conducted in ‘real-world settings’ (International ICT Literacy Panel 2002). The Panel also
concluded that: 1) ICT-literacy must include both critical cognitive skills as well as the
application of the technical skills and knowledge; 2) the concept of the digital divide must
include the impact of limited reading, numeracy, and problem-solving skills; and 3) a
measurement instrument is critically needed to measure ICT-literacy that will assess the full
domain of knowledge and skills (International ICT Literacy Panel 2002).

Markauskaite, in her 2005 study, developed a model to evaluate ICT-literacy. She defines ICT-
literacy as: "a broad transferable set of cognitive, non-cognitive and metacognitive capacities as
well as other human attributes, related to the use of ICT in various spheres of a knowledge
society" (Markauskaite 2005b, p. 253). The model she developed is based on two products of
social-cognitive theory: self-efficacy theory and theory of planned behaviour (Markauskaite
2005a). It is divided into two constructs: firstly, one that measures general cognitive
capabilities; and secondly, another that measures technical capabilities, with a uniform structure
applied for both constructs. Later in 2007 she developed her model further by adapting the
higher education ICT proficiency model developed by the ICT Literacy Panel and integrated it
with several other information literacy and technological literacy models: 1) ANZIIL; 2)
International Society for Technology in Education (ISTE|NETS – standards for learning, leading
and teaching in the digital age); and 3) Eisenberg and Johnson’s Big-6 problem-solving
framework (in Markauskaite 2007). The Big-6 is a six-step model that can be used to help one
make decisions by using information (Eisenberg, Johnson & Berkowitz 2010). The six steps are
as follows: task definition; information-seeking strategies; location and access; use of
information; synthesis; and evaluation. The nine main areas of ICT-literacy in Markauskaite’s
study were based on the higher education ICT proficiency model and ANZIIL, and were
identified as the structure for the instrument (see Table 3.7).

Table 3.7. Constructs and structures

Structures (the main areas of ICT-literacy): 1. Plan; 2. Access; 3. Manage; 4. Integrate; 5. Evaluate; 6. Create; 7. Communicate; 8. Collaborate; 9. Reflect
Constructs (applied to each area): Technical capabilities; General cognitive capabilities

Source: (Markauskaite 2007)

In the Markauskaite study, the results reveal that trainee teachers need to improve their confidence in their cognitive and technical ICT capabilities. This improvement could be achieved by integrating problem-based tasks into their ICT-related courses. This integrated model of ICT-related capabilities may assist trainee teachers to better understand ICT-literacy.

Several studies have been conducted on trainee teachers and in-service teachers (Graham & Glen 1997; Dawes 2000; Luke 2001; Knezek & Christensen 2002; Jamieson-Proctor, Burnett, Finger & Watson 2006). Most of them concentrated on perceptions and attitudes, as applied either to preparedness to integrate ICT as a tool, or to teach ICT as a subject in the curriculum.

There are, however, contradictory findings from these studies in terms of trainee teachers and in-service teachers. Albion (2003b, 2003a), in his interviews with trainee teachers at one Sydney university, found that, compared to their predecessors, current trainee teachers are prepared and willing to use and integrate ICT to enhance their instructional strategies. Yet Markauskaite (2007), using a different research approach, found that trainee teachers were only between quite confident and moderately confident with their basic and advanced technical computer skills.

There is also a suggestion that trainee teachers’ reluctance to use ICT in their instructional strategies may be the result of insufficient ICT pedagogical training in their teacher training institutions (Cuckle & Clarke 2002). Cuckle and Clarke (2002) found that trainee teachers do have good ICT skills for their personal academic use, yet they struggled to implement these skills in an instructional classroom environment.

For in-service teachers, there seem to be many barriers that may inhibit their ICT usage in the classroom. Dawes (2000) and Jamieson-Proctor et al. (2006) found that teachers’ confidence levels were a very important factor: teachers with low or no confidence will try to avoid using ICT altogether. Technical support from the school, and the quantity and quality of training received, correlate significantly with teachers’ competence and anxiety levels (Graham & Glen 1997). Aside from that, resistance to change was a major issue, particularly among the more experienced teachers (Jamieson-Proctor, Burnett, Finger & Watson 2006).

Ball and Levy (2008) tried to investigate whether computer self-efficacy (CSE), computer
anxiety (CA) and experience with the use of technology (EUT) contribute to educators’ intention
to use (IU) educational technology in their classrooms. The study was conducted at a small
private university in the USA. Through their survey instrument, the results show that computer
self-efficacy was the only significant predictor for intention to use.

Another dilemma with teachers integrating ICT into their work is the way in which ICT tools are being used in schools. Most teachers prefer to use ICT to enhance their current curriculum rather than transform the curriculum with the advancement of ICT (Luke 2001).
Instead of re-designing their current curriculum to make maximum use of the ICT capabilities,
teachers prefer to use ICT as a tool that could expedite their current way of teaching. Thus ICT
tools were mainly used as substitutes for: the typewriter, calculator, or audio-visual (AV)
equipment.

The British Educational Communications and Technology Agency (BECTA) (June 2004) conducted an academic review several years ago of teachers’ problems in taking up ICT, classifying these barriers into two categories, known as external barriers and internal barriers (Table 3.8).

Table 3.8. Barriers to teachers uptaking ICT

External barriers:
• Lack of time
• Lack of access to resources (lack of hardware, inappropriate organisation, poor quality software)
• Lack of effective training
• Technical problems

Internal barriers:
• Lack of confidence
• Resistance to change and negative attitudes
• No perception of benefits

Source: (BECTA June 2004)

Research suggests that the issue of low acceptance of ICT can be managed if the internal barriers are addressed. There is no point in providing all the equipment and training if teachers’ confidence and attitudes are not changed. This report (BECTA June 2004) also suggests that other barriers can influence the confidence barrier. Teachers’ confidence (an internal barrier) can be affected by three external barriers: technical support; lack of effective training; and lack of access to resources. Teachers with low confidence levels have a high expectation of technical problems occurring, and therefore avoid using ICT altogether. Teachers with low confidence levels also have a higher probability of choosing not to participate in any optional training, since many of them are self-conscious and do not want to embarrass themselves in front of their colleagues. And, finally, teachers with low confidence levels may also avoid seeking access to ICT themselves (Figure 3.5).

Source: (BECTA June 2004)

Figure 3.5. Relationships between confidence barrier and other barriers

Demographic issues such as gender, experience and age were also among the typical research factors studied in relation to teachers’ confidence and attitudes. Graham and Glen (1997) found that male teachers use ICT tools more than female teachers, and that their anxiety levels were lower than those of female teachers.

This disparity concurs with Jamieson-Proctor et al.’s (2006) findings, where male teachers reported significantly higher confidence in using ICT for teaching and learning. Yet Havelka (2003) found no difference between genders. Markauskaite (2006) discovered that when the impact of the background and ICT experience variables was controlled, gender failed to explain general cognitive abilities, ICT technical abilities, sustainability of ICT capacities and transferability of ICT. Instead, the most influential factor was the time spent on various computer activities.

However, Karsten and Schmidt (2008) do not totally agree with Markauskaite’s finding. They conducted a comparative study of the computer self-efficacy of students enrolled in introduction to information systems courses in 1996 and 2006. They found that computer experience and time spent on computers do not necessarily translate into better computer self-efficacy. Their findings suggested that the students might have spent more time on computers communicating with each other (for example through social networking, chatting and emailing) rather than doing the task-related or problem-solving exercises required by the course. Therefore, their skills may be limited to those that enable them to communicate, which is mainly typing text or numbers. They also believe that the computer skills required in some classes may be narrow and limited (for instance, limited to using Word and PowerPoint only). Karsten and Schmidt found that, when comparing students in 1996 and 2006, there were no significant differences in their levels of computer self-efficacy. In fact, when gender, class level, computer experience, and frequency of use were controlled, computer self-efficacy for the 2006 students was significantly lower than for the 1996 students. Karsten and Schmidt (2008) proposed that changes in computer self-efficacy depend on the type of information and experience the students were exposed to, not on experience of use per se.

Aside from gender, experience and age also influence teachers’ levels of confidence and attitude. Research shows that generation-Y trainee teachers have more experience with ICT, and are therefore more confident and have a more positive attitude towards integrating ICT into teaching and learning (Albion 2003a, 2003b).

3.3.2. ICT-literacy in Malaysia

In Malaysia, ICT-literacy is actively promoted in schools through the Malaysian Ministry of Education’s agencies. More than 50,000 teachers have either been through or are currently participating in ICT courses. The Ministry has made it compulsory for all teacher trainees at the teacher training colleges to be exposed to ICT-literacy, and the use of ICT in pedagogy (Chan 2002b).

However, looking at the current Malaysian school scenario, and in particular the MSS, Chan
(2002a) identified a serious gap between what is being understood and what is being practised
with regard to information literacy.

Most schools assume that information literacy is librarian-teacher oriented; therefore it is not
considered to be part of the curriculum. The information literacy competency standard for
higher education endorsed by the American Association of Higher Education (AAHE) and the
ACRL defines information literacy as:

an intellectual framework for understanding, finding, evaluating, and using information – activities which may be accomplished in part by fluency with information technology, in part by sound investigative methods, but most important, through critical discernment and reasoning (Association of College and Research Libraries 2000, p. 3).

The ACRL states that information literacy is the ability to: 1) determine the extent of
information needed; 2) access the needed information effectively and efficiently; 3) evaluate
information and its sources critically; 4) incorporate selected information into one’s knowledge
base; 5) use information effectively to accomplish a specific purpose; and 6) understand the
economic, legal and social issues surrounding the use of information and access, and use
information ethically and legally (The Association of College and Research Libraries 2000).

Some of the teachers appear to not fully understand the term information literacy and how it
could relate to their teaching practice. In her paper, Chan (2002a) suggested resource-based
learning as a tool to encourage information literacy, where it is believed that using this type of
technology tool could encourage interactive, collaborative and self-directed learning (McKay
2008).

A number of research studies have looked at various ICT topics in Malaysia, yet most of them
involved: perception and attitude towards the usage or integration of ICT in teaching and
learning (Abang Ahmad, Hong & Aliza 2001; Noor Azizi & Basariah 2005); technical abilities
and differences of ICT competencies between the genders (Wong et al. 2005; Megat Aman
Zahiri, Baharuddin & Jamalludin 2007); differences regarding ICT competencies between
different courses of study and academic achievement (Megat Aman Zahiri, Baharuddin &
Jamalludin 2007); and computer self-efficacy, anxiety and attitudes (Hong, Abang Ekhsan &
Zaimuarifuddin Shukri 2005).

Currently, no research has been conducted on assessing both users’ technical and cognitive abilities.

Out of those studies on ICT that were conducted, several were on teacher educators and teacher
trainees (Abang Ahmad, Hong & Aliza 2001; Megat Aman Zahiri, Baharuddin & Jamalludin
2007). One study found that teacher educators had positive attitudes and low levels of anxiety
when working with computers, yet most of them used computers mainly for preparing exercises
and examinations (mean 3.18). The lowest means were for using computers to support teaching
and learning (mean 2.04 and 2.06) (Abang Ahmad, Hong & Aliza 2001). When looking at the
factors that would significantly identify the relationship between demographic factors and
teacher trainees’ levels of competencies in using computers, Megat Aman Zahiri Megat Zakaria,
Baharuddin Aris and Jamalludin Harun (2007) found no significant relationship between gender,
course of study and academic achievement. Their research involved 379 teacher trainees from
Universiti Teknologi Malaysia and it was concluded that the teacher trainees have high ICT
competency levels. Seven ICT skills were tested: 1) ability to explain ICT-related hardware; 2)
handling of ICT hardware; 3) ability to identify ICT hardware/software problems; 4) ability to
use software for teaching and learning; 5) ability to use word processing and presentation
software; 6) ability to use the Internet for finding information/material; and 7) ability to use the
Internet for communication (Megat Aman Zahiri, Baharuddin & Jamalludin 2007).

In order to teach students to be ICT literate, teachers should have a sufficient level of fluency in ICT.

Based on these research studies, it may be concluded that the expected ICT abilities that have been included in ICT assessment instruments thus far include the ability to:
• plan;
• access;
• manage;
• integrate;
• evaluate;
• create;
• communicate;
• collaborate;
• reflect;
• incorporate selected information into one’s knowledge base;
• understand the economic, legal and social issues surrounding the use of information and
access, and use information ethically and legally;
• explain and handle ICT hardware;
• identify ICT-related problems/troubleshooting;
• use software for teaching and learning activities; and

• use the Internet for finding information/materials and communications.

This thesis incorporates these expected abilities into the ICT-literacy TBA
instrument.

Other findings from research studies in ICT-literacy also include factors that may encourage or
hinder the use of ICT in classrooms. These factors include: insufficient ICT pedagogical
training during their teacher training programs; lack of confidence; resistance to change; lack of
access to resources; and technical problems. These factors may constitute the main barriers that
inhibit the use of ICT for teaching and learning.

3.3.3. Commercialised ICT-literacy assessment tools

The higher education ICT proficiency model has also been adapted in commercialised ICT-
literacy tests. However, these commercially developed assessments (e.g. ECDL/ICDL, Prentice
Hall Train & Assess IT (TAIT), and iSkillsTM) require the candidate to pay a fee before they
attempt the test. The test fees range from US$22 to US$60 per candidate per test (some
assessment is a combination of more than one test). In Malaysia, the PC competency test
(PCCT) was introduced in January 1999 (Wong 2002). PCCT is a Windows-based
internationally recognised test, used to measure users’ computer literacy (in Wong 2002). The test modules assess users’ ability to understand and apply the basic concepts of: ICT; computer and file management; word processing; spreadsheets; database filing systems; presentation; drawings; and information network services. Each of the test modules costs RM60 (Malaysian Ringgit).

In other countries, among the many commercially developed ICT-literacy assessment tools, the
European/International Computer Driving Licence (ECDL/ICDL), the TAIT testing tool, and
iSkillsTM by ETS are among the most highly used tools.
• European/International Computer Driving Licence (ECDL/ICDL)
The European Computer Driving Licence Foundation (ECDL Foundation 2013), registered
in Ireland, is the worldwide governing body and licensing authority for ECDL (European
Computer Driving Licence) and ICDL (International Computer Driving Licence). The end-user computer skills certification range consists of ten certification programs for different levels of end-user skills. They involve:
o EqualSkills: The program is taught and assessed using a paper-based workbook.
Designed for complete beginners and is open to everyone regardless of status,
education, age, ability or understanding;
o ECDL: The program evaluates candidates’ competency on basic concepts of IT
and the use of a personal computer and common computer applications at a basic
level of competence. It consists of seven modules (concepts of IT, using the computer and managing files, word processing, spreadsheets, databases, presentation, and information and communication) that test knowledge and skills in using a computer;
o ECDL Advanced: A higher-level program designed for those who have
successfully reached ECDL/ICDL skills levels and wish to further enhance their
computer proficiency;
o ECDL CAD: Offers the opportunity to certify candidates’ core 2-dimensional
Computer Aided Design (2D CAD) skills to an international standard;
o ECDL ImageMaker: An ideal certification for second-level students, small
businesses, community groups or individuals who wish to acquire the skills to
work with digital images, without having to commit to the time and expense of a
professional-level digital image editing certification;
o ECDL WebStarter: A certification designed to give candidates the skills required to design, create and maintain a website;
o ECDL Health: The European Commission Concerted Action (EDUCTRA)
identified that the informatics educational needs of health professionals are
different from those of other IT practitioners. Therefore, this certification would
address these needs and ensure sustainable and safe implementation of health
informatics systems through education, assurance, and empowerment of end users;
o E-Citizen: Designed to help candidates get the most out of the Internet by showing that it can be used for a range of purposes. The certification addresses how to deal electronically with government departments, find information, buy products and communicate online with family and friends;
o EUCIP: A professional certification and competence development scheme aimed
at IT practitioners and undergraduates. Offers certification of ICT competence at
an intermediate educational level to ensure a common standard which is accepted
by industry, government and public organisations; and the
o CTP: The program has been designed to reflect the reality of professional IT
training. In order to become a Certified Training Professional (CTP), individual
trainers do need to provide, through the Trainer Evidence Record, evidence
(documentary and performance evidence) that they satisfy the skills and
knowledge requirements of the program.

The ECDL program itself consists of 13 different modules that represent different ICT
skills and competencies. The modules involve:
o Module 1 – concepts of information and communication technology (ICT);
o Module 2 – using the computer and managing files;
o Module 3 – word processing;
o Module 4 – spreadsheets;
o Module 5 – using databases;
o Module 6 – presentation;
o Module 7 – web browsing and communication;
o Module 8 – 2D Computer-Aided Design (CAD);
o Module 9 – image editing;
o Module 10 – web editing;
o Module 11 – Health Information Systems (HIS) usage;
o Module 12 – IT security; and
o Module 13 – project planning.

To achieve a solid base of skills and knowledge, and thereby attain a minimum level of ICT-literacy, candidates must complete and attain certification in a minimum of four ECDL/ICDL modules. This is known as the ECDL/ICDL Start Certification. Modules 2, 3 and 7 are compulsory, and candidates can choose any one of the other modules. For the full ECDL/ICDL Certification, candidates have to complete the three compulsory modules (modules 2, 3 and 7), plus another four additional modules from the 13 available modules. However, modules 8 to 13 are not available in all countries.

For university students, once they have registered for ECDL/ICDL certification, they will
receive a ‘skills card’ on which they record their progress through the modules. Once all
the modules have been completed, a skills card is submitted in order to receive the
ECDL/ICDL certification. The skills card costs around €55, and the ECDL/ICDL exam
costs around €20 per module (Computer Training Centre UCC 2012). Any repeat exams
cost an additional €20 each. The questions for these ECDL/ICDL exams take the form, however, of a step-by-step instruction test where candidates need to follow and perform the required task given (see ECDL Foundation 2013). It is proposed here that the test is too general and that the questions lack flexibility and do not encourage critical or analytical thinking. An example of such a task is shown below:

o Open the file called stadium.doc from your Candidate Disk. Apply a shadow, small
caps font effect with a single line underlining of just the words to the title New
Stadium for Newburgh on page 1. [5 Marks]
o Delete the comment that is attached to the word Planning Chief on page 1. Attach
the comment Check the spelling of Regis to the name Joseph Regis. [5 Marks]

• Prentice Hall Train and Assess IT (TAIT) testing tool


The TAIT testing tool is a software application developed by Prentice Hall, which offers
both training and assessment of ICT competency. The content of the training and
assessment is based on the Microsoft Office applications. The training component allows candidates to learn and review topics on Microsoft Office applications using interactive, multimedia, computer-based training. The assessment component offers task-oriented computer-based testing on the same topics covered in the training (Prentice Hall 2008). The TAIT managed to improve the withdrawal/fail rate of the introductory computer course at DeVry-Kansas City (Pearson 2007). The withdrawal/fail rate dropped from between 25% and 30% to 13.64%. In the end, teachers at the school agreed that
teaching and testing using the TAIT was a successful computer course teaching method.

The TAIT is self-paced, deliverable anywhere with Internet access, and adaptable to each
candidate’s level of knowledge. Unlike the traditional, lecture-based model of course
delivery, where students are passive recipients of information, the TAIT enhances course
delivery by actively engaging students (Speckler 2006). Moreover, the TAIT tests many different skills and knowledge bases, and the contents from different subjects can be added, deleted or customised. However, one pitfall of this testing tool is its lack of flexibility and its inability to recognise different ways of giving a correct answer to a specific task. The tool might mark the result as incorrect if the candidate used a different
method that the tool did not recognise, yet the candidate still accomplished the task
(Robbins & Zhou 2007). This tool is also too general for teachers’ needs. This tool
costs around US$60, which includes the right to use all the materials and tests in TAIT
for a whole semester.
• iSkillsTM
The ETS has developed its own version of an ICT-literacy assessment tool known as
the iSkillsTM that can be used to (ETS.org 2008):
o measure your students’ ability to navigate, critically evaluate and make sense of
the wealth of information available through digital technology;
o test the range of ICT-literacy skills aligned with nationally recognised ACRL
standards; and
o help identify where further curriculum development is needed so students have
the ICT-literacy skills they need to succeed.

The iSkillsTM assessment tool has ‘real-world’ simulated scenarios that test topics and examinees’ ability to manipulate the technology needed to complete tasks such as extracting information from a database, developing a spreadsheet, or composing an email (ETS.org 2008).

iSkillsTM is a combined effort from the University of California, Los Angeles (UCLA) and
the ETS. After attending the meeting of the ETS National Higher Education ICT Literacy
project, the director of the UCLA Information Literacy Initiative was fascinated by the idea of new literacy (ETS.org 2008). This interest triggered the development of iSkillsTM as
an assessment tool. The test evaluates candidates’ ability to perform several scenario-based
tasks that also assess their ability to: define; access; manage; integrate; evaluate; create;
and communicate digital information (see Figure 3.6).

Figure 3.6. Example of the iSkillsTM assessment scenario-based question (ETS.org 2008)

The positive aspect of this tool is that, instead of giving step-by-step instructions of what to do
next, iSkillsTM describes the tasks that a candidate needs to accomplish and it is up to the
candidate how to perform the tasks. The only drawback of this evaluation tool is that the tasks
in the test are too general and not tailored to suit the ICT-literacy needs of trainee teachers.
However, the initial idea of using scenario-based tasks or task-based assessment was very
appealing and was therefore integrated into the ICT-literacy TBA instrument developed for this
thesis.

3.4. Part-3: Task-based Assessment

The research conceptual framework comprises four parts, building from Part-1 (existing research and standards for ICT-literacy) to Part-2 (ICT-literacy assessment tool and the Malaysian Smart School Standard), Part-3 (the task-based assessment (TBA) tool, covering skills and knowledge) and Part-4 (trainee teachers’ ICT-literacy assessment – final instrument testing).

Figure 3.7. Part-3 of the research conceptual framework

This sub-section of the chapter concentrates on Part-3 of the conceptual research framework
(Figure 3.7).

Task-based assessments are designed to measure the knowledge, skills and judgement required for competency in a given domain. Task-based assessment has predominantly been used in medical and language research (Long & Crookes 1992; Swanson, Norman & Linn 1995; Robinson & Ross 1996; Smee 2003). The assumption made here is that the closer the tasks are to real-world ones, the more valid the assessment will be.

3.4.1. Task-based assessment issues

Smee (2003) explored the reliability of the objective structured clinical examination (OSCE). It was a flexible test, based on a circuit of patient-based stations. At each station, the participants interacted with a patient or a simulated patient and were required to demonstrate specified skills. Smee (2003) found that planning was very important, as a lack of planning would affect the cost and timing of the test. The research also argued that there were limits to what can be simulated, and that the OSCE relied heavily on a task-specific checklist; this becomes a less relevant criterion as the clinical experience of the participant increases.

Swanson, Norman and Linn (1995) revealed eight lessons to be learned from the task-based
assessment in medicine:
1. the fact that participants are tested in realistic task situations does not make test design
and domain sampling simple and straightforward. Sampling must consider both context
(situation/task) and construct (knowledge/skill) dimensions, and complex interactions
are present between these dimensions;
2. no matter how realistic a task-based assessment is, it is still a simulation, and
participants do not behave in the same way they would in real life;
3. while high-fidelity task-based assessment methods often yield rich and interesting
participant behaviour, scoring that rich and interesting behaviour can be problematic. It
is difficult to develop scoring keys that appropriately reward alternate answers that are
equivalent in quality, both because of poor consensus on scoring keys and because of
scoring artefacts resulting from variation in response style;
4. regardless of the assessment method used, performance in one context does not predict
performance in other contexts very well. In-depth assessment in a few areas results in
scores that are not sufficiently reproducible for use in high-stakes testing;
5. correlational studies of the relationship between task-based test scores and other
assessment methods targeting different skills typically produce variable and
uninterpretable results. Validation work should emphasise the study of threats to the
validity of score interpretation, not general relationships with other measures;
6. because task-based assessment methods are often complex to administer, multiple test
forms and test administrations are required to test large numbers of participants.
Because these tests typically consist of a relatively small number of independent tasks,
this poses formidable equating and security problems;
7. all high-stakes assessments, regardless of the method used, have an impact on teaching
and learning. The nature of this impact is not necessarily predictable, and careful studies
of (intended and unintended) benefits and side effects are obviously desirable but rarely
done; and
8. neither traditional testing nor task-based assessment methods are a panacea. Selection of
assessment methods should depend on the skills being assessed and, generally, use of a
blend of methods is desirable.

Although they were generated from medical research, the eight lessons for task-based assessments can be applied to other fields of study. Bachman (2002) and Robinson and Ross (1996) agree with Swanson, Norman and Linn’s (1995) eight lessons. The test design and domain sampling for a task-based test are not simple and straightforward, and if the test developers themselves are not sure about the task specifications for the test, it is proposed here that this may inevitably lead to vagueness in measurement.

The demands and requirements for validation arguments, and the kinds of evidence that need to be collected in support of inferences of ability, particularly in the context of performance assessment, have been discussed extensively in the literature (Linn, Baker & Dunbar 1991; Messick 1996; Bachman 2002). Messick (1996) and Linn, Baker and Dunbar (1991) caution the task-based test developer about the need to address the criteria or technical issues in developing these tests; having the task questions as close as possible to the actual task does not mean that they are more valid. The criteria that test developers need to address are: consequences; fairness; equity; bias; transfer and generalisability; cognitive complexity; content quality and coverage; meaningfulness; cost; and efficiency.

It is evident that trying to develop a task-based instrument requires a lot more than just trying to simulate the real activity. Therefore, planning, test reliability and validity were important tasks for the researcher in this study.

3.4.2. Task-based test design

Task-based assessments can be performed in two ways. One is when the criterion of success or
failure is based on the ability to perform the task. This is known as a performance-referenced
task-based test. The other is when the test is used simply to acquire samples of the participant’s
linguistic knowledge or generalised verbal ability, as in oral proficiency interviews or a multiple
choice reading comprehension test, which is known as a system-referenced task-based test
(Robinson & Ross 1996).

The system-referenced task-based test is easily generalised, constructed and administered. However, like any pen-and-paper test, it lacks face validity and may not represent the actual requirements of the test. The test also requires the test developer to examine each skill’s components separately, rather than as a whole (Robinson & Ross 1996). The performance-referenced task-based test, on the other hand, measures the criterion performance directly. Apart from this, there are two other intersecting dimensions: the direct and indirect test (Robinson & Ross 1996). They involve:
• direct system-referenced test: obtaining proof that demonstrates the skills. For example,
an oral interview which is then analysed with reference to its component parts, such as
the grammar or vocabulary;
• indirect system-referenced test: requires the participant to demonstrate knowledge of
specific aspects of the system, such as multiple choice questions about vocabulary;
• direct performance-referenced test: what the participants have to do in the test exactly
simulates what the participants would have to do in the real world; and
• indirect performance-referenced test: the criterion and test performance are not similar.
This is due to the breaking down of the criterion performance into more manageable
subtasks that are then examined separately.

One point a researcher needs to consider is that indirect tests involve some loss of validity when compared with direct testing. Instead of requiring inferences about the relationship of the indirect test to the criterion performance, direct tests measure the criterion performance directly, and can be interpreted as either mastery or non-mastery (Griffin & Nix 1991).

3.5. Chapter-3 Summary

This chapter represented Phase-1 of this research study (for the research design see Chapter-4 section 4.3). It explored the connection between ICT-literacy and the building of a knowledge society, as well as the school’s role in facilitating that knowledge society. Findings from other studies on ICT-literacy assessment, standards developed for ICT-literacy, and ICT-literacy circumstances in Malaysia were also explored.

These findings form part of the constructs identified for developing the proposed ICT-literacy TBA instrument.

Chapter 4: Design and Methodology

4.1 Overview

The previous chapters have established the theoretical framework on which this study is based.
The study’s three key research phases involve: identifying indicators to assess ICT-literacy of
trainee teachers in Malaysia; developing a suitable TBA instrument for assessing the ICT-
literacy level of trainee teachers in Malaysia; and identifying how the TBA instrument could
evaluate the level of ICT-literacy of trainee teachers in Malaysia (see section 4.3 and Figure 4.1
which explain research design).

This chapter discusses the research design and methodological approaches which inform the
choice of data collection techniques to conduct the empirical part of this thesis. Mixed methods
were used to collect the data. The selection of particular data collection strategies is discussed,
alongside the methodological techniques used here; they involve seeking expert opinion and
development of the TBA instrument. This chapter also discusses and explores the proposed data
analysis techniques and explains how the reliability and validity of the TBA were tested.

The chapter sections are organised as follows:


• The choice of methods: research techniques;
• The research design;
• Data analysis techniques;
• Validity and reliability;
• Ethical issues; and
• Chapter-4 summary.

4.2 The Choice of Methods: Research Techniques

A mixed methodology technique was selected because it better facilitates answers to the research questions, and provides for a richer understanding of the qualitative/quantitative nature of the research. By employing these mixed techniques the research stands a better chance of providing comprehensive findings, and of reducing the possibility of bias arising from the researcher’s interpretation of the data (Creswell & Plano Clark 2007).

To better understand what was considered to be the important ICT knowledge and skills that a
trainee teacher should have, qualitative research techniques were chosen. Firstly, a review of the
literature was conducted; and secondly, insights and experiences from a panel of experts (PoE)
were obtained. These PoE members’ perspectives and views were important, as most of the
models for ICT-literacy suggested in the literature were studies from Australia, the USA and
Europe.

It is vital for the study to understand how these internationally based models
compare with the Malaysian trainee teachers’ ICT knowledge and skills.

For this reason, the PoE was carefully selected to enable a cultural comparison. The panel
consisted of: the current MSS ICT coordinator; academics in the area of educational technology;
and a relevant officer from the Malaysian Ministry of Education. The aim of the panel
consultation was to decide on which pre-identified ICT knowledge and skills best suit
Malaysia’s school environment.

Consequently, the qualitative research outcomes inform this study on how to attain the answer to
the second research question:

How can the proposed ICT-literacy TBA instrument evaluate trainee teachers’ ICT-literacy levels? (see Chapter-1 for the background discussion).

Therefore, an assessment instrument was developed based on the test-items agreed to by the PoE
members. The quantitative research methods were later used in order to evaluate the current
level of Malaysian trainee teachers’ ICT-literacy. Undoubtedly, this study benefits from a mixed
method research design and the reasons for applying it are (Creswell & Plano Clark 2007):
• using only one approach to research is inadequate when addressing the research
problems;
• the quantitative design part can be enhanced by the qualitative data;
• the qualitative data provide richer and better explanations for the quantitative design; and
• the qualitative data make it possible to address the problem adequately, and quantitative results are then used to further understand the problem.

4.3 The Research Design

The study was divided into three phases, based on the three processes in the research design
(Figure 4.1):
1. Phase-1: preliminary review;
2. Phase-2: expert judgement on ICT-literacy indicators; and
3. Phase-3: instrument validation and testing.

The following sub-sections – 4.3.1, 4.3.2 and 4.3.3 – explain each of these phases.

Figure 4.1. Research design

4.3.1 Phase-1: Preliminary review

The first part was the preliminary review where information on ICT-literacy and ICT-literacy
assessment was reviewed by the researcher (Figure 4.1). The ICT-literacy standards and the
MSS project requirement for ICT-literacy were also examined. ICT-literacy indicators that were
suggested in the literature as being relevant to ICT-literacy assessment were compiled.

4.3.2 Phase-2: Expert judgement on ICT-literacy indicators

This study adopted a reiterative Delphi technique in order to verify suitable ICT-literacy
indicators in an educational context for trainee teachers in Malaysia. Seven experts were chosen
by the researcher for this process. This phase was divided into two parts, Delphi-1 and Delphi-2, and involved five discrete steps: 1) selecting PoE members; 2) evaluation of the ICT-literacy indicators (LI); 3) feedback summarisation; 4) LI re-evaluation; and 5) feedback summarisation, as depicted in the middle section of Figure 4.1. These steps are clarified as follows:
• Step-1: selecting PoE members. The PoE was chosen by the researcher, based on the
member’s expertise and background in ICT education and (training) trainee teachers;
• Steps-2 to 5: LI evaluation and feedback. Still within Delphi-1, the PoE members were
given a list of ICT-literacy indicators (deemed the preliminary TBA) that had been
identified by the researcher in the preliminary literature review (Phase-1). They were
asked to verify the suitability of these ICT-literacy indicators and provide their views
about them; and
• Steps-6 to 9: PoE validation activities. Then, in Delphi-2, a new version of the TBA
instrument was developed based upon the ‘PoE-verified ICT-literacy indicators’. The
PoE members were again required to validate the suitability of the new TBA instrument
for evaluating the ICT-literacy for trainee teachers, as well as suggesting improvements
for the new TBA instrument.

4.3.3 Phase-3: Pilot testing, validation and final instrument testing

After the required amendments were made to the TBA instrument (based on Delphi-2), it was
again validated and tested on real trainee teachers:

• Pilot testing-1: After completing Step-9, an instrument pilot testing was conducted on the
data by the researcher to confirm the TBA’s test-item fit to the Rasch IRT model.
Misfitting test-items were either restructured or discarded;

• Pilot testing-2: The TBA instrument was subjected to the second pilot test and further
amendments were made to misfitting test-items; and

• Final instrument trial: The final version of the TBA instrument was then re-tested on
all semester-four trainee teachers from eight faculties in the Sultan Idris Education
University (UPSI). A total of 382 trainee teachers were invited to participate with all
faculties represented except for Science and Mathematics. However, only 148 trainee

teachers were willing to participate in this study. The difficulty of getting more willing
participants was partly due to time constraints and the location of the computer
laboratory used for this study.

Using the three above-mentioned phases of research design as the roadmap, this thesis continues
by explaining the data analysis technique employed.

4.4 Data Analysis Technique

The Delphi technique and the Rasch IRT model were applied for the qualitative and quantitative
parts of this study respectively (see sub-sections 4.3.1 to 4.3.3 for the description of the
qualitative approach).

4.4.1 Qualitative data analysis


The Delphi technique was chosen as the most suitable philosophical/research approach for
conducting the expert judgement phase of this study. This technique is described as:

a method for structuring a group communication process so that the process is


effective in allowing a group of individuals, as a whole, to deal with a complex
problem (Linstone & Turoff 2002, p. 3).

The Delphi technique usually involves sending a questionnaire, which may be structured or
relatively unstructured, to the respondents who in this study make up the PoE (see Step 1, Figure
4.1). The responses were collected and a summary of all the feedback was created. The original
questionnaire was then redistributed, accompanied by the anonymous summary of responses.
The PoE members were then invited to confirm or to modify their previous response based on
other experts’ views listed in the anonymous summary of responses. They were also allowed to
contradict other experts’ opinions. This procedure is repeated for a predetermined number of
rounds or until some predetermined criteria has been fulfilled. The PoE may also be asked to
give an explanation or justification for their response. Thus Delphi typically involves a number
of rounds, feedback of responses to panellists between rounds, opportunity for panels to modify
their responses, and anonymity of responses. According to Linstone and Turoff (1978, cited in Mullen 2003), a suitable minimum panel size is seven because accuracy deteriorates rapidly if it is any smaller and improves if it is larger.
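
To make the iterative structure of this process concrete, the sketch below expresses the Delphi round cycle as a short Python fragment. This is an illustration only: the collect_round() helper, the representation of each expert’s ratings as a mapping from indicator to agreement, and the consensus rule (every indicator endorsed by the required share of the panel) are assumptions introduced here, since in this study the corresponding steps were carried out manually by email with the PoE members.

    # Illustrative sketch only: collect_round() stands in for one manual,
    # email-based questionnaire round; the consensus rule is an assumed
    # simplification of how agreement was judged.
    def run_delphi(collect_round, panel_size, max_rounds=4, agreement=1.0):
        """Repeat Delphi rounds, feeding back an anonymous summary, until consensus."""
        summary = {}
        for round_no in range(1, max_rounds + 1):
            # Each expert returns a dict mapping each indicator to True/False
            # (suitable / not suitable), given the previous round's summary.
            responses = collect_round(round_no, summary)
            indicators = {i for ratings in responses for i in ratings}
            summary = {i: sum(r.get(i, False) for r in responses) for i in indicators}
            if summary and all(n / panel_size >= agreement for n in summary.values()):
                break  # consensus reached; no further rounds are needed
        return round_no, summary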

The Delphi technique is a method for structuring a group communication process, and is largely
used in the USA for technological forecasting. The technique is also used in other contexts that
require judgemental information that includes: normative forecasts; determining values and
preferences; simulated and real decision-making; identification of potential measures that might
be taken to explain a given problem and assessment of the proposed measures with regard to
their feasibility, desirability and effectiveness.

The Delphi technique does have a few similarities with ordinary voting procedures. However,
the difference in this technique is that it allows participant feedback and also gives the
opportunity for the participant to modify or refine their views based on their reaction to the
collective views of the group (Linstone & Turoff 2002).

For this study, the technique allowed an anonymous and subjective judgement to be reached on
a collective basis, as time and cost constraints made frequent meetings unfeasible. Anonymity
was important so as to prevent the proceedings being dominated by the quantity or strength of any one personality. A list of ICT-literacy indicators (LI) advocated in the literature was distributed to the PoE members via email in the form of an open-ended questionnaire. The responses were collected by the researcher
as email replies and a summary of the responses was generated. The initial questionnaire that
each PoE member responded to was then redistributed back to its respective owner by the
researcher, accompanied by a summary of the collective PoE responses. The PoE members were
invited to confirm or to modify their previous responses based on other experts’ responses. The
PoE members were also allowed to contradict or agree with other experts’ answers. This
procedure was repeated for a second time, at which point consensus was achieved by the PoE
membership. Thus there was no need for another round of the Delphi technique.

The same PoE members were also employed for the second part of the qualitative study, which
involved validating the task-based questions that were developed based on the previously
agreed upon ICT-literacy indicators (see sub-sections 4.3.1 to 4.3.3 above). The PoE was asked
to validate whether each task represented appropriate ICT-literacy indicators and whether the task would be suitable for evaluating the ICT-literacy of trainee teachers in Malaysia. Again, the Delphi
technique was employed with the PoE membership, and by the end of the second round
consensus was achieved.

4.4.2 Quantitative data analysis

For the quantitative part of this study, the Rasch IRT model was employed. The Rasch IRT
model is assumed to be invariant across different groups within a research population and across
populations (Hambleton & Murphy 1991; Swaminathan 1999). It requires that both the test-
items and participants conform to the Rasch IRT model before claims regarding the presence of
skill or ability can be considered valid. Therefore, under this Rasch IRT model, misfitting responses require a reason for the misfit, and may be excluded from the data set if they fail to
address the expected skill or ability. The Rasch IRT model also provides a way of measuring
the quality of the test-items by confirming their suitability for participants and how well they
measured participants’ abilities (Izard 2005; Wu & Adams 2007).

The Rasch IRT model proposes that a test analysis would only be worthwhile if it were
individualised, with separate parameters for the test-items and participants. The Rasch IRT
model observes the interaction between an individual test-item and an individual participant.
This establishes a transition from population-based classical test theory (CTT) that emphasises
standardisation and randomisation (Van der Linden & Hambleton 1997; Bechger, Maris,
Verstralen & Béguin 2003; Magno 2009).

The Rasch IRT model generally utilises the response pattern. It assumes that participants with low ability have little chance of guessing the correct answer, while participants with high ability will almost certainly choose the correct answer (Nunnally & Bernstein 1994). Central to the Rasch IRT model is the probability principle. A person's response to a particular test-item is never certain; it is always influenced by human error, so a probabilistic approach must be employed. In the Rasch IRT model, probabilities are introduced through consideration of the odds that a person will give a correct response to a test-item. These odds are expressed in logits: the logit is a unit of measurement that places both item difficulty and person ability on the same scale, and is the natural logarithm (loge) of the odds of a correct response. The Rasch dichotomous model equation used in the Quest analysis describes the probability of observing a specific score as (Adams & Khoo 1996):

P(X_{ni} = x) = \frac{\exp\big(x(\theta_n - \delta_i)\big)}{1 + \exp(\theta_n - \delta_i)}, \qquad x \in \{0, 1\}

where X_{ni} is person n's response to item i, \theta_n is the ability of person n, x is the score assigned to one step in item i, and \delta_i is the difficulty of the one step in item i. This relationship is illustrated in Figure 4.2, which depicts the item characteristic curve (ICC). During test-item pilot testing, both the test-items and the persons must conform to the ICC; non-conforming items or persons will be rejected or re-evaluated (Adams & Khoo 1996).
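As a concrete illustration of this relationship (a minimal sketch, not part of the Quest software, using hypothetical ability and difficulty values), the probability of a correct response depends only on the difference between person ability and item difficulty, both expressed in logits:

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability of a correct response under the Rasch dichotomous model.

    Both ability and difficulty are in logits; only their difference matters.
    """
    return math.exp(ability - difficulty) / (1 + math.exp(ability - difficulty))

# Hypothetical values: a person of average ability (0.0 logits) attempting
# an easier item (-1.0 logits) and a harder item (+1.5 logits).
print(round(rasch_probability(0.0, -1.0), 2))  # ~0.73: success is likely
print(round(rasch_probability(0.0, 1.5), 2))   # ~0.18: success is unlikely
```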


[Figure: item characteristic curve, with the probability of a correct response (from 0.0 to 1.0) plotted on the vertical axis against person ability.]

Figure 4.2. Item characteristic curve (ICC)

Each test-item will have its own ICC. The ICC is used to describe two technical properties: 1)
difficulty of the item; and 2) discrimination. Difficulty of the test-item describes where the test-
item functions along the person ability scale, while discrimination describes how well a test-
item can differentiate between persons having abilities below the test-item location and those
having abilities above the test-item location. This property essentially reflects the steepness of
the ICC in its middle section. The steeper the curve, the better the test-item can discriminate
(Van der Linden & Hambleton 1997; Bond & Fox 2007).

In a Rasch dichotomous model format, if a given task is successfully completed, the person will
score one on the test-item. If it is not completed, then the score is zero. No credits are given to an
almost correct or partially completed test-item. This format can be extended to include partial
credit scales (τ). A partial credit model (PCM) is used in a situation when a person’s attempt at
completing a test-item can be grouped into several ordered responses. The PCM represents a
person’s ability as "… a location on a continuum of increasing competence" (Masters 1999, p.
101). It incorporates the possibility of having differing numbers of response opportunities for
varied items on the same test. Consider the possibility of tests in which one or more intermediate
levels of success might exist between complete failure and complete success. Part marks are
awarded for partial success. Each part mark must be awarded in an ordered way, so that each
increasing value represents an increase in the underlying ability being tested (Bond & Fox
2007). This increasing ability can be defined in two ways:

1. Levels of partial understanding: These are the results of an examinee's level of understanding of a test-item. A set of categories for the test-item is built upon the responses given by the examinee (Masters 1999).


2. Multistep problems: Multistep problems are presented in a complex problem that would
require the completion of a number of steps (Masters 1999). Credit is given to the
number of steps that the examinee manages to complete.
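For reference, the general form of the partial credit model, as it is commonly written in the Rasch measurement literature (this equation is supplied here for clarity and is not reproduced from the thesis), extends the dichotomous model above with one step-difficulty parameter per score category:

P(X_{ni} = x) = \frac{\exp\left(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\right)}{\sum_{h=0}^{m_i} \exp\left(\sum_{k=0}^{h} (\theta_n - \delta_{ik})\right)}, \qquad x = 0, 1, \ldots, m_i,

where \delta_{ik} is the difficulty of the k-th step of item i, m_i is the maximum score on item i, and the empty sum for x = 0 is taken to be zero.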

For this thesis, the Quest interactive test analysis system was used because it offers a comprehensive questionnaire testing and analysis environment based on the Rasch IRT model.
The Quest estimate can be used to construct and validate variables based on both dichotomous
and polytomous observations (Adams & Khoo 1996). Table 4.1 lists the output files produced
by Quest:
Table 4.1. Quest output files

• Variable map
• Item fit map
• Case fit map
• Kidmap
• Summary of item estimates
• Summary of case estimates
• Item analysis for observed responses
• Log file

One of the output files that Quest offers is the variable map. This map provides a visual
description of the test-items and participants’ performance. This procedure was employed in
this study to estimate the difficulty levels (performance threshold values) of the test-items, and
to develop a common scale for each data set. The smaller the proportion of correct responses,
the more difficult a test-item is, hence the higher the test item’s scale location. As a result, the
person’s performance (referred to as a case in Quest) and test-item locations are estimated on a
single scale.
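The rough intuition behind these scale locations can be sketched as follows (a simplified illustration with made-up item labels and proportions; Quest itself uses maximum likelihood estimation rather than this direct transformation):

```python
import math

def approximate_difficulty(p_correct: float) -> float:
    """Approximate item difficulty in logits from the proportion of correct responses.

    Smaller proportions correct give higher (more difficult) scale locations.
    Illustrative only: Quest estimates difficulties by maximum likelihood.
    """
    return math.log((1 - p_correct) / p_correct)

# Hypothetical proportion-correct values for three test-items.
for label, p in [("item-A", 0.20), ("item-B", 0.50), ("item-C", 0.80)]:
    print(label, round(approximate_difficulty(p), 2))
# item-A  1.39  (hard: few correct responses, located above 0)
# item-B  0.00  (average difficulty)
# item-C -1.39  (easy: most responses correct, located below 0)
```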

Figure 4.3 below is an example of a Quest variable map. The map shows that the participants’
scores were distributed relatively symmetrically around the scale average value. The average
value of the test-item threshold is set at zero, with more difficult items positioned above the
average test-item threshold and the easier test-items below the zero threshold value. As the test-
items increase in difficulty, they are shown on the variable map relative to their positive logit
value, whilst negative logit values are indicated on the map representing the easier items.
Eleven test-items were located above 0 (the average) and ten test-items were located below 0.

Test-item 29 and test-item 34 were regarded as being particularly difficult. Four participants scored below 0, indicating low ability, with one having a particularly low score below –1.0 logits. The participants' scores were predominantly above 0, demonstrating that they have relatively high ICT-literacy ability.


[Figure: Quest variable map. The figures on the extreme left of the map represent the logit scale on which both test-items and cases (persons) are calibrated. The Xs on the left-hand side of the map represent the distribution of case (person) estimates over the logit scale, relative to the average. The figures on the right-hand side of the map represent test-items plotted according to their difficulty, with harder items towards the top and easier items towards the bottom.]

Figure 4.3. Case (person) and test-item distribution on a single scale

Quest also produces a test-item fit map. As mentioned previously, the Rasch IRT model requires that both the test-items' and the participants' (cases') performance conform to the Rasch IRT model before claims regarding the presence of skill or ability can be considered valid. Test-items or participants (cases) that do not fit the model require further investigation. Figure 4.4 is an example of a test-item fit map.

One of the key things to look for is the infit mean square (INFIT MNSQ) value. The INFIT MNSQ measures the consistency of fit of the participants and test-items. The acceptable range of the mean square statistic for each test-item in this study was taken to be from 0.77 to 1.30 (Adams & Khoo 1996). Values above 1.30 indicate that a test-item does not discriminate well, while values below 0.77 indicate that a test-item provides redundant information.

Figure 4.4. Test-item fit map

The test-item fit map above shows that the INFIT MNSQ values of four test-items were less than 0.77, with one test-item scoring more than 1.33. Further investigation of these five test-items was needed in order to determine whether they should be kept or discarded from the instrument.
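The screening logic described above can be summarised in a short sketch (the INFIT MNSQ values below are made up for illustration and are not the study's results):

```python
# Hypothetical INFIT MNSQ values keyed by test-item label.
infit_mnsq = {"item-A": 0.65, "item-B": 0.92, "item-C": 1.10, "item-D": 1.45}

LOWER, UPPER = 0.77, 1.30  # acceptable range adopted in this study

for item, value in infit_mnsq.items():
    if value < LOWER:
        print(f"{item}: {value} - flagged: may provide redundant information")
    elif value > UPPER:
        print(f"{item}: {value} - flagged: may not discriminate well")
    else:
        print(f"{item}: {value} - acceptable fit")
```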

With the research design and data analysis technique explained, the participants in this study are
described in the following section.

4.5 Participants

There were two categories of participants for this study: 1) the qualitative study participants (the
PoE members); and 2) the quantitative study participants (the Malaysian trainee teachers).

4.5.1 Qualitative study participants – PoE members

There were seven PoE participants for the qualitative study and they were selected based on
their educational and occupational backgrounds. The PoE members were three academics in the
field of educational technology, one officer from the Malaysian Ministry of Education, and three
teachers from the MSS.


4.5.2 Quantitative study participants – Malaysian trainee teachers

One of Malaysia’s public universities, the Sultan Idris Education University (UPSI), was
chosen for the location to conduct the research. This is the only teachers’ university in
Malaysia with the sole purpose of training pre-service teachers. The university was first
established in 1922 as a teachers college and the university was then known as the Sultan
Idris Training College (SITC). It was proposed by the then Deputy Director of Malay
Schools, Sir RO Winstedt, who envisioned a central college to train teachers and widen the
educational scope for Malaysia during that time. In 1987 the SITC was upgraded and renamed
Sultan Idris Teachers Institute (IPSI). New courses were made available leading to a degree that
would be conferred by another university (the Universiti Putra Malaysia). IPSI was upgraded
to a full university bearing its current name on 1 May 1997 in line with the plans by the
Malaysian government to increase the number of graduate teachers in both primary and
secondary schools (UPSI 2010).

For this study, the participants were undergraduate students who were currently enrolled in a
Bachelor of Education degree at UPSI. The university has eight faculties offering both
undergraduate and postgraduate degrees. The students are generally between 20 to 30 years old,
and of mixed ethnicity (Malay, Chinese, Indian, Indigenous and others). Participants from the
same semester across eight different faculties were invited to take part.

Different participants were used for each of the three activities (pilot test-1, pilot test-2 and final
instrument testing) in this third phase of the ICT-literacy TBA instrument development (refer to
Figure 4.1). During the pilot testing-1 session, 16 trainee teachers were randomly chosen and
were willing to participate. Twenty (out of the 50 invited) trainee teachers from the Faculty of
Business and Economics participated in pilot testing-2. For the final instrument testing process,
148 (out of the 382 invited) trainee teachers from the semester-four batch, representing all
faculties in the university, agreed to participate.

4.6 Data Collection

This section of the thesis explains the data collection process involved for all three phases (see
sections 4.3.1 to 4.3.3). This study uses different data collection techniques for each phase. For
Phase-1, there was an in-depth literature analysis conducted, enabling the researcher to identify,
compare and contrast the ICT-literacy indicators currently being used around the world.

For Phase-2, the Delphi technique was used, allowing anonymous judgement of the ICT-literacy
indicators and the validation of the draft TBA instrument by a group of purposely selected experts (PoE). Phase-3 involved implementing the Rasch IRT model through the Quest
interactive test analysis system to validate and test the reliability of each version of the TBA
instrument. This data analysis technique was also used to evaluate trainee teachers’ level of
ICT-literacy. The following sub-sections explain each of these research phases in more detail.

4.6.1 Phase-1: Preliminary review

This phase involved an in-depth investigation of the literature on ICT-literacy (see Chapter-3).
The common indicators for ICT-literacy were identified based on the MSS requirements, current
ICT-literacy assessment instrument, ICT-literacy standards and past literature (Smart School
Project Team 1997; International ICT Literacy Panel 2002; Punie & Cabrera 2005; McNaught
2006; Katz & Macklin 2007; Markauskaite 2007; ANZIIL 2008; Calvani, Cartelli, Fini &
Ranieri 2008; ISTE 2008; ACRL 2009).

4.6.2 Phase-2: Expert judgement on ICT-literacy indicators

This next phase utilised the Delphi technique in order to identify additional potential indicators that might be taken to explain a given ICT-literacy problem, and to assess the proposed indicators with regard to their feasibility, desirability, importance and validity (Linstone & Turoff 2002). While many different voting scales have been utilised for Delphi, four scales, or voting dimensions, seem to represent the minimum information that must be obtained if an adequate evaluation is to take place (Linstone & Turoff 2002). The first two are desirability and feasibility; these two voting dimensions may induce a good deal of discussion among participants and may lead to the generation of new options. Importance and validity (or confidence) are usually used to understand the underlying assumptions or supporting arguments: a person may think an invalid test-item is important (because others believe it to be true), or that a true test-item is rather unimportant.

After the ICT-literacy indicators had been identified, they were emailed by the researcher to the
PoE members for their evaluation. This activity is important as the study calls for a group of
experts who can professionally deliberate on the topic of technology, education and the
Malaysian school system.

As mentioned before, the expert judgement on the ICT-literacy indicators was implemented in two parts (Delphi-1 and Delphi-2) (see Figure 4.1). For Delphi-1, the indicators and explanations of their expected ICT-related skills were sent to each PoE member by email. All members were given two weeks to rate the indicators on a four-point Likert scale (not relevant, fairly relevant, relevant and extremely relevant), and to give their opinions and other suggestions. Each PoE member's feedback was to remain anonymous. After receiving their feedback, a summary of the first round was put together by the researcher and redistributed back
to the PoE, along with a copy of the member’s own feedback. For the second Delphi round,
each PoE member was given one week to review the summary from the first round and to see
what others were saying about the indicators. This time the PoE members were allowed to
change their rating or opinion, or even agree/challenge another panel member’s opinion. It was
planned that after this second round, if there were any issues that required a more detailed
discussion, a third round would occur.

The Delphi-2 process began after the findings from Delphi-1 had been analysed by the researcher. A draft assessment instrument was developed based on the indicators agreed by the PoE. The researcher then applied a test instrument specification matrix to ensure that all the agreed indicators (now treated as the learning domains) were included in the draft TBA instrument (see Table 4.2). The test instrument specification matrix is a useful tool for ensuring that the test-items of the TBA instrument are organised in a continuum from the lowest ability to the most advanced use of ICT tools (McKay 2000). The horizontal axis depicts the instructional objectives, based on Gagne's learned capabilities, while the vertical axis is used for the skill development/learning domains or tasks.


Table 4.2. Example of a test instrument specification matrix

Instructional objectives: ICT-literacy

Declarative knowledge:
• Band-A (Verbal information skill): concrete concept; knows basic terms; knows 'that'.
• Band-B (Intellectual skill): basic rule; discriminates; understands concepts and principles.

Procedural knowledge:
• Band-C (Intellectual skill): higher order rule; problem-solving; applies concepts and principles to new situations.
• Band-D (Cognitive strategy): identify sub-tasks; recognises unstated assumptions.
• Band-E (Cognitive strategy): knowing the 'how'; recall simple prerequisite rules and concepts; integrates learning from different areas into a plan for solving a problem.

Meta-cognitive knowledge:
• Band-F (Meta-cognitive knowledge): strategic or reflective knowledge about how to go about solving problems and cognitive tasks, including contextual and conditional knowledge and knowledge of self.

ICT-literacy indicators (the vertical axis of the matrix, each cross-tabulated against Bands A to F, with row and column totals):
• Evaluate
• Integrate
• Internet navigation & search
• Production and analysis
• Access
• Reflect
• Communicate/collaborate
• Assess
• Create
• Plan/define
• Manage
• Understanding and handling ICT tools

Adapted from McKay (2000)

In Table 4.2, the learning domain is shown as a continuum, beginning with simple concepts at one end and developing into more complex tasks at the other. The matrix also helps to reveal areas that the researcher might not have included (either deliberately or unintentionally) in the assessment instrument. The instructional objectives consist of three categories of specific knowledge (McKay 2000).

The first is defined as declarative knowledge that is divided into two levels of skill:
• verbal information: knowing isolated rules; and
• intellectual skill: knowing how to discriminate between concepts and principles.


For trainee teachers, their ICT-based declarative knowledge can be as simple as the ability to
name suitable computer-based applications to be used to complete a certain task, or having the
understanding of the concepts and principles in using different ICT tools.

The second category is defined as the procedural knowledge and is divided into three levels:
• intellectual skill: higher order rules for problem-solving;
• cognitive strategy: recognising sub-tasks; and
• cognitive strategy: ability to integrate learning across learning domains for
implementing a comprehensive plan of action.

Trainee teachers’ procedural knowledge in ICT tool usage can be observed by demonstrating
that they know how to follow, and the steps needed, to solve a given computer-based task. As
such, their performance on this task reflects whether they understand the required information,
rules and concepts for using each ICT tool. The trainee teachers were able to use different
computer applications and tools effortlessly. They were also able to integrate different
information from different formats (e.g. spreadsheets, pdf documents, images, videos, etc.) and
create new resources for their instructional strategies.

The third instructional objective category is defined as meta-cognitive knowledge:


• meta-cognitive knowledge: strategic or reflective knowledge about how to go
about solving problems or tasks.

Meta-cognitive knowledge involves the trainee teachers' demonstrated ability to understand the computer-based task given, and to work out how to proceed with a task (in a different digital environment) based on their previous knowledge, without having to be told what to do next.
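To illustrate how the specification matrix acted as a coverage check (a hypothetical sketch only; the item labels and assignments are invented and do not reproduce the actual draft TBA instrument), each draft test-item can be tallied against the indicator and band it targets, so that any uncovered cell becomes visible:

```python
from collections import defaultdict

# Hypothetical draft test-items, each tagged with the ICT-literacy indicator
# (learning domain) and the instructional-objective band (A-F) it targets.
draft_items = [
    ("Q1", "Understanding and handling ICT tools", "Band-A"),
    ("Q2", "Navigation and search", "Band-C"),
    ("Q3", "Create", "Band-E"),
    ("Q4", "Reflect", "Band-F"),
]

cells = defaultdict(list)
for item_id, indicator, band in draft_items:
    cells[(indicator, band)].append(item_id)

# Row totals per indicator, mirroring the 'Total' column of Table 4.2.
row_totals = defaultdict(int)
for (indicator, _band), items in cells.items():
    row_totals[indicator] += len(items)

for indicator, total in row_totals.items():
    print(f"{indicator}: {total} test-item(s)")
```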

After the first draft TBA instrument was developed, it was then sent to every PoE member to
evaluate the suitability of the questions to be given to trainee teachers (see sub-sections 4.3.1,
4.3.2 and 4.3.3). The PoE members were also required to give their opinion on whether the
questions did represent what they had expected from the indicators that were previously agreed
upon. Similar to the Delphi-1, a summary of the findings from this round was distributed back to
the PoE members, along with a copy of their own feedback. The PoE members reviewed their
own answers and the comments of other panellists, and returned their new feedback to the
researcher. Once again, a third round of the Delphi technique was planned to be conducted if
there were any unresolved issues at this stage.


The key outcome of these phases of the research was that a draft TBA instrument for
evaluating ICT-literacy for trainee teachers was developed and validated by the PoE members.

4.6.3 Phase-3: Instrument validation and testing

This third research phase involved trainee teachers as the participants and was divided into three processes: pilot testing-1, pilot testing-2 and the final instrument testing (see Figure 4.1).
1. Pilot testing-1: To validate the newly designed ICT-literacy assessment instrument, the pilot data were analysed using Quest to ensure the reliability of the instrument. This is an important process for defining the level of difficulty of each test-item and establishing its accuracy as a measuring device (Bateman & Griffin 2003). Twenty undergraduate trainee teachers from UPSI were invited to participate in pilot testing-1; however, only sixteen trainee teachers were willing to participate. Findings from this process allowed the researcher to identify test-items in the draft TBA instrument and the test-item evaluation checklist that needed to be added, deleted, re-worded or re-arranged.
2. Pilot testing-2: The draft TBA instrument was tested again on 26 different trainee teachers.
Any required final amendment to the assessment instrumentation was performed at this
stage.
3. Final instrument testing: Trainee teachers were given a question booklet with a set of ICT-based tasks to be completed. The tasks involved normal computer-based activities that a teacher in a 'smart school' environment should be able to execute. Some of the tasks were less clear-cut, with no step-by-step instructions telling the trainee teachers what to do next. Instead, they needed to figure out a solution for themselves; for instance, what would be the most suitable computer application to use (plan/define), or what other information they would need before they could accomplish the task (plan/define, access and integrate).

These (instrument testing) sessions were also recorded with a screen capture program
(Screen2exe) that generated visual files of the entire session. These recorded screen capture
files were matched with their corresponding participants based on their research
identification number. Screen2exe is a freeware tool that can capture screen activity and
save it as an .exe file. It can capture mouse movement, clicks and even optional audio
comments from the microphone (WebAttack Inc. 2010). However, only the screen capture
was recorded for this thesis. The researcher had been granted permission by the RMIT
University research ethics committee to use this approach as no identifiable features of the
participants (face, voice) were to be recorded.


4.7 Validity and Reliability

The two most important and fundamental characteristics of any measurement procedure are
reliability and validity (Izard 2005). Measurement experts believe that every measurement
device should possess certain qualities, and reliability and validity are the two most common
(Nunnally & Bernstein 1994; Cohen 2007; Creswell & Plano Clark 2007). Any kind of
assessment must be developed in a way that gives accurate information about the performance
of the individual being evaluated. How validity and reliability were tested for the ICT-literacy TBA instrument is explained in the following sections.

4.7.1 Validity
Validity is the extent to which the researcher can glean meaningful inferences drawn from
scores on a test or assessment that can be justified empirically and theoretically (Callingham
2003; Creswell & Plano Clark 2007). In short, validity can be summarised as being concerned
with what the instrument is measuring and how well it measures. Yet it should be noted that
what is to be evaluated is not the instrument itself, but the ‘use’ of the instrument for a particular
purpose.

Construct validity and content validity can be examined by considering the fit of the data to the
model of both test-items and the participants and comparing the obtained difficulty order of the
test-items with the order anticipated by the researcher (Wright & Masters 1982). Good fit to the
model suggests that the test-items were measuring the same unidimensional construct, thus the
assessment instrument has validity.

It has also been suggested that validity standards for performance-based assessments, in which
participants provide some form of product or performance, should be different and relate more
directly to the specific performance (Messick 1996). Messick (1988) proposed that for
performance measurement assessment, the validity should be indicated through six general
standards for evidence of construct validity (Table 4.3).


Table 4.3. Proposed validity tests for educational and psychological measurement

• Content: shows evidence of content relevance, representativeness and technical quality.
• Substantive: theoretical rationales for the observed consistencies in instrument responses, along with empirical evidence that the theoretical processes are actually engaged by participants in the assessment tasks.
• Structural: appraises the fidelity of the scoring structure to the structure of the construct domain at issue.
• External: convergent and discriminant evidence from multitrait-multimethod comparisons, as well as evidence of criterion relevance and applied utility.
• Generalisability: examines the extent to which score properties and interpretations generalise to and across population groups, settings and tasks, including validity generalisation of test-criterion relationships.
• Consequential: appraises the value implications of score interpretation as a basis for action, as well as the actual and potential consequences of test use, especially in regard to sources of invalidity related to issues of bias, fairness and distributive justice.

Source: Messick (1996)

These six aspects of construct validity are not separate and substitutable validity types, but in
fact are interdependent and complementary forms of evidence (Messick 1996). How each of the
above aspects was compiled in this thesis is discussed below.

1. Content aspect: In this thesis, the content aspect of validity was met through expert consensus in Phase-2 of the research design. Young (2003) claims that content validity is the process of determining whether a model or simulation seems reasonable to individuals who are knowledgeable about the process being studied, and that it is solely a subjective review of the behaviour of the model by domain experts.

The TBA instrument for this thesis addressed a range of ICT-based knowledge and skills,
including the use of basic computer applications, ability to do basic picture editing, skills in
using basic media technology tools and confidence in utilising the internet. It also demanded
the trainee teachers’ general thinking skills, such as planning and carrying out
investigations, interpretation of findings, ability to generalise findings to unfamiliar
situations and also justify their thinking. Thus the ICT-literacy TBA instrument
demonstrates content validity as it drew on a range of ICT-based knowledge and skills, and
also general thinking skills in an appropriate context for these trainee teachers.


2. Substantive aspect: The substantive aspect adds to the content aspect of instrument validity
the need for observed evidence of response consistencies or performance regularities
reflective of domain processes (Messick 1996). The primary aim for this aspect is to ensure
that an authentic assessment is reflected and to ensure that the test-items in the instrument
are actually operative tasks. In this study, the pilot study process provided the empirical
evidence for a substantive aspect.

3. Structural aspect: According to the structural aspect of instrument validity, the rational development of scoring criteria and rubrics is as important as the selection or construction of relevant and authentic assessment tasks (Messick 1996). The test-items in this study were scored either dichotomously (yes/no) or using a partial credit model. The participants needed to demonstrate whether they were able to complete each test-item, as part of the aim of this study was to identify the ICT-based learning domains in which the participants showed weaknesses and strengths.

4. External aspect: The external aspect emphasises two sets of relationships: 1) empirical
consistencies in both convergent and discriminant correlation patterns; and 2) measures of
the main construct and exemplars of different constructs. Fiske (2002) described external
aspects as referring to: firstly, a pattern of relationships between assessment scores and
criterion measures in applied situations; and secondly, the relationships among the
assessment scores.

In this study, the use of the Adams and Khoo (1996) Quest estimate and the Rasch IRT
model helps confirm the external aspect of this instrument validity. Based on the probability
principle, the Rasch IRT model utilises the response pattern, where it can differentiate
between participants with low ability and participants with high ability. Participants with
low ability should have little chance of guessing the correct answer and participants who
have high ability will almost certainly choose the correct answer.

5. Generalisability aspect: Messick (1996) suggested that one of the ways of ensuring the
generalisability of the instrument validity is to develop assessments that represent a mix of
efficient structured exercises broadly tapping multiple aspects of the constructs and open-
ended tasks tapping integral aspects in depth. It depends on the degree of correlation of the
assessed tasks with other tasks representing the construct or aspect of the construct.


For this study, the test instrument specification matrix was applied in order to ensure that each learning domain, based on Gagne's five learned capabilities, was included in the ICT-literacy TBA instrument. The test instrument specification matrix is useful for ensuring that every level of the learning domain was tested. Furthermore, all ICT-literacy indicators were included in the TBA instrument, and the correlation between each task was confirmed using the Quest estimate.

6. Consequential aspect: The consequential aspect of instrument validity looked at how the
intended and unintended consequences of testing informed decisions and our use of the
instrument. For this study, the intended and unintended consequences are shown below
(Table 4.4).

Table 4.4. Intended and unintended consequences of the ICT-literacy TBA instrument

Intended:
• Trainee teachers' ICT weaknesses and strengths were identified; they value the input, and make improvements before the end of their teacher training program.
• Trainee teachers are rewarded for having excellent results (salary, promotion, awards, etc.).
• Trainee teachers are given specific workshops or training by their respective schools, providing support and assistance in certain ICT skills and knowledge areas where they are weak.
• Trainee teachers perceive the test as a vehicle for change.

Unintended:
• The tested areas of the TBA instrument may determine what is addressed in the lecture room of the teacher training program.
• Due to the 'free-style' type of testing of the TBA instrument, trainee teachers might not take the test seriously.
• Since the location of the test is in an open computer lab, the score might not represent each trainee teacher's real ability, as it would be very easy for them to copy others.

4.7.2 Reliability
Reliability, on the other hand, is concerned with the degree of fit between theory, construct and
data (Cohen 2007). It assesses the consistency of a measuring instrument. In terms of research
methodology, reliability is associated with consistency, stability and replicability over time, over
instruments and over groups of respondents, and is concerned with precision and accuracy
(Sekaran 2002; Cohen 2007). A reliable instrument should produce similar data from similar
respondents over time. This means that, for example, if a test and then a re-test were carried out
within an appropriate time span on a research-based survey, then similar results would be
obtained.
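As a small illustration of the test and re-test idea (the scores below are hypothetical and are not drawn from this study), the consistency of the two administrations is commonly summarised with a correlation coefficient:

```python
import statistics  # statistics.correlation requires Python 3.10 or later

# Hypothetical scores for the same respondents on a test and a later re-test.
test_scores = [12, 15, 9, 18, 14, 11]
retest_scores = [13, 14, 10, 17, 15, 12]

r = statistics.correlation(test_scores, retest_scores)  # Pearson's r
print(round(r, 2))  # a value close to 1.0 suggests stable, reliable scores
```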


As mentioned before, Phase-2 of this research involved the Delphi technique (Delphi-1). A list
of identified indicators and their expected skills were put together in the form of open-ended
questionnaires and the PoE members were expected to rate each of the indicators on a four-point
Likert scale. Below is the procedure undertaken to ensure the validity and reliability of the draft
TBA instrument:
• the list of indicators (LI) and the Delphi rounds timetable had been revised by the
researcher’s supervisor;
• there was sufficient planning and preparation to ensure the clarity of the questions, the choice of the right wording, short and precise questions, and a logical sequence;
• the Delphi technique applied requires the researcher to produce a summary of the findings
for each round and re-submit the summary back to the experts, together with their answers
in that round. This allows the experts to review not only their feedback, but also that of other
experts; and
• because the PoE members were anonymous, fears of potential repercussions and
embarrassment were removed and no single individual needed to commit themselves
publicly to a particular view until after the alternatives had been publicly stated.

Next, the draft TBA instrument was subjected to another round of the Delphi technique (Delphi-
2) with the same PoE members. After reaching a consensus on the task-based questions and the
instrument design, the instrument was ready to be pilot tested.

Using the Rasch IRT model, the reliability of estimate showed how well the test-items separated
the participants’ performance into those with a higher ability on the one hand, and a lower ability
on the other. This result can be compared to the researcher’s intentions and to see whether it
confirms the researcher’s expectations concerning the test-items (Wright & Masters 1982).
Examination of model fit could also provide information about how justifiable it was to measure
the underlying construct of each ICT-literacy indicator with the particular set of test-items
(Wilson 1992).

After amendments were made to the draft TBA instrument, a further instrument testing process
took place. As mentioned before, screen capture software was used to capture the screen activity
of every trainee teacher. This was to ensure that the researcher had back-up data in case
something happened to the original data.


4.8 Ethical Issues

In this study, and as mentioned earlier, the data collection involved seven PoE members and
trainee teachers. For this type of data collection, ethical issues such as confidentiality, trust and
informed consent were considered, to avoid any harm to the participants (Creswell 2003). In
fact, it is compulsory that all researchers seek such approval from the university’s Ethics
Committee prior to conducting this type of research.

For Phase-2 and Phase-3 of this study, a letter of consent was given to all participants (i.e. PoE
members and trainee teachers), explaining the purpose of the study (refer to Appendix B). The
participants were assured that the data would be confidential and would not be misused. The
experts and trainee teachers’ names would always remain anonymous. Where necessary, they
were referred to by codes. Participation was voluntary; they were advised that they had the right
to withdraw their participation at any time. This study only involved the use of identifiable or
potentially identifiable information (research ID code). The identifiable information will never
be disclosed in any way. This study did not collect sensitive data such as intellectual property or
information protected by copyright.

Concerning the experts’ judgement on the ICT-literacy development indicator phases, before
the Delphi rounds began, the researcher personally met with each expert separately. The reason
for these meetings was to develop rapport and thereby to answer any questions that they may
have had regarding what was expected of them. The researcher believed that in order to get
honest information from the experts, the experts must first trust the researcher (Dundon & Ryan
2010). The whole data collection process was conducted via emails with the researcher.
Therefore, it was explained to each person that by agreeing to participate, they would also be
giving consent to the trans-border data transfer. Reasonable steps were taken to ensure that the
information transferred was not to be held, used or disclosed inconsistently with the university’s
ethics rules and the Information Privacy Principles as stated in the Information Privacy Act
2000.

Likewise, in the final instrument testing phase with the trainee teachers (Phase-3 sub-section
4.3.3), with permission from the classroom lecturer, the researcher approached each trainee and
explained the aims of the study and briefly described the task-based tests that each participant
would be requested to complete. A time schedule was then distributed in the classroom for
participants to choose which time slot they were free to participate in. It was also made known that walk-ins were welcome. Consequently, the details pertaining to the study were
explained to each walk-in participant, and the University’s consent letters were given to them
prior to their commencement of the ICT-literacy assessment.

As mentioned before, during the trainee teacher assessment, Screen2exe program was used to
capture each participant’s computer screen activity. This program recorded screen activity only;
no audio or video recording was conducted during this research study. Each recording was
saved as the research ID code given to each participant. The draft TBA instrument involved
questions that required the participant to send an email to the researcher. This activity was part
of the assessment for ICT skills in using electronic mail. The researcher was not required to
respond to any part of this email. Participants were asked to use their university email provider
that employs their student ID as their username. This means that the researcher did not have
access to information that could link their student ID to the actual participant.

4.9 Chapter-4 Summary

This chapter explained the research design and methodological approach/techniques applied in
this study. The research design was divided into three phases: 1) preliminary review; 2) expert
judgement on ICT-literacy indicators; and 3) instrument validation and testing (see Figure 4.1).
Data collection and data analysis techniques for each of the three phases were discussed. The
reliability and validity aspect of the ICT-literacy TBA instrument development were also
highlighted, and concerns with regard to ethical issues were explained. The next chapter will
further discuss Phase-2 in more detail.

Chapter-5
Data Analysis & Findings
Phase-2: Expert judgement on ICT-literacy indicators

5.1 Overview

The previous chapter established the mixed methods research design, which in turn informed the
choice of qualitative/quantitative methodologies required to conduct the research. This chapter
is dedicated to analysing and discussing the results arising from the PoE data (Phase-2). The
outcome from this phase facilitates the development of the ICT-literacy TBA instrument in
Phase-3 (Chapter-6).

Figure 5.1. Phase-2 of the research design


This chapter is organised as follows:


• Selecting members for PoE;
• Delphi-1 processes;
• Delphi-2 processes; and
• Chapter-5 summary.

5.2 PoE Members Data

As shown before, the qualitative/quantitative data collection processes for this research were
divided into three phases (see section 4.6), where it was shown that the Delphi technique
commenced in Phase-2. This well-known qualitative technique is currently used in the USA for
technological forecasting (Linstone & Turoff 2002). It is also considered effective in other
contexts that require judgemental information, including: normative forecasts; determining
values and preferences; simulated and real decision-making; identification of potential measures
that might be taken to explain a given problem; and assessing instrument measures concerning
their feasibility, desirability and effectiveness (Zikmund, Babin, Carr & Griffin 2010).

The two Delphi interactions were conducted in Phase-2 (Delphi-1 and Delphi-2), with each
iteration involving two rounds (Figure 5.1). The Delphi-1 interaction involved the PoE members
evaluating the list of ICT-literacy indicators that were identified from existing research on ICT-
literacy and the MSS computer skills and ICT knowledge requirements (see Chapter-3, sections
3.2 and 3.3). Then, in Delphi-2, the same PoE members evaluated the suitability of the series of
tasks that were developed by the researcher (based on the PoE agreed indicators in Delphi-1).
Altogether, Phase-2 comprised nine steps: the first research activity, shown in Figure 4.1 as Step-1, was the selection of suitable PoE members, followed by four steps in Delphi-1 and another four steps in Delphi-2.

5.2.1 Step-1: Selecting invited members for the PoE

In order to obtain a more accurate view and understanding of the computer skills and ICT
knowledge requirements of the MSS, the researcher decided that the PoE members should be
represented by: teachers from the current MSS; academics from the field of educational
technology; and consultants from the Multimedia Development Corporation (MDeC) of
Malaysia. The panel selection was important as the study called for a group of experts who can
professionally deliberate on the topics of educational technology and the Malaysian school
system. Invitations to participate in this study were distributed, along with a brief description of
the research, and a brief description of the data collection process. There was a 35% acceptance rate: out of the 20 invitations sent, seven experts agreed to participate (see Chapter-4, section 4.8).

5.2.2 Step-2 to Step-5: Delphi-1

Earlier, twenty-four ICT-literacy indicators were identified (see Chapter 3, sections 3.2 and 3.3)
(Smart School Project Team 1997; McNaught 2006; Katz & Macklin 2007; Markauskaite 2007;
ETS.org 2008; ACRL 2009). This list of ICT-literacy indicators was revised in order to avoid
redundancy (Table 5.1).
Table 5.1. List of identified ICT-literacy indicators

Identified ICT-literacy indicators


1. Understand the main computer applications
2. Ability to search, collect and evaluate electronic information
3. Ability to use appropriate aids to produce, present or understand complex
information
4. Ability to access and search a website, and use Internet-based services
5. Ability to use ICT to support critical thinking, creativity and innovation
in different contexts
6. Information and media literacy
7. High productivity
8. Life-long learning
9. Life skills
10. Plan/define
11. Access
12. Integrate
13. Evaluate
14. Manage
15. Create
16. Communicate/collaborate
17. Reflect
18. Ability to explain ICT-related hardware
19. Handling of ICT hardware
20. Ability to identify ICT hardware/software problems
21. Ability to use software for teaching and learning
22. Ability to use word processing and presentation software
23. Ability to use the Internet for finding information/material
24. Ability to use the Internet for communication.

Following a review of the above-mentioned list, some of the indicators that were considered
either redundant or not ICT-based were deleted; they involved:
• understanding the main computer applications;
• ability to access and search a website, and use Internet-based services;
• information and media literacy;
• high productivity;
• life-long learning;
• life skills;
• ability to use software for teaching and learning;
• ability to use word processing and presentation software;
• ability to use the Internet for finding information/material; and


• ability to use the Internet for communication.

Consequently, the reviewed ICT-literacy indicators were reduced from 24 (Table 5.1) to 14
(Table 5.2).

Table 5.2. List of reviewed ICT-literacy indicators

Identified ICT-literacy indicators: Reviewed


1. Ability to search, collect and evaluate electronic information
2. Ability to use appropriate aids to produce, present or understand complex
information
3. Ability to use ICT to support critical thinking, creativity and innovation
in a different context
4. Plan/define
5. Access
6. Integrate
7. Evaluate
8. Manage
9. Create
10. Communicate/collaborate
11. Reflect
12. Ability to explain ICT-related hardware
13. Handling of ICT hardware
14. Ability to identify ICT hardware/software problems.

Further refinement was made to the reviewed ICT-literacy indicators’ listing. The ability to
search, collect and evaluate electronic information was changed to navigation and search. The
second indicator (Table 5.2) was changed to production and analysis. Moreover, reflecting
upon the PoE response, it was decided that the third indicator (ability to use ICT to support
critical thinking, creativity and innovation in a different context) should not appear as one
indicator on its own, as the critical thinking, creativity and innovation skills were to be included
throughout the proposed ICT-literacy TBA instrument. Similarly, the last three indicators
(indicator 12, 13 and 14 of Table 5.2) were changed to understanding and handling ICT tools.
Thus the final refined ICT-literacy indicators were as listed in Table 5.3 below.

Table 5.3. List of refined ICT-literacy indicators

Identified ICT-literacy indicators: Refined


1. Navigation and search
2. Production and analysis
3. Plan/define
4. Access
5. Integrate
6. Evaluate
7. Manage
8. Create
9. Communicate/collaborate
10. Reflect
11. Understanding and handling ICT tools.


A description of the expected activities involved with each of the ICT-literacy indicators was compiled and adapted from the International ICT Literacy Panel (2002), Markauskaite (2007), ANZIIL (2008) and ACRL (2009), as shown below in Table 5.4.
Table 5.4. List of ICT-literacy indicators and their activities

1. Navigation and search
   1. When trainee teachers are expected to find screen-based information from the Internet, they are able to:
      • select and use appropriate search engines;
      • use the appropriate searching keywords;
      • construct complex queries; and
      • use advanced search features.
   2. The trainee teachers are also able to upload and download digital information, and understand the concept and use of the Bookmark function.

2. Production and analysis
   1. Apart from the basic ICT tools, trainee teachers are also able to use advanced ICT tools (e.g. advanced features of word processing, spreadsheet, database and presentation software) when the situation calls for it.
   2. The trainee teachers understand the different features of each type of software and the type of document each software application will produce.

3. Plan/define
   1. When given a problem or task that involves ICT, trainee teachers are able to determine the nature and extent of the information needed to solve the problem.
   2. When the problem involves a cognitive task, trainee teachers are able to plan a solution, e.g. identify key concepts of the problem and develop potential strategies for a solution without difficulty.

4. Access
   1. In a situation where the trainee teachers have to collect and/or retrieve digital information, they are able to:
      • obtain the required information from various digital media and sources; and
      • independently select the appropriate software and ICT tools that suit the required needs.

5. Integrate
   1. In a situation where trainee teachers manage to gather several bits of information from different digital media sources and computer applications, they are able to interpret each of them effortlessly. This means that, by using the appropriate digital tools, trainee teachers are able to synthesise, summarise, compare, and contrast the various bits of information from multiple sources.

6. Evaluate
   1. With screen-based information, trainee teachers are able to judge and evaluate the degree to which digital information satisfies the needs of a given task, which includes determining:
      • the authority of the source;
      • bias;
      • timeliness; and
      • relevance.

7. Manage
   1. When asked to organise, classify and store information in a computer, trainee teachers are able to use suitable digital tools that can be applied to an existing classification information scheme to store information, and its source.

8. Create
   1. When given an ICT-related problem or task, trainee teachers are able to apply new information to construct new concepts and create new understandings.
   2. Trainee teachers are able to adapt, apply, design, or construct information in digital environments, which include:
      • graphics;
      • documents;
      • presentations; and
      • web pages.
   Using their skills with ICT tools, trainee teachers are able to design suitable teaching and learning tools with cognitively stimulating activities.

9. Communicate/collaborate
   1. Trainee teachers are able to collaborate and communicate with various people in a variety of contexts and also work in a team.
   2. In their teaching, trainee teachers easily adapt and use various learning contexts, such as through discussion forums, appropriate chat rooms and e-groups.
   3. Disseminating information relevant to a particular audience in an effective digital format will not be a strenuous task for the trainee teachers.

10. Reflect
   1. When using digital sources, trainee teachers are able to adhere to copyright rules and manage to properly cite and give due credit to the author of the source.
   2. Having produced the final digital product, trainee teachers are able to critically judge and reflect on:
      • the outcome; and
      • the problem-solving strategies employed in the process.

11. Understanding and handling ICT tools
   1. In a situation where trainee teachers are required to:
      • operate a computer;
      • use emails;
      • manage files;
      • use basic teaching and learning computer-based modules; and
      • use basic word processing applications;
   they can utilise them without difficulty.

For Delphi-1, each PoE member was given a questionnaire with the ICT-literacy indicator listing. The
PoE members were asked to comment on each of the indicators and to recommend whether it
was relevant to ICT-literacy, the MSS ICT environment, and Malaysian trainee teachers. Each
PoE member operated in an independent and anonymous manner. The expected ICT-literacy
skills and the appropriate context of use for each of the indicators were explained briefly in the
questionnaire.

Each PoE member was asked to:


• suggest the level of relevance of each indicator to trainee teachers in Malaysia on a
scale of ‘0: not relevant; 1: fairly relevant; 2: relevant; 3: extremely relevant’;
• provide comments or suggestions for each indicator;
• suggest an appropriate measurement of quality; and
• suggest other indicator(s) (if appropriate).

Based on the Delphi-1 interaction, the PoE members scored all indicators as either relevant or
extremely relevant, with mean scores between 2.50 and 3.00 (Table 5.5).

Table 5.5. Mean score for relevance of indicators


No. Indicators Mean
1 Understanding and handling ICT tools 3.00
2 Plan/define 2.75
3 Access 2.75
4 Manage 2.75
5 Create 2.75
6 Communicate/collaborate 2.75
7 Production and analysis 2.75
8 Navigation and search 2.75
9 Integrate 2.50
10 Evaluate 2.50
11 Reflect 2.50
* score 0 = not relevant; 1 = fairly relevant; 2 = relevant; 3 = extremely relevant

The highest mean score was for understanding and handling ICT tools. Seven indicators scored
2.75 mean: plan/define, access, manage, create, communicate/collaborate, production and
analysis and navigation and search. Integrate, evaluate and reflect scored the lowest with
2.50. However, as all indicators scored between relevant and extremely relevant, all indicators
were included in the draft TBA instrument.
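To make the scoring arithmetic explicit, the sketch below shows one way the per-indicator relevance means in Table 5.5 could be computed. The individual ratings shown are hypothetical and are chosen only so that the means match the reported values, since the expert-by-expert scores are not listed here.

    # Hypothetical PoE ratings (0 = not relevant ... 3 = extremely relevant).
    # Values are illustrative only; they are chosen so the means match Table 5.5.
    ratings = {
        "Understanding and handling ICT tools": [3, 3, 3, 3, 3, 3, 3, 3],
        "Plan/define":                          [3, 3, 3, 3, 2, 3, 2, 3],
        "Integrate":                            [3, 2, 2, 3, 2, 3, 2, 3],
    }

    means = {indicator: sum(r) / len(r) for indicator, r in ratings.items()}
    for indicator, mean in sorted(means.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{indicator}: {mean:.2f}")   # 3.00, 2.75 and 2.50 respectively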

One indicator obtained a perfect score with a mean of 3.00 (extremely relevant). The
understanding and handling ICT tools indicator was expected to score well. Expert-1 believed
that this skill was a must for trainee teachers if they were to be acknowledged as ICT-literate
teachers. Aside from personal purposes, Expert-3 stressed how having basic ICT skills and the
ability to use ICT tools is becoming the norm for teachers and students in ‘smart schools’, where
ICT tools are used in their everyday tasks. Expert-6 believed that this skill was very basic.
Not only trainee teachers, but all ICT users should acquire these basic skills in order to
progress further, and Expert-2 concurred with this statement. Expert-5 added, "There are
other knowledge[s] and skill[s] that needed to be mastered so that it can be optimally utilised".


The plan/define indicator involved the ability of the trainee teachers to determine the nature and
extent of the information needed to solve a given situation that involved ICT. The trainee
teachers were expected to be able to identify key concepts of the problem and develop potential
strategies for a solution. For this indicator, the PoE members rated a mean score of 2.75. Expert-
1 agreed that this skill was relevant and expected that it could test not only trainee teachers’
computer skills, but also their ICT knowledge. Expert-3 stressed that planning was important in
order to carry out a class lesson successfully. According to Expert-3, "proper planning need [sic]
relevant information and every component of the lesson needs to be identified and defined
clearly". This expertise was necessary especially when it involved computer skills and ICT
knowledge, as trainee teachers need to know how to correctly identify and use the specific ICT
tools to solve each particular problem. Agreeing with Expert-3’s opinion, Expert-5 argued that
planning encouraged the trainee teachers to be aware of the task given to them. They would then
be able to produce relevant and appropriate solutions.

Access was another ICT-literacy indicator with a mean score of 2.75. This indicator implied that
in a situation where trainee teachers have to collect and/or retrieve digital information, they are
able to obtain the required information from various digital media and sources. They are also
expected to be able to independently utilise the appropriate software and ICT tools that suit the
required needs. Expert-1 strongly believed that this is one of the important skills today, which is
known as life-long learning. Expert-1 argued, "Teachers have to be creative enough and
independent in gathering information related to teaching and learning from ICT-based media".
Expert-2, Expert-6, and Expert-7 all agreed that this skill was required. In addition, Expert-3
stressed that digital information is becoming the major source of reference in schools today.
Trainee teachers need to be able to access relevant digital information in various formats to suit
the needs of the learning situation.

Next, trainee teachers were expected to be able to organise, classify and store information and
its sources in a computer using an existing classification information scheme. The manage
indicator was scored 2.75 (mean) by the PoE members. Almost all the experts agreed that this
indicator was relevant for assessment as part of the ICT skills that trainee teachers in Malaysia
should have. Expert-2 stated that it was also important for the trainee teachers to be able to
appropriately classify their teaching materials as general or confidential. Expert-3 further
explained that schools in Malaysia are currently equipped with content management systems for
storing and managing digital content. Thus trainee teachers must be able to store digital
information in the required format and size when needed. On the other hand, Expert-5 believed
that this skill is merely a basic form of technical know-how, so not much emphasis is needed on
this skill.

The instructional strategy behind the create indicator assumed that when given an ICT-related
problem or task, trainee teachers were able to apply new information to construct new concepts
and create new understandings. They were able to adapt, apply, design or construct information
in digital environments, which included graphics, documents, presentations and web pages.
They were also able to design suitable teaching and learning tools with cognitively stimulating
activities. Scored 2.75 by the PoE members, this indicator proved to be very relevant to
Malaysian trainee teachers. Feedback from the PoE members included:

This is extremely relevant because those who are ICT experts are not only expert in using
ICT tools but also capable to innovate new ICT-based approaches or solutions or teaching
and learning aids. (Expert-1)

Using the ICT tools trainee teachers can deliver their material and most of all using their
creativity in order to make certain that the material will be well understood and interesting
as to maintain a conducive teaching and learning environment. (Expert-2)

Trainee teachers must be able to produce learning materials that facilitate the learning
process. They must be able to use ICT tools to prepare learning activities that would attract
and keep the pupils interested in the lesson. They have to be creative to produce digitally
equivalent flash cards, storyboards, flannel boards, 3-D models and other teaching aids to
stimulate the young minds to think and learn. (Expert-3)

Sufficient training and input should be provided by instructors in order to enhance this
component and it should cater to different disciplines in the classroom. (Expert-5)

Trainee teachers should [sic] able to do this. (Expert-6)

Should include all the multimedia features – animation, sound, music, interactivity,
narrativity [sic], etc. (Expert-7)

The next indicator was communicate/collaborate. For this indicator, trainee teachers must show
that they had the skills and knowledge in using ICT tools to communicate and collaborate with
various people in a variety of contexts. In their teaching, they could effortlessly adapt and use
various learning contexts such as discussion forums, appropriate chat rooms and e-groups. The
trainee teachers also knew how to disseminate information relevant to a particular audience in
an effective digital format. Expert-2, Expert-3, Expert-6 and Expert-7 all agreed that trainee
teachers must be able to use all the available ICT tools to share up-to-date information with their
colleagues and pupils. They must be able to use ICT tools to collaborate and work in a team
without being restricted by time and physical constraints to complete tasks assigned to them.
Trainee teachers must be comfortable with the use of emails, discussion forums and other e-
social platforms such as: podcasting; tweeting; instant messaging; or blogging, to help them in
their tasks.


Expert-1 and Expert-5, however, were a bit sceptical. Expert-1 argued that communicate and
collaborate were not directly related to ICT-literacy and that a trainee teacher may be good in
communicating and collaborating but have poor ICT-literacy skills, or vice versa. Expert-1
claimed that most ICT experts were good collaborators in the context of online teamwork and
communication, but not in face-to-face communication. Meanwhile, Expert-5 expressed more
concern on the ethical side of digital communication and collaboration, feeling that ethics should
be highlighted in the new assessment instrument.

The PoE agreed that the trainee teachers need skills and knowledge in production and analysis.
Apart from the basic ICT tools, trainee teachers must be able to use advanced ICT tools such as
advanced features of word processing, spreadsheet, database and also presentation software.
Further, they should be expected to understand the different features of the software and the type
of document each software application will produce. Expert-1 agreed that this skill is an
absolute must for trainee teachers. They must understand the basic concepts of certain important ICT
tools and know how to manipulate them in appropriate tasks (Expert-2). Furthermore, the trainee teachers
should already have these skills if they have explored such tools before (Expert-6). Moreover,
students in ‘smart schools’ are constantly exposed to new ICT tools, thus the trainee teachers
must be able to keep up (Expert-3). Expert-5 suggested that this skill could be proposed as a part
of teachers’ continuous professional development (CPD) program.

The last indicator that scored 2.75 mean was navigation and search. For this indicator, in a
situation where trainee teachers were expected to find information from the Internet, they were
able to select and use appropriate search engines, use appropriate searching keywords, construct
complex queries and also use advanced search features. The trainee teachers were also expected
to be able to upload and download digital information, and to understand the concept and use of
the ‘bookmark’ function in Internet browsers. All the experts agreed on the significant role of
this skill for trainee teachers. However, Expert-1 was concerned that some teachers might
consider that ICT-literacy and Internet literacy were two different skills. Expert-5 felt that
ongoing training should be provided to the trainee teachers and in-service teachers on the latest
strategies and methods for Internet navigation and search.

The final three ICT-literacy indicators scored 2.50 mean, covering the skills to integrate,
evaluate and reflect. Skills to integrate apply in a situation where trainee teachers manage to
gather several bits of information from different digital media, sources and computer
applications, and they are able to interpret each of them effortlessly using the appropriate digital
tools. They are also able to synthesise, summarise, compare, and contrast the various bits of
information from those multiple sources. To prepare for their instructional sessions, apart from
the textbooks, the trainee teachers might use the Internet for additional information. This
additional screen-based information might be in the form of images, videos, spreadsheets, PDF
documents, Word documents or HTML pages. The trainee teachers must know how to take
information from these different digital formats and create suitable instructional strategies for
their classrooms. Expert-1 was doubtful of trainee teachers’ ability to use ICT tools to
synthesise, summarise, compare and contrast. Expert-1 believed that they might have those
skills but have poor ICT-literacy to be able to use ICT tools to accomplish the task, or vice
versa. Nevertheless, other experts indicated that by having this skill, trainee teachers would
have the ability to differentiate appropriate ICT tools for a given task and make appropriate
modifications, making the information suitable for the targeted audience.

Regarding the evaluate indicator, trainee teachers were expected to be able to judge and
evaluate the degree to which digital information satisfies the needs of a given task, which
includes determining the authority of the source, bias, timeliness and relevance (Meriam Library
CSU Chico 2010; SDSU Library & Information Access 2011). All the experts believed that this
indicator was important. The reasons given included: to ascertain that the information that
trainee teachers have was suitable for their students’ age/level; to have the skills to sift through
the plethora of digital information and identify the most authoritative; and to be able to
differentiate between facts and half-truths.

The final indicator was reflect. Trainee teachers should be able to adhere to copyright rules and
manage to properly cite and give due credit to the author of the source. Having produced the
final digital product, trainee teachers should also be able to critically judge and reflect on the
outcome and problem-solving strategies employed in the process. With the exception of Expert-
3, other experts felt that trainee teachers might not be aware that the copyright rules also applied
to the digital world. By contrast, Expert-3 felt that by assessing this skill it would make the
trainee teachers aware of the need to acknowledge and respect material produced by other
people. Expert-3 also thought that the teaching and learning reflection exercise that is currently
employed by teachers in schools is a good exercise for the trainee teachers to critically judge or
reflect in regard to a digital product produced. Though the experts’ opinion for this indicator
seemed uncertain, the mean score of 2.50 was still considered acceptable. Consequently, this
indicator was included for the next Delphi phase (Delphi-2).

Expert-3 proposed that another ICT-literacy indicator should be added. Unlike previous
research studies, the PoE members agreed that the ability to use ICT tools to assess must be
included as one of the important indicators for ICT-literacy. Previous research has not included
the ability to assess student learning as one of the computer skills for ICT-literacy (see International ICT
Literacy Panel 2002; Wong 2002; Katz & Macklin 2007; Markauskaite 2007). This omission
possibly stems from the fact that none of the instruments were developed specifically for trainee
teachers. In the second round of Delphi-1, the assess indicator received a mean score of 2.63.
According to Expert-1 and Expert-3:

this skill is relevant because those who are expert in ICT may be good in using ICT tools
for assessment purposes rather than teaching and learning purposes. (Expert-1); and

schools are being equipped with on-line based assessment systems. Trainee teachers must
be able to use these tools to assess student learning in schools. (Expert-3).

Aside from substantiating relevant ICT-literacy indicators for trainee teachers, the PoE was also
required to suggest an appropriate measurement of quality for each of the indicators. Almost
all of the experts agreed that a task-based assessment would be more suitable. For example, for the
navigation and search indicator, Expert-1 suggested that the trainee teachers could be asked to
perform certain tasks using any browser. Expert-7 recommended to, "Ask them to show this skill
if the researcher has time to participate or observe their actual activities example [sic] in
classroom". Another example was for the integrate indicator, the experts proposed:

give several resources on the same ICT concepts/terms and ask the teachers to come out
with their own definition of the term by referring to the given resources. (Expert-1);

give them a task to gather info from all the sources that are available and present it [sic]
in the most suitable manner using a proper computer application. (Expert-2); and

ask the trainee teachers to prepare a slide presentation with charts, tables, pictures, sound
or movie clips to help explain a lesson concept. (Expert-3).

For the handling and utilising ICT tools indicator, the experts suggested:

don’t just ask them whether they have used the tools before. Give problems related to using
email, scanner, printer … for example. (Expert-1);

trainee teachers are to be given multiple tasks from how to operate a computer, email
usage, etc., and how they utilise them without difficulty. (Expert-2); and

they have to show you in real situations. Relevant criteria could be a guide. (Expert-7).

The findings also show that when designing an instrument to test skills in handling and
utilising ICT tools, the tasks must not be limited to computer applications; they must also
include other ICT devices, such as: digital camera; digital video; scanner; printer and digital
projector. These ICT devices are among the ICT-based teaching aids currently provided by the
Malaysian government to every school in Malaysia. Also, instead of simply telling the trainee
teachers what to do and what tools or computer applications to use (see Wong 2002), the TBA
instrument provides the trainee teachers with an authentic educational ICT-related task that
allows them to perform it with whatever tools or computer applications they think are
suitable. This way, the task tests not only their declarative and procedural knowledge, but also
their meta-cognitive knowledge (see section 4.6.2).

Different levels of knowledge dimensions were tested in the ICT-literacy TBA instrument:
declarative knowledge (verbal information skills and intellectual skills); procedural knowledge
(intellectual skills and cognitive strategy); and meta-cognitive knowledge (see Gagne 2000;
McKay 2000; Anderson et al. 2001; Krathwohl 2002). Declarative knowledge includes facts,
terminology, or elements that one must know or be familiar with in order to understand or solve
a problem. Procedural knowledge entails the additional knowledge that one has, which may help
to do something specific in a discipline, subject or area of study; one is able to integrate
knowledge in a new situation, recognise unstated assumptions and know the ‘how’. And finally,
meta-cognitive knowledge describes having a strategic or reflective knowledge about how to go
about solving problems, or the ability to ‘think about thinking’.

These preliminary findings also verified the research expectation for the need to develop a new
ICT-literacy assessment instrument conforming to the needs of trainee teachers and also to
utilise a task-based assessment method. Previously, many research studies have used self-
assessment (or self-efficacy) to evaluate performance in using computer or ICT tools (see, for
example, Wong 2002; Markauskaite 2007). In 1989, guided by Bandura’s self-efficacy theory
and Schunck’s model of classroom learning, a computer self-efficacy scale (CSE) was developed
by Murphy, Coover, and Owen (1989) to measure capability regarding specific computer-related
knowledge and skills. They argued that self-efficacy could be reliably measured and used to
assess a combination of effect, cognition and performance. Nonetheless, it was suggested that
when assessing skills and cognitive ability, people are inclined to underrate or overrate
themselves (Boud & Falchikov 1989; Ballantine, McCourt Larres & Oyelere 2007). This type of
self-assessment outcome is more apparent between high achievers and low achievers. High
achievers tend to underrate themselves and low achievers overrate their skills.

5.2.3 Delphi-1 conclusions

Eleven ICT-literacy indicators had previously been identified from earlier ICT-literacy studies
(Phase-1). Another ICT-literacy indicator (the assess indicator) was suggested by one of the PoE members
during round-1 of Delphi-1. The twelve ICT-literacy indicators were then presented again to the
chosen PoE to be evaluated in round-2 of the Delphi-1 interaction. The PoE
assessed each indicator based on its suitability to be used in an MSS environment and on
Malaysian trainee teachers. Based on the findings for Delphi-1, all ICT-literacy indicators were
considered suitable. All indicators received mean scores between 2.50 and 3.00, that is, between
relevant and extremely relevant (see Table 5.5). The lowest mean score was 2.50, for the integrate,
evaluate and reflect indicators, while the highest mean score was 3.00, for the understanding and
handling ICT tools indicator (see the scoring scheme for Delphi-1 below Table 5.5).

The next step was to use these findings to ensure that all indicators (1 to 12) were incorporated
in the draft TBA instrument. In doing so, the researcher adhered to each indicator's scoring
weight. For example, the understanding and handling ICT tools indicator carried more scoring
weight than the other indicators, so the number of task items for this indicator was greater than
for the other indicators, whereas the integrate, evaluate and reflect indicators received a lower
weighting, hence the task items for these indicators were fewer in number. As a way of
ensuring that this was reflected in the draft TBA instrument, the test instrument specification
matrix was used (see Table 5.6). As explained before in Chapter-4 (section 4.6.2), this matrix is
an instructional design tool, which placed learning tasks and instructional objectives in a skill
development matrix. It was adapted from McKay (2000), who utilised this type of matrix as the
test development blueprint for her test instrumentation.


Table 5.6. Test instrument specification matrix – draft TBA instrument

Instructional objectives: ICT-literacy. The matrix columns are six skill bands:
• Band-A (declarative: verbal information skill): concrete concept; knows basic terms; knows ‘that’.
• Band-B (declarative: intellectual skill): basic rule; understands concepts and principles.
• Band-C (procedural: intellectual skill): higher-order rule; problem-solving; applies concepts and principles to new situations.
• Band-D (procedural: cognitive strategy): identifies sub-tasks; discriminates; recognises unstated assumptions; integrates learning from different areas into a plan for solving a problem.
• Band-E (procedural: cognitive strategy): knowing the ‘how’; recalls simple prerequisite rules and concepts.
• Band-F (meta-cognitive knowledge): strategic or reflective knowledge about how to go about solving problems and cognitive tasks, including contextual and conditional knowledge and knowledge of self.

Test-items allocated to each ICT-literacy indicator (item totals in brackets):
• Evaluate: 11, 12 (2)
• Integrate: 14 (1)
• Internet navigation & search: 15, 13.1, 13.2 (3)
• Production and analysis: 7, 20 (2)
• Access: 8, 10, 17 (3)
• Reflect: 18 (1)
• Communicate/collaborate: 1.1, 1.2, 1.3 (3)
• Assess: 6.1, 6.2 (2)
• Create: 16.1, 16.2 (2)
• Plan/define: 9 (1)
• Manage: 19 (1)
• Understanding and handling ICT tools: 2, 3, 4, 5.1, 21.1, 5.2, 21.2 (7)

Test-items per band: Band-A 1; Band-B 7; Band-C 5; Band-D 8; Band-E 4; Band-F 3 (28 test-items in total)
Adapted from McKay (2000)

Table 5.6 shows the distribution of tasks in the draft TBA instrument across the ICT-literacy
indicators (see Table 5.5). Twenty-eight ICT-based tasks were proposed for the draft TBA
instrument, with the understanding and handling ICT tools indicator having the most tasks (seven
tasks). Other ICT-literacy indicators had two or three tasks, while the integrate, manage,
plan/define and reflect indicators had only one task each.
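As a simple check on the blueprint, the item-to-indicator allocation in Table 5.6 can be represented in a small data structure and tallied. This is only an illustrative sketch of how the distribution could be verified, not part of the instrument itself.

    # Test-items allocated to each ICT-literacy indicator, as listed in Table 5.6.
    blueprint = {
        "Evaluate": ["11", "12"],
        "Integrate": ["14"],
        "Internet navigation & search": ["15", "13.1", "13.2"],
        "Production and analysis": ["7", "20"],
        "Access": ["8", "10", "17"],
        "Reflect": ["18"],
        "Communicate/collaborate": ["1.1", "1.2", "1.3"],
        "Assess": ["6.1", "6.2"],
        "Create": ["16.1", "16.2"],
        "Plan/define": ["9"],
        "Manage": ["19"],
        "Understanding and handling ICT tools": ["2", "3", "4", "5.1", "21.1", "5.2", "21.2"],
    }

    for indicator, items in blueprint.items():
        print(f"{indicator}: {len(items)} item(s)")
    print("Total test-items:", sum(len(items) for items in blueprint.values()))  # 28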

As mentioned previously in this chapter the tasks in the draft TBA instrument also examined the
trainee teachers’ ability across three instructional objectives: declarative knowledge, procedural
knowledge and meta-cognitive knowledge (Gagne 2000; McKay 2000; Anderson et al. 2001). It
was anticipated that this would assist the researcher in identifying the trainee teachers’ level of
knowledge and instructional ability in using ICT.

5.2.4 Step-6 to Step-9: Delphi-2

The TBA evaluation (Step-6) (see Figure 5.1) involved the draft TBA instrument that was
developed by the researcher based on the agreed ICT-literacy indicators identified earlier by the
PoE members in Delphi-1. The draft TBA instrument (see Appendix C) was sent to each of the
PoE members to evaluate the suitability of the questions that would eventually be given by the
researcher to Malaysian trainee teachers in the next phase of the study (Phase-3). The PoE members were also required to
give their opinion on whether the questions did represent what they expected from the ICT-
literacy indicators that they had previously agreed to. These opinions formed the basis of a
collective feedback report that was sent back to each PoE member, along with a copy of their
own feedback summarisation (Step-7).

Next came the TBA re-evaluation (Step-8): the PoE members reviewed their own answers in the light of
the comments made by all other PoE members, returning their new feedback forms to the
researcher. Following the validation of the draft TBA instrument, the PoE revised draft TBA
instrument was later tested with the trainee teachers in the next phase of this study.

For the tasks in the draft TBA instrument, a simple task background was added to the tool to
give the trainee teachers an understanding of the tasks they were about to perform.
Aside from that, a list of props for each task was also given (for example: digital camera,
required working files and scanner). These tasks are considered to be normal computer-based
tasks that a teacher in a ‘smart school’ environment is expected to perform. However, some of
the tasks were deliberately not clear-cut, with no step-by-step instructions telling the trainee
teachers what to do next. Instead, they were required to work out for themselves what would
be the most suitable computer application to use (plan/define indicator), or what other
information they needed before they could accomplish the given task (plan/define,
access and integrate indicators).

For each of the given tasks, the trainee teachers were left to freely choose whichever computer
application they were comfortable with in order to perform the task. For example, in a task
where the trainee teachers were asked to resize a picture of a potted plant, they were allowed to
use any picture-resizing application available on the computer (e.g. Paint, Adobe Photoshop,
Microsoft Picture Manager). As long as they produced a correctly resized picture of the potted
plant, they were marked as able to do the task.
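As an illustration only, the same resizing result could be produced programmatically; the participants themselves used GUI applications, and the file name and target dimensions in the sketch below are assumptions.

    from PIL import Image   # Pillow imaging library

    # Hypothetical file name and target size; any tool that produces a correctly
    # resized copy of the picture would satisfy the task.
    picture = Image.open("potted_plant.jpg")
    resized = picture.resize((400, 300))        # width x height in pixels
    resized.save("potted_plant_resized.jpg")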

After two rounds of the Delphi-2 interaction (TBA evaluation and feedback email iterations
with the researcher), the PoE members reached a consensus. They agreed that the preliminary
TBA tool was suitable to assess trainee teachers in Malaysia, following some minor
amendments. The draft TBA instrument was developed with five ICT-based tasks (see Appendix
C). The mean score of each task of the preliminary TBA tool is shown in Table 5.7.

Table 5.7. Mean score for each task of the draft TBA instrument

Task 1 (mean 2.71)
   To-do list: take a picture; shoot a short video; scan a document; prepare T&L material; Internet searching; evaluate suitable material from the Internet; bookmarking; email and the use of carbon copy.
   Indicators tested: Understanding and handling ICT tools; Access; Plan/define; Manage; Navigation & search; Evaluate; Create; Integrate; Production and analysis; Reflect.

Task 2 (mean 2.57)
   To-do list: calculate total marks and percentage using a spreadsheet application; rank the marks in ascending order; prepare a graph.
   Indicators tested: Assess; Production and analysis.

Task 3 (mean 2.00)
   To-do list: add a new record in a database; query a record in a database.
   Indicator tested: Production and analysis.

Task 4 (mean 2.71)
   To-do list: register as a member for a forum; post feedback to the correct thread.
   Indicator tested: Communicate/collaborate.

Task 5 (mean 3.00)
   To-do list: edit margin, header & footer and page number of an MS Word document; create a table of contents using the MS Word function.
   Indicator tested: Understanding and handling ICT tools.

* score 0 = not relevant; 1 = fairly relevant; 2 = relevant; 3 = extremely relevant

Task-1:
The first task requires the trainee teachers to take a picture and shoot a short video using a
digital camera provided. They were also expected to use a scanner to scan a given document.
With these three media materials, they were asked to prepare one suitable technological
classroom instructional strategy that incorporates all three media (computer, scanner and digital
camera). For the instructional strategy content, the trainee teachers needed to conduct an Internet
search on a given topic (photosynthesis), to bookmark the website and provide reasons for
choosing to use the information from that website on a separate document provided (known as

Investigating ICT-literacy assessment tool:


Developing and validating a new assessment instrument for trainee teachers in Malaysia Page 117
Chapter-5 : Data Analysis and Findings – Phase 2 Expert judgement on ICT-literacy indicators

form-A). Finally, the trainee teachers were to send the document (form-A) to the
researcher via electronic mail.

All seven PoE members agreed that Task-1 was suitable for this ICT-literacy assessment tool.
The task reflects what was expected from the indicators. Expert-3 stated that "The process in
task-1 is similar to some of the science process [sic] in Malaysian secondary schools". Expert-5
agreed that the task accurately reflects the indicators. However, Expert-6 argued that a section of
the task where trainee teachers were asked to resize the picture to a required size before inserting
it in their teaching aid is not appropriate as it requires higher level ICT skills. Expert-6 claimed
that trainee teachers in Malaysia are usually users and not creators. It was therefore thought that
the trainee teachers would mostly only be familiar with lower level ICT skills. However, after
the second Delphi round, Expert-6 agreed with the other PoE members’ comments and feedback,
and suggested that the researcher include this task in the draft TBA instrument. Even so, the
final decision on including this task in the final ICT-literacy TBA instrument had to take the
outcome of the pilot study (the next phase) into consideration. The researcher also
decided that Task-1 was too long and had too many different things going on at once. Therefore,
to avoid overwhelming the participants with the seemingly never-ending subtasks, Task-1 was
divided into three parts: organising the media materials; navigating and searching the Internet;
and developing the teaching aid.

Task-2:
The trainee teachers were given a spreadsheet file that contained a list of fictitious student
names and their exam marks. They were required to: calculate the total marks and percentage
for each student; rank them in ascending order; and then prepare a graph that shows the total
number of students that achieved poor, below average, average, above average and excellent
results. They were allowed to use other calculating methods (for example, a calculator) if they
were not comfortable with using the functions in the spreadsheet application. The PoE members
agreed that this task should be included in the TBA tool.

Expert-1 suggested that "This will help the trainee teachers in preparing suitable reports and
analysis which is expected of them, when needed". Expert-6 also agreed that Task-2 is suitable
but reminded the researcher to be cautious because in some schools teachers are not expected to
be able to prepare a graph. Thus it is possible that the trainee teachers may not have this skill.
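Purely as an illustration of the arithmetic Task-2 expects, the sketch below reproduces the total, percentage, ranking and banded graph in Python. The student names, marks, maximum mark and band cut-offs are all hypothetical, and in the actual task the participants used a spreadsheet application (or even a calculator).

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical marks standing in for the spreadsheet file given to participants.
    marks = pd.DataFrame({
        "name":   ["Student A", "Student B", "Student C", "Student D"],
        "paper1": [35, 50, 28, 44],
        "paper2": [40, 45, 30, 48],
    })
    full_marks = 200    # assumed combined maximum for the two papers

    marks["total"] = marks["paper1"] + marks["paper2"]
    marks["percentage"] = marks["total"] / full_marks * 100
    marks = marks.sort_values("total")            # rank in ascending order

    # Band the percentages and count students per band (cut-offs are assumptions).
    bands = pd.cut(marks["percentage"],
                   bins=[0, 40, 55, 70, 85, 100],
                   labels=["poor", "below average", "average",
                           "above average", "excellent"])
    bands.value_counts().sort_index().plot(kind="bar")   # the graph the task asks for
    plt.show()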


Task-3:
This was the least popular task with the PoE members (mean 2.00). Using a mock student
information system that the researcher had developed in MS Access, the trainee teachers
were required to add a new student record to the system. It was a simple database application with
user-friendly buttons provided, and a simplified version of the actual student information
system currently being used in Malaysian schools. The researcher had also decided to
add a record query skill to the task. As expected, the PoE members were of the opinion
that the data query skill was too advanced for what is expected of a trainee teacher. It would be
sufficient for the trainee teachers to know how to use a database application and how to
enter new data into it. Consequently, the record query question was eliminated from
the PoE revised draft TBA instrument.

Task-4:
This task was designed to assess the trainee teachers’ ability to register and post feedback to the
correct discussion thread in an online forum. The researcher had developed a discussion forum
using a free forum host available online (http://ictliteracy.forumotion.net/). All PoE members
agreed that this task was suitable for the final TBA tool. Thus it was decided that this is a
relevant skill for modern teaching methods, and offers the trainee teacher an opportunity to put
forth ideas and suggestions.

Task-5:
The final task was rated by each of the PoE members as extremely relevant. For this task the
trainee teachers were required to edit an MS Word document file by changing the margin,
creating a header and footer and inserting a page number. They were also required to create a
table of contents for the document. Expert-3 believed that this task might be very basic for an
MSS teacher in Malaysia. Other experts concurred that this skill was essential for every teacher
in schools.


5.3 Chapter-5 Summary

The majority of the original tasks in the draft TBA instrument were retained for the PoE revised
draft TBA instrument. The database record query skill that was declared too advanced by the
PoE was eliminated. It was anticipated that the next research phase (Phase-3: Instrument
validation and testing) should provide a clearer picture of the validity and reliability of the PoE
revised draft TBA instrument. It was expected that this ICT-literacy assessment tool might
accurately assess trainee teachers’ ICT knowledge and competence. Unlike other self-assessment
questionnaires, these tasks compelled the trainee teachers to prove that they were able to
complete the tasks, instead of just saying that they can.

Chapter 6
Data Analysis & Findings
Phase-3: Instrument validation and testing

6.1 Overview

The previous chapter discussed the findings from the qualitative research phase, which involved
the Phase-2 data from the two Delphi interactions. The outcome from both Delphi interactions
was a PoE revised draft TBA instrument that was to be tested on the trainee teachers. Chapter-6
reports on the data analysis and the iterative steps taken to confirm that the draft TBA
instrument achieved construct validity and test-item reliability. In order for the instrument to be
accepted by the wider research community, the analysis must effectively show that both
construct validity and test-item reliability were successfully carried out. This chapter begins
with a description of the process involved in designing the ICT-literacy TBA instrument. This
instrument was then tested on trainee teachers, and later the validity and reliability of the
instrument was confirmed.

Figure 6.1. Phase-3 of the research design


This chapter is organised into the following sections:


• Designing the TBA instrument;
• Instrument terms and terminologies;
• Pilot testing-1;
• Pilot testing-2;
• Final instrument trial;
• ICT-literacy data diagnostic; and
• Chapter-6 summary.

6.2 Designing the TBA Instrument

Designing this instrument as a task-based instrument was the major focus of this study. The ICT-
literacy TBA instrument therefore needed to be representative of the outcome that emerged from
the Delphi process earlier (see Chapter-5). A test instrument specification matrix was used to
ensure that each ICT-literacy indicator was appropriately represented in the proposed ICT-
literacy TBA instrument.

As previously mentioned in Chapter-5, the draft TBA instrument was designed and distributed to
the panel of experts (PoE) to be evaluated. Five tasks with 18 subtasks were proposed and 17
out of the 18 subtasks were validated by the PoE members as suitable for use by trainee teachers
in Malaysia (Table 6.1).


Table 6.1. Tasks and subtasks for the draft TBA instrument

Task 1
   Subtasks: organising the media materials (take a picture; edit picture; shoot short video; scan document); navigating and searching the Internet (Internet searching; evaluate suitable material from the Internet; bookmarking; email and the use of carbon copy); developing teaching and learning aids (prepare teaching and learning material).
   Indicators tested: Understanding and handling ICT tools; Access; Plan/define; Manage; Navigation & search; Evaluate; Create; Integrate; Production and analysis; Reflect.

Task 2
   Subtasks: calculate total marks and percentage using a spreadsheet application; rank the marks in ascending order; prepare a graph.
   Indicators tested: Assess; Production and analysis.

Task 3
   Subtasks: add a new record in a database; query a record in a database (* eliminated).
   Indicator tested: Production and analysis.

Task 4
   Subtasks: register as a member for a forum; post feedback to the correct thread.
   Indicator tested: Communicate/collaborate.

Task 5
   Subtasks: edit margin, header & footer and page number of an MS Word document; create a table of contents using the MS Word function.
   Indicator tested: Understanding and handling ICT tools.

* Subtask eliminated following the PoE evaluation.

Additionally, Expert-6 in the Delphi interactions also suggested that the arrangement of the
tasks should be changed and the draft TBA instrument should begin with an easier task that the
trainee teachers would be more familiar and comfortable with. Expert-6 suggested the
arrangement to be: Task-4, Task-5, Task-2, Task-1 and Task-3. Other PoE members did not
oppose this suggestion. Consequently, the researcher decided to pilot test the draft TBA
instrument using this suggested arrangement (Table 6.2).


Table 6.2. The (PoE suggested) draft TBA instrument’s new arrangement

Task 1
   Subtasks: register as a member for a forum; post feedback to the correct thread.
   Indicator tested: Communicate/collaborate.

Task 2
   Subtasks: edit margin, header & footer and page number of an MS Word document; create a table of contents using the MS Word function.
   Indicator tested: Understanding and handling ICT tools.

Task 3
   Subtasks: calculate total marks and percentage using a spreadsheet application; rank the marks in ascending order; prepare a graph.
   Indicators tested: Assess; Production and analysis.

Task 4
   Subtasks: organising the media materials (take a picture; edit picture; shoot short video; scan document); navigating and searching the Internet (Internet searching; evaluate suitable material from the Internet; bookmarking; email and the use of carbon copy); developing teaching and learning aids (prepare teaching and learning material).
   Indicators tested: Understanding and handling ICT tools; Access; Plan/define; Manage; Navigation & search; Evaluate; Create; Integrate; Production and analysis; Reflect.

Task 5
   Subtasks: add a new record in a database; query a record in a database (* eliminated).
   Indicator tested: Production and analysis.

* Subtask eliminated following the PoE evaluation.

6.3 Instrument Terms and Terminologies

Before this thesis progresses any further it is crucial to explain the terminology used by the
researcher during this validation and testing process, while developing the ICT-literacy TBA
instrument (Table 6.3). As a few of these terms are very similar, it is necessary to briefly explain
them here and to depict what each term represents.

Table 6.3. Instrument terms and terminologies

• PoE revised TBA instrument: the draft version of the TBA instrument that had been reviewed and agreed by the PoE members after the Delphi-2 interaction.
• Task[s]: the tasks agreed by the PoE members during the Delphi interactions. These tasks were presented to the trainee teachers in the form of fictitious ICT-based problems that teachers in MSS normally had to overcome.
• Subtasks: based on the ICT-based problems given in the tasks, certain subtasks served to outline the whole task.
• Test-item: has similarities with subtasks. This was the ‘evaluation point’, where a participant’s ICT knowledge and skills were scored. A task can consist of several test-items.
• Test-item evaluation form: a form used to score each participant’s ability for each test-item. The score was either dichotomous or in partial credit format.
• TBA instrument test descriptor: during the diagnosis stage, this descriptor helps by listing the test-item number, together with its learning domain and competency description, in order to make it easier to identify trainee teachers’ strengths or weaknesses.

6.4 Pilot Testing-1

Pilot studies can be referred to as feasibility studies, in which small-scale versions or trial runs
are conducted to prepare for the major study. They can also be employed as the pre-testing
stage for particular research instruments. A pilot study can help in giving the researcher an
advance warning about where the main research might fail, or whether the proposed methods or
instruments are inappropriate or too complicated (van Teijlingen & Hundley 2001).

To clarify the draft TBA instrument pilot testing-1 process, this section is divided into six
sections: pilot testing-1 preparation; pilot testing-1 preamble; pilot testing-1 observation; pilot
testing-1 findings; instrument review; and re-testing the draft TBA instrument.

6.4.1 Pilot testing-1: Preparation

To ensure this important process could be conducted successfully, arrangements had to be made
prior to the actual pilot testing procedure. For instance: a suitable location in which to conduct
the pilot testing procedure; availability of necessary computer applications and peripherals
involved in the ICT-literacy tasks; and finding suitable and available students who would be
willing to participate.

• Participants: As the target population for this study were trainee teachers in Malaysia, the
Sultan Idris Education University (UPSI) was chosen as the location of participants for this
data collection stage. UPSI is the only university in Malaysia where the sole purpose is to
train teachers.

The QUEST interactive test analysis system (Adams & Khoo 1996) was used to analyse the
pilot testing-1 data, in order to validate and to ensure the reliability of the draft TBA
instrument. Twenty undergraduate trainee teachers who were enrolled in UPSI for semester
two 2010 were invited to participate in this pilot testing. However, only 16 trainee teachers
were willing to participate. Findings from this pilot testing allowed the researcher to
identify test-items that needed to be added, deleted, re-worded, or re-arranged.


• Location and computer applications and peripherals: One computer laboratory (30-person
capacity) had been booked for three days. The computer laboratory consisted of 30 desktop
computers running the Windows 2000 operating system and connected to a local area network
(LAN). The software applications loaded onto each of the computers included: Microsoft
Office 2007 for personal productivity software; Adobe Photoshop CS3, Paint and Microsoft
Office Picture Manager for picture editing; and Internet Explorer, Mozilla Firefox and
Google Chrome for web browsing. As some of the tasks for the draft TBA instrument
required a scanner and a digital camera, both items were also borrowed from the
University’s Centre for Educational Technology and Multimedia.

As mentioned before (Chapter 4, section 4.6.3), the sessions were also recorded with a
screen capture program (Screen2exe) that generates visual files of each participant’s
performance. These recordings of screen capture files were matched with the corresponding
participant, based on the questionnaire identification number (see section 4.8: Ethical
issues). Screen2exe is a freeware tool that can capture computer users’ screen activity and
save it as an .exe file. It also captured mouse movement, clicks and even optional audio
comments from the microphone. For this thesis, no audio comments were recorded.

6.4.2 Pilot testing-1: Preamble

The procedure commenced with the researcher giving the participants a ten-minute background
talk about the study and what was expected of them. The participants were assured that their
details and answers were to be kept confidential, and their answers would not influence their
current university grade for the semester. The participants were allowed to complete the tasks in
no particular order. They were also not given any timeframe in which to finish the tasks.

Each participant was allocated a unique research identifier. This code enabled the researcher to
link the participants’ answers in their draft TBA instrument with their screen capture file which
was also saved using the same unique research identifier. To ensure the integrity of participants’
answers, the researcher avoided answering any questions from them concerning the tasks.

6.4.3 Pilot testing-1: Observation

The tasks in the draft TBA instrument involved normal computer applications that were familiar
to the trainee teachers (as was explained by the researcher during the ten-minute background
talk), such as: Microsoft Office applications; the Internet; simple picture editing software; and
also the basic use of a digital camera and scanner. The participants were eager to perform each
of the tasks. Most of them had a very positive attitude and believed that they could finish
each task correctly and quickly.

However, it took the participants approximately three hours to finish all the required tasks. By
this time, most of them seemed frustrated and showed signs that they wanted to conclude the
procedure quickly. This attitude may have affected their answers for the later tasks. Some of
the participants also had problems adjusting to the Microsoft Office 2007 software, stating that
they "were more used to the previous Microsoft Office 2003 environment".

6.4.4 Pilot testing-1: Outcome

All 16 participants completed all five tasks in the draft TBA instrument. The responses were
scored dichotomously using a separate evaluation form. Each of the tasks consisted of two or
more test-items that formed the whole task (see Table 6.4 below and Appendix D). These test-
items were evaluated as either able to complete (value 1) or not able to complete (value 0). The
responses were then entered into an electronic data file using Microsoft Excel; this file was
then prepared as a text file for analysis using the Quest interactive test analysis system (Adams & Khoo
1996).
Table 6.4. List of test-items used in the draft TBA instrument

1. Register new account
2. Reply to the correct thread
3. Post a reply
4. Set margin correctly
5. Set page number correctly
6. Set document header and footer correctly
7. Use MS Word features to create TOC
8. Create TOC manually
9. Correct use of basic spreadsheet formula
10. Correct use of advanced spreadsheet formula
11. Correct way of preparing a graph
12. Take picture
13. Shoot video
14. Use scanner
15. Manage file
16. Name acceptable picture editing application
17. Picture resized correctly
18. Know how to evaluate credible website
19. Listed acceptable criteria for credible website
20. Use natural language search
21. Use Boolean search
22. Choose credible websites
23. Internet navigation – bookmark
24. Name suitable application for presentation
25. Basic use of Presentation app (text, background, insert new slide, slide design, transition)
26. Insert photo
27. Insert video
28. Insert scanned document
29. Advanced use (hyperlink, insert media, action button)
30. Proper citation
31. Manage file
32. Add new database information (basic)
33. Email – attachment
34. Email – use Carbon Copy
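To illustrate how the dichotomous scores from the test-item evaluation forms could be assembled for item analysis, a minimal sketch is shown below. The participant identifiers and responses are hypothetical, and the fixed-width layout is only indicative of the kind of person-by-item data file that item-analysis programs such as Quest read; the exact layout expected by Quest is defined separately.

    # Hypothetical dichotomous scores (1 = able to complete, 0 = not able to complete)
    # for a handful of participants on the first six test-items.
    scores = {
        "P01": [1, 1, 0, 1, 1, 0],
        "P02": [1, 0, 0, 1, 1, 1],
        "P03": [0, 1, 1, 1, 0, 1],
    }

    # Write a simple fixed-width response file: identifier followed by a 0/1 string.
    with open("tba_pilot1.dat", "w") as data_file:
        for participant, responses in scores.items():
            data_file.write(participant + " " + "".join(str(r) for r in responses) + "\n")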

As previously mentioned, the Rasch IRT model forms the core of the Quest estimate. Other
statistical tests make assumptions about data. Analysis of variance (ANOVA), for example,
assumes a normal distribution, independence of cases/people’s performance and equal variances
of scores across groups. Based on how the data responded to these assumptions, decisions were
made on whether to accept or reject the null hypothesis. Variables were accepted or rejected
based on how well they fit the data. The Rasch IRT model, however, has conditions and
requirements that must be met first by the data in order to obtain accurate outcomes. The
objective was to obtain data that would fit the model (Andrich 2004; Sick 2010).

• Item characteristic curve (ICC): the Rasch IRT model allows the relationship between the
test-item’s difficulty and the person’s (or case’s) ability to be investigated by employing a
mathematical model that converts both difficulty and ability to the same units (logits). The
curve that relates the probability of a correct response to the person ability scale, for a given
test-item difficulty, is known as the ICC; as such, each test-item has its own ICC. The
ICC is used to describe two technical properties: 1) difficulty of the test-item; and 2)
discrimination. Difficulty describes where the test-item functions along the
person ability scale, while discrimination describes how well a test-item can differentiate
between persons having abilities below the test-item location and those having abilities
above the test-item location. This property essentially reflects the steepness of the ICC in its
middle section: the steeper the curve, the better the test-item can discriminate (Bond & Fox
2007; Sick 2010). (A minimal computational sketch of the Rasch probability and the infit
statistic is given after Figure 6.2.)

• Test-item fit statistics: the first step before the instrument testing procedure could begin was
to identify test-item fit. One of the key test-item fit statistics is the infit mean square (INFIT
MNSQ). The INFIT MNSQ measures the consistency of fit of the participants to the ICC for
each test-item, with weighted considerations given to those persons close to the 0.5
probability level. The acceptable range of the mean squares statistics for each test-item in
this study was from 0.77 to 1.30 (Adams & Khoo 1996). Values outside this acceptable
range i.e. above 1.30 indicate that these test-items do not discriminate well, and when below
0.77 the test-items provided redundant information. Hence consideration must be given to
excluding those test-items that are outside this range. In an instrument testing procedure, test-
items that do not fit the Rasch model and lie outside the acceptable range must be deleted
from the analysis (Adams & Khoo 1996) (Figure 6.2).
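As a rough indication of what the INFIT MNSQ captures (a simplified sketch, not the Quest implementation), the statistic for a dichotomous test-item can be computed as the information-weighted mean of squared residuals between the observed responses and the model-expected probabilities:

    def infit_mnsq(observed, expected):
        # observed: list of 0/1 responses to one test-item
        # expected: list of Rasch-model probabilities of success for the same persons
        numerator = sum((x - p) ** 2 for x, p in zip(observed, expected))
        denominator = sum(p * (1.0 - p) for p in expected)
        return numerator / denominator

    def within_acceptable_range(mnsq, lower=0.77, upper=1.30):
        # The acceptable range adopted in this study (Adams & Khoo 1996).
        return lower <= mnsq <= upper

A value near 1.00 indicates that the responses are about as noisy as the model expects; values above 1.30 or below 0.77 flag the misfitting and redundant test-items discussed above.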


Figure 6.2. Test-item fit map (the marked test-items are misfits)

The Quest program requires a control file to be prepared before it can run an estimate process.
The Quest program is initiated by the control file and produces graphical output using ASCII
characters (see Appendix I (1)). The output files were:
• Test-item fit map: shows the test-items and whether they meet the INFIT MNSQ criteria
that are represented by the dotted vertical line. As previously mentioned, the acceptable
range of the mean squares statistics can be from 0.77 to 1.30 (Adams & Khoo 1996).
Values above 1.30 indicate that a test-item does not discriminate well, and values below
0.77 show that a test-item provides redundant information. Redundant information means
that other test-items are testing the same construct, while failing to discriminate means
that a test-item is not able to differentiate between participants with low ability and those
with higher ability;
• Variable map: participants (cases) and test-items placed according to their standing on a
single scale, in order to estimate the difficulty levels (threshold values) of test-items, and
to develop a common scale for each data set;
• Summary of test-item estimates and fit statistics: provides a summary of the test-items’
reliability, based on how well each test-item is separated in terms of achievable difficulty,
and also a summary of how well the test-items fit the Rasch IRT model;
• Log file: a log of the Quest program run for the test-item analysis;
• Output file: a detailed analysis of each test-item based on observed responses; and
• Kidmap: a graphical output for each participant that shows their correct and incorrect
response patterns.


After running the Quest program for the first time and examining the resulting estimate output
files (test-item fit map, output file and summary of test-item estimates and fit statistics), it was
clear that the draft TBA instrument needed refinement. Figure 6.2 shows the test-item fit map.
Two test-items were found to have an INFIT MNSQ value of below 0.77 (test-item 4 and test-
item 15). This means that both test-items provided redundant information. Before the data could
be pilot tested further, these two test-items needed to be deleted from the analysis.

However, looking at the ICT-literacy ability that was to be tested for both test-items, the
researcher decided that test-item 4 (ability to set the margins correctly for a Word document) tested an
important ability and should not be deleted from the instrument. In general,
misfitting test-items should be excluded from an instrument before the next pilot testing
procedure. However, there are some instances where it is considered necessary for a certain test-
item to remain, and overfitting test-items can remain in the scales (Yuan 2005). As such, the
only test-item that was deleted was test-item 15 (managing file) (Figure 6.3). This was because
the ability to manage files was tested again in test-item 31 (see Appendix D).

Figure 6.3. Test-item fit map (after test-item 15 was deleted)

The next step would be to look at the Quest variable map. The variable map provides an
excellent visual description of participants’ perceptions with respect to question response
options. The participants (or cases) are shown placed according to their standing on a single
scale. This procedure was employed in this study in order to estimate the difficulty levels
(threshold values) of the test-items, and to develop a common scale for each data set. The
smaller the proportion of correct responses, the higher the difficulty of a test-item, hence the


higher the test item’s scale location. Once test-item locations are scaled, the person locations are
measured on the same scale. As a result, person and test-item locations are estimated on a single
scale as shown in Figure 6.4.

The Quest variable map showed that the participants’ scores were distributed relatively
symmetrically around the scale average value. The average value of the test-item threshold was
set at zero, with the more difficult test-items positioned above the average test-item threshold
and the easier test-items below the zero threshold value. As the test-items increase in difficulty,
they were shown on the variable map relative to their positive logit value, whilst negative logit
values were shown in the map representing the easier test-items (Figure 6.4). Eleven test-items
were located above 0 (average) and ten test-items were located below 0. Test-item 29 (ability to
use advanced features in a presentation application), and test-item 34 (understanding the use of
carbon copy (usually appearing as a cc in an email) were regarded as being particularly
difficult. Four participants had scored below 0, indicating a low ability, with one participant
having a particularly low score i.e. below –1.0 logits. The participants’ scores were
predominantly above 0, demonstrating that they have a relatively high ICT-literacy ability.


Figure 6.4. Quest variable map. The figures on the extreme left of the map represent the logit scale on which both test-items and cases (persons) are calibrated; the XXs on the left-hand side represent the distribution of case (person) estimates over the logit scale; and the figures on the right-hand side represent test-items plotted according to their difficulty, with harder test-items above the average (zero) and easier test-items below it.

Next, the reliability of the draft TBA instrument had to be verified. Each test-item in the TBA
instrument must measure what it is supposed to measure. Using the Rasch IRT model, the test-
items’ reliability can be identified by looking at how well each test-item is separated in terms of
achievable difficulty. For this reason, the adjusted item standard deviation (SD (adjusted)) was
used to describe the extent to which test-items were separated by difficulty (Wright & Masters
1982). The reliability of test-item separation can be calculated using the following formula:


RI = SAI² / SDI²          source: (Wright & Masters 1982)

where SAI = the adjusted standard deviation of the test-item estimates, and
      SDI = the standard deviation of the test-item estimates.

Using the Quest program, this was equal to the reliability of estimate value (Figure 6.5),
otherwise expressed as:

RI = 1.43² / 1.60² = 0.80

A test-item separation reliability approaching 1.00 indicates that the test-items are well separated
relative to the errors of their locations on the scale. This value is equivalent to the Cronbach Alpha
under traditional analysis (Andrich 1982). For the Cronbach Alpha value, Nunnally argued that in the
early stages of research reliabilities of 0.70 would suffice, and that for basic research ‘increasing
reliabilities beyond 0.80 is often wasteful of time and money’ (Nunnally & Bernstein 1994, p. 265). As
such, for this study, the target level of minimum reliability was set in the 0.70 to 0.80 range.
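The separation reliability reported in Figure 6.5 can be reproduced directly from the two standard deviations (a minimal check; the variable names below are illustrative, not Quest output):

    adjusted_sd = 1.43   # SAI: standard deviation of the item estimates adjusted for measurement error
    observed_sd = 1.60   # SDI: observed standard deviation of the item estimates

    separation_reliability = adjusted_sd ** 2 / observed_sd ** 2
    print(round(separation_reliability, 2))   # prints 0.8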

Figure 6.5. Summary of test-item estimates and fit statistics. The summary reports the means and standard deviations of the weighted (infit) and unweighted (outfit) fit statistics in their mean square and transformed (t) forms; when the data are compatible with the model, the expected value of the mean squares is approximately 1.00 and the expected value of the t-values is approximately 0.

However, the Rasch IRT model cannot provide estimates for perfect (all correct) or zero (all
incorrect) scores. The Rasch IRT estimates are based on probability of success to probability of
failure ratios (Bond & Fox 2007). For example, an 80% probability of success corresponds to a 20%


probability of failure, and from this a ratio could be constructed. Yet a 100% success has a 0%
failure, and a 0% success has a 100% failure. As such, the 100/0 or 0/100 fraction would
produce an infinite estimate (Bond & Fox 2007).
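The same point can be seen from the log-odds (logit) transformation that underlies the Rasch scale (a minimal sketch, independent of the Quest analysis):

    import math

    def logit(p):
        # Convert a probability of success into a logit (log-odds) value.
        return math.log(p / (1.0 - p))

    print(logit(0.80))   # about +1.39 logits
    print(logit(0.50))   # 0.0 logits
    # logit(1.0) and logit(0.0) are undefined (division by zero or log of zero),
    # which is why perfect and zero raw scores cannot be estimated directly.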

For this study, twelve test-items were deleted from the analysis, where three of them generated
zero scores and nine test-items generated perfect scores. The deleted test-items were: test-items
1, 3, 7, 9, 10, 12, 20, 21, 24, 25, 26 and 33. These tested the participants’ ability on:
• test-item-1: register as a member for a new discussion forum account;
• test-item-3: post a reply to the correct thread in a discussion forum;
• test-item-7: using word processing features to create a table of contents;
• test-item-9: correct use of a basic spreadsheet formula;
• test-item-10: correct use of advanced spreadsheet formula;
• test-item-12: taking a picture with a digital camera;
• test-item-20: using a natural language search;
• test-item-21: using a Boolean search;
• test-item-24: naming a suitable computer application for a digital presentation;
• test-item-25: using basic features of a presentation software application (text,
background, insert new slide, slide design, transition);
• test-item-26: inserting a photo into a digital presentation; and
• test-item-33: attaching a file to an email.

However, if these test-items were to be eliminated from the draft TBA instrument, it would
mean that only half of the test-items in the instrument would be left. Most of the abilities tested in
these twelve test-items were important for identifying the participants’ ICT skills and knowledge.
Thus the next step for the researcher was to carefully re-examine the test-items in
the draft TBA instrument.

6.4.5 Pilot testing-1: Instrument review

Based on the results of Quest program run-1, the first approach towards establishing the reliability
of the test-items was to change the evaluation technique previously applied. Using a dichotomous
value to evaluate the participants’ ICT ability proved problematic for some of the test-items.
Test-item-3, for example, generated a zero score
during pilot testing-1. For this test-item, the participants were required to post a reply to a pre-
identified thread in a discussion forum. It was initially anticipated that the participants would
either be able to, or be unable to post a reply to the discussion forum. Yet during pilot testing-1,


12 participants (out of 16) were able to post a reply, but they posted their reply to the wrong
thread.

In another example, test-items-9 and 10 were designed to test the ability to use basic or
advanced spreadsheet formula to calculate students’ exam marks. There was a possibility that the
participants might not be able to solve the task (which required developing an advanced
spreadsheet formula). This was borne out during pilot test-1, where test-item-10 (using an advanced
spreadsheet formula) received a zero score, while test-item-9 (using a basic spreadsheet formula)
received a perfect score. The limited spreadsheet skills of the Malaysian trainee teachers had been
predicted by one of the PoE members during the Delphi rounds, who suspected that Malaysian
trainee teachers might not have acquired the needed skill as it is not a compulsory requirement
for school teachers (see Chapter 5 section 5.2.4). Many schools in Malaysia still use a manual
technique (hand-calculator) to count their students’ marks before keying them into a computer.
Hence this evaluation technique must be changed to avoid the previously mentioned problem of
having perfect scores and zero scores.

Similarly, the evaluation technique for test-item-20 (Internet searching ability using natural
language (human language such as English or Malay) and test-item-21 (Internet searching
ability using Boolean operators: AND, OR, NOT)) must be changed. All the participants in the
pilot testing-1 used natural language searching. Yet the researcher believed it was possible that at
least one of the trainee teachers in the final ICT-literacy TBA instrument trial would use a Boolean
search, and it was important for the researcher to know the level of the participants’ Internet
searching skills. If a dichotomous (right/wrong) scoring format were retained for these test-items, it
is likely that both would need to be removed from the analysis due to receiving perfect and zero
scores.

Test-items-1, 7, 12, 24, 25, 26 and 33 all achieved perfect scores. As with the previously
discussed test-items, the researcher needed to employ a different method for evaluating these
ICT skills. These test-items were considered important for assessing ICT-literacy and had to be
included in the analysis. Hence a different type of evaluation or scoring
scheme was required.

In order to arrive at a more precise estimate of a person’s ability, rather than just using a simple
pass or fail score, a partial credit format can be implemented (see section 2.5). Partial credit
format identifies several ordered levels of a person’s ability (Masters 1982). In a partial credit
format, test-items were scored as 0, 1, 2, 3, etc. However, these scores do not depict imposed
weighting, but rather the level of expected performance. As such, a test-item may be divided into


a number of steps where 0 represents the lowest level of performance. The relative difficulties
of these steps can vary from test-item to test-item. For example:

Mathematics item: √(7.5 / 0.3 − 16) = ?

                                  Score
Failed ........................... 0
7.5 / 0.3 = 25 .................... 1
25 − 16 = 9 ....................... 2
√9 = 3 ............................ 3

Figure 6.6. Example of a partial credit format ‘steps’ and scores
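Under Masters’ (1982) partial credit model, each of these ordered steps has its own step difficulty, and the probability of a person scoring in a given category follows from the cumulative sum of (ability − step difficulty) terms. The sketch below is an illustration under assumed step difficulties, not output from this study:

    import math

    def pcm_category_probabilities(ability, step_difficulties):
        # Category probabilities under the partial credit model (Masters 1982).
        # step_difficulties holds the step (threshold) difficulties for scores 1..m.
        numerators = [1.0]        # score 0: the empty sum gives exp(0) = 1
        cumulative = 0.0
        for delta in step_difficulties:
            cumulative += ability - delta
            numerators.append(math.exp(cumulative))
        total = sum(numerators)
        return [n / total for n in numerators]

    # A person of ability 0.5 logits on a three-step (0-3) test-item:
    print(pcm_category_probabilities(0.5, [-1.0, 0.0, 1.5]))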

For these reasons, changes were made to the TBA evaluation form. Test-items that received zero or
perfect scores were amended using the partial credit format (see Appendix E). Using this new
evaluation form, the data analysis of pilot testing-1 was conducted again, using the same data.

6.4.6 Pilot testing-1 (repeated): New TBA evaluation form

After the above-mentioned changes to the scoring had been made to the TBA evaluation form, the
draft TBA instrument consisted of 21 test-items, with eight test-items using the partial credit
format (a simple illustrative encoding of this scheme is sketched after the list). These eight
test-items were:
• Test-item-1: the ability to use online discussion forum
The steps for the partial credit format consisted of four phases: 1) unable to complete [0
score]; 2) register new account [1 score]; 3) post a reply [2 score]; and 4) reply to the
correct thread [3 score];
• Test-item-5: creating a table of contents in a word processing document
The steps for the partial credit format were divided into three steps: 1) unable to complete
[0 score]; 2) create a table of contents manually [1 score]; and 3) use the special feature in the
word processing application to create a table of contents [2 score];
• Test-item-6: using spreadsheet formula
There were three steps: 1) unable to complete [0 score]; 2) use basic spreadsheet
formula – basic arithmetic operations [1 score]; and 3) use advanced spreadsheet
formula [2 score];
• Test-item-8: using ICT tools (still picture, video and scanner)
The partial credit format was divided into four steps: 1) unable to complete [0 score]; 2)
able to use only one ICT tool [1 score]; 3) able to use two ICT tools [2 score]; and 4)
able to use all tools [3 score];


• Test-item-13: Internet searching


The steps for partial credit format were: 1) unable to complete [0 score]; 2) use natural
language search [1 score]; and 3) use Boolean search [2 score];
• Test-item-16: using best presentation application to create instructional resource
The partial credit format was divided into three steps: 1) unable to complete [0 score];
2) use basic features only – text, background, slide design, slide transitions [1 score];
and 3) use advanced features – insert media, hyperlink/action button [2 score];
• Test-item-17: inserting media into their teaching and learning resource
The steps were: 1) unable to complete [0 score]; 2) able to insert only one media [1
score]; 3) able to insert two media [2 score]; and 4) able to insert three media [3 score]; and
• Test-item-21: using email
The partial credit format steps were: 1) unable to complete [0 score]; 2) able to send
email with one of these included – attachment or carbon copy [1 score]; and 3) able to
send email AND include an attachment AND carbon copied to another recipient [2
score].
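As signalled above, the revised scoring scheme can be represented in a simple machine-readable form. The following sketch is purely illustrative: the item keys and level labels are hypothetical shorthand for the rubrics listed above, not the actual TBA evaluation form.

    # Hypothetical encoding of three of the partial credit rubrics listed above.
    PARTIAL_CREDIT_RUBRICS = {
        "item01_online_forum": ["unable", "registered account", "posted reply", "replied to correct thread"],
        "item06_spreadsheet_formula": ["unable", "basic formula", "advanced formula"],
        "item21_email": ["unable", "attachment or carbon copy", "attachment and carbon copy"],
    }

    def partial_credit_score(item_key, observed_level):
        # The ordered position of the observed performance level is the score (0, 1, 2, ...).
        return PARTIAL_CREDIT_RUBRICS[item_key].index(observed_level)

    print(partial_credit_score("item06_spreadsheet_formula", "advanced formula"))   # prints 2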

The new list of test-items, incorporating the partial credit format, is shown in Table 6.5 below.
Table 6.5. List of test-items that include partial credit format

1. Using online forum
2. Set margin correctly
3. Set page number correctly
4. Set document header and footer correctly
5. Create TOC
6. Using spreadsheet formula
7. Correct way of preparing a graph
8. Using ICT tools (still picture, video & scanner)
9. Name acceptable picture editing application
10. Picture resized correctly
11. Know how to evaluate credible website
12. Listed acceptable criteria for credible website
13. Internet searching
14. Choose credible websites
15. Internet navigation – bookmark
16. Using presentation app. to create T&L resources
17. Inserting media
18. Proper citation
19. Manage file
20. Add new database information (Basic)
21. Using email

The Quest test-item fit map produced from the re-tested pilot testing-1 data (Figure 6.7) showed
one test-item with an INFIT MNSQ value of below 0.77 (test-item-2), which means that the test-
item provided redundant information, and one test-item had the INFIT MNSQ value above 1.30
(test-item-8), meaning the test-item did not discriminate well (Adams & Khoo 1996). These two
test-items were then re-examined prior to further analysis.


Figure 6.7. Test-item fit map (re-tested)

Despite the fact that both tasks were important for this study, the researcher decided to
delete test-item-8 (using ICT tools). This was done for two reasons.
1) During the pilot testing-1 process, the researcher observed that some of the participants
were anxious to be the first person to perform this task. Though it had been clearly
declared and understood by the participants that they were not allowed to discuss any
part of the tasks among themselves, it was observed that many were not sure what they
were doing, thus they observed what others did instead.

To mitigate this, the researcher placed the ICT tools in a corner that was slightly
concealed from the other participants. Despite being partially hidden, the participants were
still able to sneak glances at another participant doing the task. As a result, this task
unfortunately no longer portrayed the participants’ ability and knowledge in using ICT tools.

2) Test-item-17 (inserting media into the teaching and learning resource) involved the use
of the data collected from test-item-8. After discussion, the researcher decided that test-item-17
was sufficient for evaluating the ability and knowledge in using ICT tools (test-item-8).


Figure 6.8. Test-item fit map (after test-item-8 was deleted)

After test-item-8 was deleted, another two test-items were misfitting (Figure 6.8). The test-items
were test-item-2 (set margin correctly) and test-item-11 (know how to evaluate credible
website). Test-item-2 was providing redundant information, while test-item-11 did not
discriminate well. Redundant information means that other test-items are testing the same
construct, while failing to discriminate means that a test-item is not able to differentiate
between participants with low ability and those with higher ability.

The next step for the researcher was to review the Quest test-item analysis results (Figure 6.9)
for both test-items. The Quest test-item analysis assisted the researcher to understand the
necessary changes for both test-items. Figure 6.9 verified the previous findings for test-item-2
from the test-item fit map: an INFIT MNSQ value of 0.69 and a discrimination value of more than
0.5 (0.68). The p-value for able to do (value 1) was less than 0.05, yet the p-value for unable to do
(value 0) was more than 0.05. As such, the wording structure for this test-item needed to be revised.
Mean ability for able to do (value 1) was also higher than unable to do (value 0). It showed that
participants with more ability managed to correctly complete the test-item and vice versa.

Figure 6.9. Test-item analysis results for observed responses (test-item-2)
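The discrimination and mean-ability figures reported in the Quest item analysis can be cross-checked informally as the correlation between item score and person ability, and the mean ability within each score category. The sketch below is a rough check only, with made-up data, and is not the Quest computation:

    from statistics import mean, pstdev

    def item_analysis(item_scores, person_abilities):
        # Mean ability of the persons in each score category of one test-item.
        categories = sorted(set(item_scores))
        mean_ability = {c: mean(a for s, a in zip(item_scores, person_abilities) if s == c)
                        for c in categories}
        # Pearson correlation between item score and ability as a simple discrimination index.
        mx, my = mean(item_scores), mean(person_abilities)
        cov = mean((x - mx) * (y - my) for x, y in zip(item_scores, person_abilities))
        discrimination = cov / (pstdev(item_scores) * pstdev(person_abilities))
        return discrimination, mean_ability

    disc, means = item_analysis([0, 1, 1, 0, 1], [-0.4, 0.8, 1.2, 0.1, 0.6])
    print(round(disc, 2), means)

A positive discrimination, with the higher score category showing the higher mean ability, mirrors the pattern reported for test-item-2; the reverse pattern is what flagged test-item-11 below.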


Figure 6.10 verified the previous findings concerning test-item-11 from the test-item fit map: an
INFIT MNSQ value of 1.38. This test-item was unable to discriminate between participants with
higher ability and lower ability (a discrimination value of less than 0.5, at 0.29). The p-value
for both able to do (value 1) and unable to do was very high. Mean ability for able to do (value
1) was lower than unable to do (value 0). It showed that participants with higher ability did not
manage to correctly complete the test-item.

Figure 6.10. Test-item analysis results for observed responses (test-item-11)

After discussion, it was decided that test-item-11 was too broad and problematic to evaluate. If
the participants provided an answer, it would be difficult to evaluate them, as the answer could
not simply be classified as able or unable to complete the task. In order to avoid this complicated
situation, the researcher needed to find a way to confine the answers and devise a better way to
evaluate them. So instead of giving the participants the freedom to choose a suitable and reliable
website and requiring them to list the criteria that made them choose that particular website, the
researcher decided to list four different types of websites with different usability functions: a
blog; a university website; a Wikipedia page; and an informational website. The participants were
required to evaluate, based on their knowledge and
opinion, whether these websites were reliable or questionable; then provide comments on their
decision. The researcher evaluated this situation based on the scoring scheme for that test-item.
The data for this test-item was evaluated using partial credit format, based on how many of the
four websites were correctly evaluated by the participants.

The partial credit format for test-item-11 was arranged as such:


• Test-item-11: know-how to evaluate a credible website
There were five steps: 1) unable to complete [0 score]; 2) one status correct [1 score]; 3)
two status correct [2 score]; 4) three status correct [3 score]; and 5) four status correct [4
score].


Thus with these changes the researcher decided to allow both test-items (test-item-2 (set margin
correctly) and test-item-11 (know how to evaluate credible website)) to remain in the draft TBA
instrument.

Next, the reliability of the draft TBA instrument was verified by the researcher to ensure that
each test-item in the instrument measured what it was supposed to measure. The reliability of
estimate was 0.80, which was quite high (Figure 6.11). However, two test-items were deleted
from this analysis because they resulted in perfect scores. The test-items were test-item-6 (using
spreadsheet formula) and test-item-13 (Internet searching).

Figure 6.11. Summary of test-item estimates and fit statistics. As in Figure 6.5, the expected value of the mean squares is approximately 1.00 and the expected value of the t-values is approximately 0 when the data are compatible with the model.

Then the person and test-item locations were estimated on a single scale, exhibiting a variable
map (Figure 6.12). The variable map showed that the participants’ scores were distributed
relatively symmetrically around the scale average value.


Figure 6.12. Quest variable map (re-testing pilot test-1). As in Figure 6.4, the logit scale appears on the extreme left, the XXs show the distribution of case estimates over the scale, and the test-items on the right-hand side are plotted according to their difficulty, from easier (below the average of zero) to harder (above it).

The participants’ scores (denoted by the X’s in the Quest variable map above) were
predominantly around 0 on the logit scale, demonstrating that they have a relatively average
ability in ICT-literacy. Ten test-items were located above 0, with test-item-16.2 (use advanced
features of a presentation application – insert media, hyperlink/action button), and test-item-
21.2 (able to send email AND include an attachment AND carbon copied to another recipient),
being the most difficult test-item.

Four participants scored below 0, indicating low ability, with one having a particularly low
score of below –2.0 logits. The easiest test-items were test-item-5 (creating table of contents)
and test-item-11 (know how to evaluate credible websites).


After the re-testing of the pilot test-1 data, there were four test-items that needed to be revised.
Two of them were test-item-2 and test-item-11 because they provided redundant information
and did not discriminate well, respectively. Both test-items were analysed and re-structured.
Another two test-items (6 and 13) had been automatically deleted during the Quest estimate
runs since both test-items provided perfect scores. Thus these four test-items were to be
amended prior to the next process – the pilot testing-2.

6.5 Pilot Testing-2

The pilot testing-2 commenced once the pilot-1 TBA instrument had been re-structured and
amended. The four test-items (2, 6, 11 and 13) that were identified during the pilot testing-1
stage had been reviewed. They were:
• Test-item-2: set margin correctly: during the pilot testing-1 procedure, it was found that
there was no misunderstanding with regard to this test-item wording structure. Yet after
an informal discussion with the participants, the researcher discovered that the
measurement unit setting for margins in the word processing application used in the test
laboratory was inches, whilst the instruction in the TBA instrument required them to set the
margin in centimetres. This might have contributed to the problem of redundant information.
The researcher therefore changed the instruction to use inches.

• Test-item-6: using spreadsheet formula: there were two parts to this test-item. The first
part required participants to utilise the spreadsheet formula by totalling the student
marks, and the second part tested the participants’ ability to rank the student marks based
on their percentages. Both abilities were scored together as one. However, the second
part of this test-item proved problematic: the instruction was either misunderstood by the
participants, or they simply left the marks unranked from highest to lowest (either because
they forgot about them or because they were unable to execute the task). As a result, this
affected the participants’ responses to the first question. Consequently, the researcher
dropped the second question (ranking the marks).
The reason for this was that the earlier question had already tested the participants’
ability to use the spreadsheet formula through implementing total marks and percentages
calculations.

While the participants were executing the task of calculating the total marks and
percentages, the researcher observed that some of the participants did try to answer this
test-item using their calculator, since the task instruction did say, "Use whatever method


that you are comfortable with". The pilot-1 TBA instrument was actually designed to
observe the participants’ ICT skills and ability, or the lack thereof. Thus another task step was
added to the partial credit-scoring format of this test-item, where the use of other
calculating methods was included.

• Test-item-11: know how to evaluate credible websites: the scoring for this test-item was
converted to using partial credit format. Instead of having an open-ended question, the
participants were given a list of four pre-identified websites. The participants were
required to decide on the website being either reliable or questionable, and provide their
reasons. The scoring was based on how many of the four websites the participants correctly
classified. The steps for the partial credit format were divided into five
steps: 1) unable to complete [0 score]; 2) one status correct [1 score]; 3) two status correct
[2 score]; 4) three status correct [3 score]; and 5) four status correct [4 score]. Also, based
on the changes made in test-item-11, the researcher decided to remove test-item-12,
which required the participants to list acceptable criteria for a credible website as this was
redundant.

• Test-item-13: Internet searching: the researcher decided to maintain this test-item as it


was, and wait to see if there were to be any changes identified from the pilot data after
pilot testing-2 was conducted.

Aside from the above test-items, the wording structure of the steps in three of the tasks was also
modified. Based on the researcher’s observation, these modifications were necessary in order to
avoid misinterpretation:

• Task 4(b): the participants were required to resize a picture to 400 x 300 pixels.
However, some of the participants used Paint as the tool to resize the picture, and Paint
used percentages for resizing. To avoid misunderstanding or complications and the risk
of the participants leaving this task incomplete, the researcher restructured the
instruction and included the clause, ‘ … or 40% x 30% off the original size’.

• Task-1: The participants were required to browse a discussion forum website that was
developed by the researcher for this study. They were expected to register onto the
online forum and then post a reply to the correct discussion thread. Yet after registering
themselves, many of the participants had difficulties logging onto the online forum.
After a few unsuccessful tries, they proceeded to re-register themselves. Problems
occurred when they tried to re-register. The unique research identifier (user name) used

had already been saved in the discussion forum’s database during the first try.
Consequently, they could not re-register using the same username. Upon further
investigation, this problem apparently occurred due to the carelessness of the
participants. Most of them tried to proceed with all the tasks as fast as they could
without stopping to read any account activation information or double-checking their
work. The forum website actually notified the participants that they needed to activate
their account first, through a link sent to their email accounts, before they could log onto
the discussion forum. Yet participants overlooked this step and, as a result, their accounts
were not activated. As a preventive measure it was decided to add a cautionary step to the
task instructions. This additional
instruction reminded participants to carefully read all information during registration
and also reminded them that the discussion forum account had to be activated first
before they could log in.

• Test-item-15: (Internet navigation – bookmarking): The participants were instructed to


use the Internet to find information on ‘products of photosynthesis’. The participants
were then asked to bookmark the websites that they wanted to use for the next task.
However, some participants were confused with the term bookmark used in the task
instructions. Upon further investigation, this confusion was apparently due to the fact
that most of the computers in the computer laboratory at the university used Internet
Explorer (IE) as the Internet browser. Thus the participants were more used to the IE term
favourites. Although this seems a trivial issue, it does influence how the study documents the
participants’ ability to use Internet browsers. Hence the instruction for this task was rewritten
and the term favourites was added to the instructions.

After all the re-structuring and amendments were completed, the pilot-1 TBA instrument was
ready for pilot testing-2.

6.5.1 Pilot testing-2: Round-1


This pilot study was conducted in two rounds. The first round was to validate the amended pilot-
1 TBA instrument, while the second round was planned to accommodate any final changes to the
instrument.
• Participants: Fifty undergraduate trainee teachers from the Faculty of Business and
Economics, UPSI, were invited to take part in the pilot study. However, only 20 trainee
teachers were willing to participate.


• Location: The researcher decided to change the location of the computer laboratory.
This was due to informal feedback from the pilot testing-1 participants. They expressed
concerns that they could not work through the test-items as quickly and as easily as they
usually could, because they were more used to the Microsoft Office 2003 environment.
Hence for the pilot testing-2, arrangements were made with the computer
laboratory technician to install half of the computers in the laboratory with Microsoft
Office 2007 and the other half with Microsoft Office 2003.

This new location had capacity for 30 participants. The computer laboratory holds 30
desktop computers connected to a local area network, using the Windows 2000
operating system. Aside from the Microsoft Office software, each of the computers also
included: Adobe Photoshop CS3, Paint and Microsoft Office Picture Manager for
picture editing; and Internet Explorer; Mozilla Firefox and Google Chrome for web
browsing. This computer laboratory already had a scanner that the participants could
use. Two digital cameras were borrowed from the University’s Centre for Educational
Technology and Multimedia.

Like the first pilot study, these sessions were also recorded with the screen capture program
(Screen2exe) that generated visual files of the entire session. As before, the recorded screen
capture files were matched with their corresponding participant based on their unique research
identifier.

Figure 6.13. Test-item fit map (Pilot test-2)


Four test-items (4, 10, 15 and 16) were found to have redundant information (Figure 6.13),
with INFIT MNSQ less than 0.77. The ICT abilities evaluated for these test-items were:
• Test-item-4: set page number correctly;
• Test-item-10: correctly resizing a picture;
• Test-item-15: creating a teaching and learning resource using a presentation
application; and
• Test-item-16: inserting media into teaching and learning resources.

Test-item-11 had an INFIT MNSQ value above 1.30, indicating that this test-item did not
discriminate well.
• Test-item-11: knowing how to evaluate credible websites.

Further investigation of these five test-items was carried out. Test-item analysis results for
observed responses for all five test-items were examined. Figure 6.14 and Figure 6.15 verified
the test-item fit map findings of test-item-4 and test-item-10, with INFIT MNSQ value of less
than 0.77 and discriminating value of more than 0.5. This revealed that both test-items showed
redundant information, yet both were able to discriminate between participants having higher
ability and lower ability. The p-values for both scoring categories (able to do and unable to do)
were 0, showing that both scores were equally significant. The mean ability for able to do
(value-1) was also higher than for unable to do (value-0), showing that participants with higher
ability did manage to
correctly complete the test-item and vice versa. As such, there was no misunderstanding with
regard to this test item’s wording structure.

Figure 6.14. Test-item analysis results for observed responses (test-item-4)


Figure 6.15. Test-item analysis results for observed responses (test-item-10)

Test-item-11 was scored using partial credit format, with five categories of partial
understanding (see Chapter-2 section 2.5). Figure 6.16 indicates that this test-item was not able
to discriminate between participants with higher ability and lower ability (discrimination value of
0.12).

Figure 6.16. Test-item analysis results for observed responses (test-item-11)

Figure 6.17 showed that test-item-15 consisted of redundant information (INFIT MNSQ value
less than 0.77). This test-item also had problems in discriminating between participants with
higher ability and lower ability (discrimination value of 0). However, under the partial credit
format scoring used with this group of participants, those with higher ability managed to complete
the multi-step problems (see Chapter-2 section 2.5) and vice versa (the mean ability for score-2 was
higher than mean ability for score-1). Nonetheless, the structure of this test-item had to be
revised.


Figure 6.17. Test-item analysis results for observed responses (test-item-15)

Test-item-16 also involved redundant information (INFIT MNSQ 0.67) with a high
discrimination value (0.82) (Figure 6.18). There were problems with the partial credit scoring
categories score-1 and score-2: the group scoring score-1 had a higher mean ability than the group
scoring score-2. Either the question wording in test-item-16 confused the participants or the
participants with lower ability managed
to complete this test-item through trial and error.

Figure 6.18. Test-item analysis results for observed responses (test-item-16)

All of these test-items were important for this study. Therefore, the researcher decided to allow
these misfitting test-items to remain in the instrument. However, the structure of these test-items
was reviewed in order for the participants to understand them better (Appendix F).


Figure 6.19. Quest variable map (pilot test-2), with harder test-items above the average and easier test-items below it.

The variable map (Figure 6.19) showed the participants’ scores on a single scale. Seventeen
participants scored above average on the TBA instrument. Three participants scored below
average, with the lowest score below –2.0. The scores skewed upwards, demonstrating that
this group of participants had relatively high ability regarding ICT-literacy. Test-item-12.2
(Internet searching – using Boolean search) and test-item-11.4 (know how to evaluate credible
websites) were the hardest test-items, while test-item-5 (setting document header and footer
correctly) was the easiest one.

Next, looking at the item estimates, the INFIT MNSQ and the infit-t value (Figure 6.20), the
reliability of estimate was quite high at 0.73, and the INFIT MNSQ and infit-t values were
0.99 and 0.16 respectively. This confirms that the instrument was reliable and that the data
were compatible with the model (Adams & Khoo 1996).


Figure 6.20. Summary of test-item estimates and fit statistics

Furthermore, the researcher encountered a few problems while evaluating the tasks completed
by this group of participants. In Task-4(c), the participants were given an activity where they
were required to create a suitable teaching and learning resource that could be used as a
classroom learning aid. For this activity the participants were expected to include a brief
introduction of the topic and must include the three media files (still picture, video and scanned
document). The participants were free to decide which computer application was most suitable
for them: i.e. appropriate for a classroom environment and could be incorporated into the three
media files.

Based upon the requirements for this task, it was anticipated that the participants would choose a
presentation-type computer application such as Microsoft PowerPoint, Corel Presentations or
Impress. However, two participants chose to use Microsoft Word for this task. In order to
accommodate this option, the researcher decided that test-item-15 should be revised. The revised
version was a partial credit format of test-item-15 (creating a teaching and learning resource
using a presentation application):
• Test-item-15: creating a teaching and learning resource
There were three steps: 1) unable to complete [0 score]; 2) using other type of computer
application [1 score]; and 3) using presentation-type computer application [2 score].

Another problem concerned test-item-18 (using MS Access features). The task required the
participants to fill in two different database forms. The first was a straight-forward blank form


where the participants were required to type in the details given to them, and to include them in
the blank form. The second database form required the participants to look up the data they had
just typed in, and add a few more database records. Six participants had a problem locating the
data in the second database form. Instead, the participants added the same data again in this
second form, thus duplicating the record. In order to recognise this flaw, the researcher decided
to change the steps in the partial credit format. The revised version for evaluation of
test-item as:
• Test-item-18: using MS Access features
There were three steps: 1) unable to complete [0 score]; 2) manage to add parts of the
new record [1 score]; and 3) manage to add all data for the new record [2 score].

Due to the changes made for test-items-6, 11, 15, 16 and 18, the pilot data was required to
be re-evaluated to incorporate these changes. The same data was used for this re-evaluation.

6.5.2 Pilot testing-2: Round-2

The test-item fit map of the re-evaluated data (Figure 6.21) showed four test-items in the
instrument with an INFIT MNSQ value below 0.77, meaning that these test-items provided
redundant information, and one test-item with an INFIT MNSQ value above 1.30, meaning that
this test-item did not discriminate well. The test-items were 4, 9, 10, 14 and 15. These were the
same test-items that were identified during pilot study round 1.

Figure 6.21. Test-item fit map (Pilot test-2 round-2)

The next data to be evaluated were the person and test-item distributions on a single-scale map.
Referring to Figure 6.22, the map was skewed upwards: only two participants scored below
average, sixteen scored above average, and two were on the average mark. The hardest test-items
were test-items 11.2 and 10.4, while the easiest was test-item-5.


Figure 6.22. Quest variable map (pilot test-2 round-2), with harder test-items above the average and easier test-items below it.

Looking at the item estimates, the INFIT MNSQ and the infit-t value (Figure 6.23), the reliability
of estimate was recorded at 0.71, and the INFIT MNSQ and infit-t values were 0.99 and 0.15
respectively. This showed that the instrument was reliable and that the data were compatible with
the model (Adams & Khoo 1996).

Figure 6.23. Summary of test-item estimates and fit statistics (Pilot testing-2 round-2)


A discerning observation was made by the researcher at this point in the study, concerning the
arrangement of the tasks suggested by the PoE members in the earlier Delphi interactions. The
online discussion forum task was not the easiest task, contrary to the PoE members’ prediction.
In fact, by making this the first task, some participants’ morale declined, which may have
affected their focus and ability to respond to the other tasks. Thus this task had to be
re-arranged.

Another observation pertaining to the order of the tasks concerned Task-4(b). In order to make
the flow of the tasks smoother and clearer to participants, the researcher took the first part of
Task 4(b) and moved it to a different location. Originally, Task-4(b) required the participants
to: 1) browse four pre-identified websites and assess their reliability; 2) conduct an Internet
search; 3) copy and paste the URL address of the website that proved to be suitable; and 4)
bookmark the selected website. However, the first step of this task did not really coincide with
the subsequent steps, and this confused the participants. Consequently, test-item-1 was removed
from this task and repositioned as the second test-item, to be conducted after the task that tested
the participants’ ability to use an online discussion forum. Test-items-2, 3 and 4 of Task-4(b)
remained unchanged.

The final step for Task-5 was also modified. This task was all about testing the participants’
ability to use a database application. However, the last step tested their ability to use email. Thus
the last step was removed from Task-5 and developed as a separate new task (Task-6).

The final instrument was established and it consisted of six tasks (Appendix G). These were
as follows.

Task-1: The participants were required to create a new (digital) folder and name the folder
ICTexperiment. It was explained that all their work needed to be saved in this folder.
The participants were then asked to open a word processing document. With this
document they were required to change the margins, insert page numbers and insert a
header and footer to the document.
Task-2: Using the Microsoft spreadsheet file, which was created by the researcher, the
participants needed to calculate total marks and percentages of the marks for each
student in the list. The participants were allowed to use any other calculating method
including a calculator, in case they were unable or did not have the skill to use MS
Excel formula. They were also required to create a graph that would reflect the
students’ achievements.


Task-3(a): The participants were provided with a digital camera and a scanner. They were
asked to use both technologies in order to take a picture of a small plant, take a video
of them watering the small plant, and also scan a diagram given to them. All files
were to be saved in the ICTexperiment folder (created previously).
Task-3(b): The participants were asked to name one computer application that they know could
be used for photo editing. Using either the named application or other photo editing
applications available in the computer, they were required to resize the picture of a
small plant that they had taken in Task-3(a). The picture was to be resized to 400 x
300 pixels or 40% x 30% from the actual picture.
Task-3(c): To test the participants’ ability to search the Internet for correct and reliable
information, they were asked to search on the topic ‘Products of photosynthesis’.
They were to copy and paste the URL address of the website on the provided
document, and they also needed to bookmark the website.
Task-3(d): Using data and information gathered from Task-3(a), 3(b) and 3(c), the participants
needed to create a teaching and learning resource that could help them in their
classroom. The topic was ‘photosynthesis’. They had to include: a simple definition
of photosynthesis; a resized picture of a small plant; a video of watering a plant; a
diagram of the photosynthesis process; information on the products of photosynthesis;
and, finally, a reference list.
Task-4(a): Using a discussion forum website developed by the researcher, the participants were
required to register themselves and write an appropriate response to a pre-identified
discussion forum thread.
Task-4(b): The participants were asked to browse four pre-identified websites and evaluate
their reliability and trustworthiness. They also needed to provide reasons for their
evaluation.
Task-5: Using a simulated database that was developed by the researcher to resemble the
Malaysian school student information system, the participants were required to add
new student information and the students’ grades into the database. The system used
two different database forms: one for student details and another for student grades.
However, the students’ grade database form was linked to the student details
information, and therefore the information for student details needed to be inserted
only once.
Task-6: The participants were required to email a document to the researcher and also send a
carbon copy of the email to a different email account of the researcher. They needed
to insert their given unique research identifier as the subject of their email.


In order to evaluate the tasks, a test-item evaluation checklist was developed. After the pilot
study, the test-item evaluation checklist was reduced to 19 test-items (see Table 6.6 below and
Appendix H). The tasks were scored using both dichotomous scoring (yes/no) and the partial credit
format; a small illustrative sketch of one scored record follows the table.
Table 6.6. List of finalised test-items included in the ICT-literacy TBA instrument

1. Manage file
2. Set margin correctly
3. Set page number correctly
4. Set document header and footer correctly
5. Using spreadsheet formula
6. Correct way of preparing a graph
7. Name acceptable picture editing application
8. Picture resized correctly
9. Manage file
10. Internet searching
11. Choose credible websites
12. Internet navigation – bookmark
13. Use presentation app. to create T&L resources
14. Inserting media
15. Proper citation
16. Using online forum
17. Know how to evaluate credible website
18. Using MS Access features
19. Using email
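As noted before Table 6.6, the finalised checklist mixes dichotomous and partial credit scoring. A hypothetical record for a single participant could be stored as follows; the item keys, scores and values are illustrative only, not data from the study:

    # Hypothetical evaluation record for one participant on the finalised checklist.
    participant_record = {
        "01_manage_file": 1,          # dichotomous: 1 = yes, 0 = no
        "05_spreadsheet_formula": 2,  # partial credit: ordered category score
        "17_evaluate_websites": 3,    # partial credit: number of websites correctly judged
        "19_email": 2,                # partial credit: attachment and carbon copy
    }

    raw_total = sum(participant_record.values())
    print(raw_total)   # raw score, to be converted to a logit estimate by the Rasch analysis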

The validity and reliability of the tasks in the ICT-literacy TBA instrument had been tested, and
the instrument was now ready to be used. As such, the ICT-literacy TBA instrument was to be
trialled on a larger number of participants. Thus the researcher decided to choose a suitable cohort of students
from UPSI and invited the whole cohort to participate in this instrument trial process. The
outcomes were later analysed and are discussed in the next section.

6.6 Final Instrument Trial Process

Similar to the pilot studies, arrangements had to be made prior to the actual instrument trial
process. Arrangements were made for the location in which to conduct the trial, the availability of
computer applications and peripherals, and the invitation of a large body of students who might be
willing to participate.
• Participants: the researcher procured an authorisation from UPSI for its latest students’
record list (Table 6.7). This record lists the number of students currently enrolled in the
University from Semester-1 to Semester-11 (students on an extension semester). At the
time of writing this thesis (2010), the University had 10,224 students enrolled in nine
different faculties.

Each faculty offers different majoring programs for the trainee teachers. Student intakes
for the University each year are based on the Ministry of Education Malaysia projected
teacher requirements for the country. Each year, the University offers courses and
places based on this projection. Therefore, looking at Table 6.7, it was clear that some
majoring courses were no longer being offered (e.g. primary education, sport


psychology and visual communication arts) and some were only recently offered (e.g.
Tamil language).

Table 6.7. Total UPSI students by faculty/program/semester (for year 2010)


FACULTY / PROGRAM: student counts by semester (1 to 11), followed by the row TOTAL

LANGUAGE & COMMUNICATIONS
AT01: Malay Literature: 20 40 40 41 1 2 | 144
AT05: Malay Language: 139 146 120 1 129 | 535
AT06: TESL: 132 58 13 47 89 5 1 1 | 346
AT49: Arabic Language with Education: 20 17 37 | 74
AT50: Chinese Language with Education: 21 89 49 48 | 207
AT62: Tamil Language: 34 | 34
Total (semesters 1 to 11): 366 0 350 62 292 1 259 6 2 1 1 | 1340

MANAGEMENT & ECONOMICS
AT08: Accounting: 74 36 26 1 48 22 1 | 208
AT18: Economics: 2 25 40 19 2 | 88
AT21: Business Management: 74 48 1 30 2 | 155
AT24: Education Management: 9 27 | 36
AT45: Entrepreneurship & Commerce: 100 2 70 25 62 32 | 291
Total (semesters 1 to 11): 259 2 181 1 76 1 150 103 5 0 0 | 778

EDUCATION & HUMAN DEVELOPMENT
AT04: Guidance & Counselling: 27 44 32 103 1 175 3 | 385
AT10: Special Education: 79 87 2 87 1 131 3 | 390
AT19: Early Childhood Education: 97 80 90 24 16 2 | 309
AT34: Primary Education: 1 2 | 3
Total (semesters 1 to 11): 203 0 211 124 214 18 307 7 3 0 0 | 1087

MUSIC & PERFORMING ARTS
AT22: Music: 18 30 22 15 24 9 9 2 1 | 130
Total (semesters 1 to 11): 18 0 30 22 15 24 9 9 2 1 0 | 130

VOCATIONAL & TECHNICAL EDUCATION
AT07: Home Economics: 47 1 104 | 152
AT09: Agricultural Science: 56 2 119 | 177
AT31: Life Skills: 41 63 88 128 167 229 2 | 718
Total (semesters 1 to 11): 144 3 286 88 128 167 229 2 0 0 0 | 1047

SPORT SCIENCE & COACHING
AT03: Sport Science: 20 20 1 29 31 28 2 | 131
AT42: Sport Psychology: 1 18 2 3 | 24
AT43: Coaching Science: 29 51 31 14 1 34 2 | 162
AT59: Physical Education: 97 70 | 167
Total (semesters 1 to 11): 146 51 121 15 29 2 83 32 5 0 0 | 484

HUMAN SCIENCE
AT32: History: 170 2 172 151 108 179 2 1 | 785
AT33: Geography: 50 101 1 123 21 182 20 | 498
AT35: Islamic Studies: 2 203 53 96 102 36 | 492
AT41: Moral Studies: 229 71 49 1 90 | 440
AT58: Malaysian Studies: 197 4 310 3 211 300 | 1025
Total (semesters 1 to 11): 646 8 857 57 630 532 487 22 0 0 1 | 3240

SCIENCE & MATHEMATICS
AT11: Biology: 71 | 71
AT12: Physics: 40 | 40
AT13: Chemistry: 19 1 40 | 60
AT14: Mathematics: 98 105 36 62 3 6 1 | 311
AT16: Science: 125 70 150 1 146 4 7 1 1 | 505
AT48: Science (Mathematics): 49 68 32 94 1 | 244
Total (semesters 1 to 11): 291 1 394 0 218 1 302 8 13 2 1 | 1231

ARTS, COMPUTING & CREATIVE INDUSTRY
AT20: Information Technology: 153 19 1 2 | 175
AT23: Arts: 81 40 13 166 65 77 24 1 | 467
AT44: Visual Communication Arts: 1 63 11 1 | 76
AT46: Multimedia: 15 38 1 4 | 58
AT47: Computerised Design Technology: 18 47 3 41 1 1 | 111
Total (semesters 1 to 11): 267 85 40 13 166 85 144 77 8 1 1 | 887

TOTAL STUDENTS BY SEMESTER (1 to 11): 2340 150 2470 382 1768 831 1970 266 38 5 4 | 10224

In UPSI, all trainee teachers were required to take a three-credit Introduction to Information Technology & Communication (UTM1013) subject during their first semester. This is a compulsory introductory subject on basic computer and educational information technology (IT) skills. By their fourth semester, the trainee teachers should not only have acquired enough education-based IT knowledge, they would also have chosen their majoring course. Therefore, all enrolled semester-four trainee teachers from all nine faculties were invited by the researcher to participate in this study.

All trainee teachers in UPSI were also introduced to APA-style citations and bibliography in
their first semester. Assignments given by lecturers to the trainee teachers usually require them
to apply the APA-style citations. Fourth-semester trainee teachers would have picked up enough
practice, knowledge and skills in using APA-style citations.

A total of 382 trainee teachers were invited, with all faculties represented except Science and Mathematics (no semester-four trainee teachers were enrolled under this faculty at the time; see Table 6.7). However, only 148 trainee teachers were willing to participate (Table 6.8). The difficulty in recruiting more participants was partly due to their own time constraints. Since it was the middle of the semester, it was difficult for the trainee teachers to dedicate two-and-a-half hours of their academic class schedule to this study. Moreover, the location for this instrument trial was situated well away from their faculty classrooms, making it difficult for some trainee teachers to participate and still return on time for their next lecture.

Table 6.8. Participant distribution by faculty/gender

FACULTY  MALE  FEMALE  TOTAL
Language & Communications 3 18 21
Management & Economics 1 0 1
Education & Human Development 5 16 21
Music & Performing Arts 8 14 22
Vocational & Technical Education 14 26 40
Sport Science & Coaching 2 5 7
Human Science 8 18 26
Arts, Computing & Creative Industry 4 6 10
TOTAL: 45 103 148

• Location: the same computer laboratory used for the pilot study previously was used
again for this instrument trial process. Again, two digital cameras were borrowed from
the University’s Centre for Educational Technology and Multimedia. The sessions were
also recorded using Screen2exe, which captured the computer screen activity of all
participants.

The next sections (sections 6.6.1, 6.6.2, and 6.6.3) begin with the preparations for the instrument
trial process. Personal observations during this process are also discussed and the findings
explained below.

6.6.1 Final instrument trial: Preamble

The procedure commenced with the researcher giving the participants a 10-minute background summary of the study and what was expected of them. The participants were assured that their identifying details and answers were strictly confidential, and that their answers/performance outcomes would not influence their current university grades. The participants were left to complete the tasks in any order they wished and were given two hours to finish all the tasks. It was also explained that the scoring objective was not to grade or discredit them over their performance. On the contrary, they were advised that their ability or inability in ICT-literacy would provide valuable data for this study. The data would provide the researcher with information regarding trainee teachers' potential deficiencies in certain ICT-literacy areas. This information could later be used to customise a suitable method of blending much-needed ICT-literacy skills into the trainee teachers' training modules.

As mentioned before, each participant was given a unique research identifier to enable the
researcher to link the participants' answers in their ICT-literacy TBA instrument with their
screen capture file. To ensure the integrity of the answers given by them, the researcher avoided
answering any questions from the participants with regard to the tasks given.

6.6.2 Final instrument trial process: Observation

Dealing with the different educational backgrounds of the participating trainee teachers proved to be challenging. Through the researcher's observations, some of them were very keen to perform the tasks, appearing to work quickly, while some regarded the whole exercise with indifference. Others were perfectionists who tried to ensure that they completed every task correctly; this group took longer to check their work and showed frustration at the time allocated, as it did not afford them enough time to perform the tasks properly. Others seemed to understand that what they did not know was as important to the study as what they did know; after a few tries, when they were unable to complete a certain task, they left it and proceeded to the next one.

6.6.3 Final instrument trial process: Findings

All 148 participants completed all six tasks in the ICT-literacy TBA instrument. The responses
for the ICT-literacy TBA instrument were coded by the researcher using the test-item evaluation
form (see Appendix H). The responses were then entered into an electronic data file using
Microsoft Excel. A text file was then prepared for the Quest analysis (Adams & Khoo 1996).
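
As an indication of how such a text file can be prepared, the sketch below converts a spreadsheet export of the coded responses into one fixed-width record per participant (an identifier followed by one scored response per test-item), which is the general layout expected by Rasch analysis programs such as Quest. The file names, identifier width and missing-response code used here are assumptions for illustration only, not the actual layout used in this study.

    import csv

    # Hypothetical layout: a 5-character research identifier followed by one
    # scored response (0, 1 or 2) per test-item; 9 marks a missing response.
    NUM_ITEMS = 19

    with open("responses.csv", newline="") as src, open("quest_input.txt", "w") as dst:
        for row in csv.DictReader(src):
            identifier = row["identifier"].ljust(5)
            codes = "".join(row.get(f"item{i}", "9") or "9" for i in range(1, NUM_ITEMS + 1))
            dst.write(identifier + codes + "\n")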

Figure 6.24. Summary of test-item estimates and fit statistics (instrument trial)

The reliability of estimates from the instrument trial showed that the TBA instrument maintained its reliability (Figure 6.24). The INFIT MNSQ and infit-t values also indicated that the data were compatible with the Rasch IRT model. No test-items attracted zero or perfect scores, which means that all test-items were analysed and none were automatically deleted from the Quest estimate. These findings confirmed the results of pilot study-2 round-2: this ICT-literacy TBA instrument can be used to evaluate Malaysian trainee teachers' ICT-literacy levels. All the necessary instrument validation and reliability testing processes (including the pilot studies) had been conducted, with the tasks in the TBA instrument revised and re-structured accordingly.
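
For reference, the infit (information-weighted) mean square reported for a test-item is commonly defined, for dichotomously scored responses, along the lines of

    INFIT MNSQ_i = Σ_n (x_ni - P_ni)² / Σ_n P_ni (1 - P_ni)

where x_ni is the observed response of participant n to test-item i and P_ni is the success probability predicted by the Rasch IRT model; values near 1.0 therefore indicate that observed responses vary about their model expectations roughly as predicted.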

Next, after the trainee teachers’ ICT-literacy data had been collected through the ICT-literacy
TBA instrument, the researcher needed to evaluate this data. This activity was important in order
to demonstrate that the ICT-literacy TBA instrument was robust.

6.7 Trainee Teachers' ICT-literacy Data Diagnostic

The Quest estimate produces a Kidmap of an individual participant's performance, depicting their correct and incorrect response patterns (Figure 6.25) according to the Rasch IRT model's expectations. The Kidmap locates each test-item on a vertical scale according to its difficulty, from easiest to hardest, and then separates the test-items horizontally (left or right) according to whether the participant answered them correctly or not. Importantly, the map locates the participant's ability on the same vertical scale (marked with XXX).

Figure 6.25. Kidmap – showing an individual’s performance

According to the Rasch IRT model, an individual has an increasing probability of achieving test-items below their ability estimate and a decreasing probability of achieving items above their estimate (Adams & Khoo 1996). The test-items achieved by each participant are plotted on the left-hand side and the test-items not achieved are plotted on the right-hand side of this map. The participant's ability estimate and their fit to the model (INFIT MNSQ value) are reported in the Kidmap (see top right-hand side of Figure 6.25). In order to interpret the individual's performance data, Figure 6.26 below shows how the Kidmap is read and interpreted. It shows the participant's achievement and also the areas where their performance is weak.
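
This relationship can be stated more precisely. In the Rasch model for dichotomously scored responses, the probability that a participant with ability estimate θ_n answers test-item i of difficulty δ_i correctly is

    P(x_ni = 1) = exp(θ_n - δ_i) / (1 + exp(θ_n - δ_i))

so a test-item whose difficulty equals the participant's ability estimate carries exactly a 50% probability of success, easier test-items carry a higher probability, and harder test-items a lower one.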

The four regions of the Kidmap are interpreted as follows:
• Test-items correctly performed unexpectedly, given the participant's ability in this test (see XXX): the participant may have guessed their answer to these test-items or possesses an unexpected area of strength.
• Test-items correctly performed as expected, given the participant's ability in this test (see XXX).
• Test-items incorrectly performed as expected, given the participant's ability in this test (see XXX).
• Test-items incorrectly performed when it is expected they should have been correct, given the participant's ability in this test (see XXX): this result shows where the participant should concentrate more effort to succeed with this skill development task.

Adapted from Ryan & Williams (2007)

Figure 6.26. Interpreting the Quest Kidmap

Another important document required for this diagnosis was the ICT-literacy TBA instrument
test descriptor (Table 6.9). This descriptor lists the test-item number, together with its learning
domain (as depicted earlier in the test-item instrument’s specification matrix Table 5.6) and the
associated competency description.

Table 6.9. Test-item descriptor: ICT-literacy TBA instrument

Test-item Number | ICT Indicators Tested | Competency Description
1, 9 | Manage | Able to organise, classify and store information in a computer; apply to an existing classification information scheme to store information, and its source
2, 3, 4, 19 | Understanding and handling ICT tools | Able to operate a computer, use emails, manage files, use basic teaching and learning computer-based module; and use basic word processing application
5.1, 5.2 | Assess | Able to utilise ICT tools to assist them in assessing student learning in schools
6, 18 | Production and analysis | Able to use advanced ICT tools (e.g. advanced features of word processing, spreadsheet, database and presentation software) and understand the different features of each software, and the type of document each software application produced
7 | Plan/define | Able to determine the nature and extent of the information needed to solve a problem, which includes identifying key concepts of the problem and develop potential strategies for a solution
8, 14 | Access | Able to collect and/or retrieve digital information required from various digital media and sources using appropriate software and ICT tools that suit the required needs
10.1, 10.2, 12 | Navigation & search | Able to select and use appropriate search engines, use the appropriate searching keywords, construct complex queries; and use advanced search features. Able to upload and download digital information, and understand the concept and use of the function bookmark
11 | Integrate | Able to synthesise, summarise, compare, and contrast the various bits of information from multiple sources
13.1, 13.2 | Create | Able to adapt, apply, design, or construct information/resources in digital environments, which includes: graphics, documents, presentations and web pages
15 | Reflect | Able to adhere to copyright rules and manage to properly cite and give due credit to the author of the source
16.1, 16.2, 16.3 | Communicate/collaborate | Able to collaborate and communicate with various people in a variety of contexts and also work in a team. Easily adapt and use various learning contexts such as through discussion forums, appropriate chat rooms and e-groups to disseminate information relevant to a particular audience
17 | Evaluate | Able to judge and evaluate the degree to which digital information satisfies the needs of a given task, which includes determining the authority of the source, bias, timeliness, and relevance

An example is shown in Figure 6.27, where Candidate-4 has an ability estimate of 0.75, a fit index of 0.99 and a total score of 62%. The fit index, which is an INFIT MNSQ value, is close to the value predicted by the Rasch IRT model, which is +1.0. The participant's ability estimate is plotted with XXX in the centre column. This particular participant has a 50% probability of answering test-items located at their estimated ability.

The dotted line on the left indicates the upper bound of this participant’s ability estimate and the
dotted line on the right indicates the lower bound.

Figure 6.27. Kidmap for Candidate-4 (showing good fit to the model with fit index = 0.99)

Test-item-2, which tested participants' understanding and ability in handling ICT tools, was correctly performed despite a less than 50% probability of success, whilst test-items-6, 14, 17, 8, 9 and 4 were incorrectly executed despite having more than 50% probability of success. Candidate-4 would not have been expected to complete test-items-5.2, 11 and 17, which are located above their ability estimate.
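
As a small numerical illustration of how these Kidmap expectations follow from the model, the sketch below evaluates the Rasch success probability for Candidate-4 (ability estimate 0.75, taken from Figure 6.27) against a few item difficulties; the difficulty values themselves are assumed for illustration and are not the calibrated difficulties from the Quest output.

    import math

    def p_success(theta, delta):
        """Rasch probability that a person of ability theta succeeds on a
        dichotomous item of difficulty delta."""
        return math.exp(theta - delta) / (1 + math.exp(theta - delta))

    theta = 0.75                      # Candidate-4's ability estimate
    for delta in (-0.5, 0.75, 2.0):   # hypothetical item difficulties
        print(f"difficulty {delta:+.2f}: P(correct) = {p_success(theta, delta):.2f}")
    # difficulty -0.50: P(correct) = 0.78
    # difficulty +0.75: P(correct) = 0.50
    # difficulty +2.00: P(correct) = 0.22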

Furthermore, based on this Kidmap, Candidate-4 needs to work on test-items-4, 8 and 9 immediately. The test-items, the ICT-literacy indicator(s) tested and the competencies targeted are described below:

Table 6.10. Descriptors of Candidate-4 unexpected incorrect test-items

Test-item No. | ICT Indicator(s) Tested | Competency Description
4 | Understanding and handling ICT tools | Able to operate a computer, use emails, manage files, use basic teaching and learning computer-based module; and use basic word processing application
8 | Access | Able to collect and/or retrieve digital information required from various digital media and sources using appropriate software and ICT tools that suit the required needs
9 | Manage | Able to organise, classify and store information in a computer; apply to an existing classification information scheme to store information, and its source

Apart from this, Candidate-4 needs to improve their knowledge and skills pertaining to these
learning domains:

Table 6.11. Descriptors of Candidate-4’s expected incorrect test-items

Test-item No. | ICT Indicator(s) Tested | Competency Description
5.2 | Assess | Able to utilise ICT tools to assist them in assessing student learning in school
11 | Integrate | Able to synthesise, summarise, compare, and contrast the various bits of information from multiple sources
17 | Evaluate | Able to judge and evaluate the degree to which digital information satisfies the needs of a given task, which includes determining the authority of the source, bias, timeliness, and relevance

For test-item-17, Candidate-4 was only able to correctly evaluate the authority, bias, timeliness
and relevance of two (out of four) websites listed by the researcher.

Another example is shown here in Figure 6.28 where Candidate-8 has a low ability estimate
of 0.20, fit index of 0.92 and a total score of 48.28%.

Figure 6.28. Kidmap for Candidate-8 (showing good fit to the model with fit index = 0.92)

Candidate-8 (Figure 6.28) did not have any unexpected incorrect test-items; however, there were many test-items estimated as more difficult than the participant's ability. Be that as it may, in order to be recognised as an ICT-literate trainee teacher, and by implication be able to teach in an MSS environment, Candidate-8 needs to work more on the learning domains stated below.

Table 6.12. Descriptors of Candidate-8’s expected incorrect test-items

Test-item No. | ICT Indicator(s) Tested | Competency Description
6 | Assess | Able to utilise ICT tools to assist them in assessing student learning in school
11 | Integrate | Able to synthesise, summarise, compare, and contrast the various bits of information from multiple sources
12 | Navigation & search | Able to select and use appropriate search engines, use the appropriate searching keywords, construct complex queries; and use advanced search features. Able to upload and download digital information, and understand the concept and use of the function bookmark
13 (13.1 & 13.2) | Create | Able to adapt, apply, design, or construct information/resources in digital environments, which include: graphics, documents, presentations and web pages
14 | Access | Able to collect and/or retrieve digital information required from various digital media and sources using appropriate software and ICT tools that suit the required needs
15 | Reflect | Able to adhere to copyright rules and manage to properly cite and give due credit to the author of the source
16 | Communicate/collaborate | Able to collaborate and communicate with various people in a variety of contexts and also work in a team. Easily adapt and use various learning contexts such as through discussion forums, appropriate chat rooms and e-groups to disseminate information relevant to a particular audience
17 | Evaluate | Able to judge and evaluate the degree to which digital information satisfies the needs of a given task, which includes determining the authority of the source, bias, timeliness, and relevance

Candidate-8 had correctly completed two test-items that had been estimated as more difficult than this participant's ability. The two test-items were:

Table 6.13. Descriptors of Candidate-8’s unexpected correct test-items

Test-item No. | ICT Indicator(s) Tested | Competency Description
2 | Understanding and handling ICT tools | Able to operate a computer, use emails, manage files, use basic teaching and learning computer-based module; and use basic word processing application
5.2 | Assess | Able to utilise ICT tools to assist them in assessing student learning in schools

Another example is shown in Figure 6.29 where Candidate-40 has a high ability estimate
of 2.82, fit index of 0.99 and a total score of 89.66%.

Figure 6.29. Kidmap for Candidate-40 (showing good fit to the model with fit index = 0.99)

Candidate-40 was able to correctly complete all test-items except test-items-5.2 and 17. These test-items tested the competencies shown below.

Table 6.14. Descriptors of Candidate-40 incorrect test-items

Test-item No. | Learning Domain | Competency Description
5.2 | Assess | Able to utilise ICT tools to assist them in assessing student learning in school
17 | Evaluate | Able to judge and evaluate the degree to which digital information satisfies the needs of a given task, which includes determining the authority of the source, bias, timeliness, and relevance

For test-item-17, Candidate-40 was expected to be able to correctly evaluate the authority, bias, timeliness and relevance of three (out of four) websites chosen by the researcher in the ICT-literacy TBA instrument. However, Candidate-40 only managed to evaluate two websites correctly (test-item-17.1 and test-item-17.2). For test-item-5.2, even though the test-item was predicted as having more than 50% probability of success, the analysis grouped this test-item as 'harder, not achieved'.

6.8 Chapter-6 Summary

This chapter explained the iterative process of validating the ICT-literacy TBA instrument so that it was ready to be implemented in the final instrument trial process. The draft TBA instrument, previously validated by the PoE members (Chapter-5: Expert judgement on ICT-literacy indicators), was made ready for the pilot studies and the final instrument testing process. The majority of the original tasks in the draft TBA instrument had to be revised and re-structured. Eight of the 19 test-items left in the final version were scored using a partial credit-scoring format. This chapter also explained how the ICT-literacy TBA instrument data would be analysed and interpreted. The next chapter will discuss both the qualitative and quantitative findings of this study.

Chapter 7
Discussion of the Results

7.1 Overview

The previous chapters discussed the plans for the mixed methods research, presenting the
literature review of ICT-literacy assessment strategies in Chapter-2 and the conceptual research
model in Chapter-3; the research design in Chapter-4; the qualitative approach of the Delphi
interactions with the panel of experts (PoE) members in Chapter-5; and the quantitative
approach involving the ICT-literacy TBA instrument validation and reliability testing with the
trainee teachers in Chapter-6.

This chapter elaborates on the findings from the previous chapters. The research questions are
also discussed in the light of these findings, with comparisons between the currently available
ICT-literacy (paper-based, self-efficacy) instruments and the new ICT-literacy TBA instrument
being explored.

The research questions for this study are:


1. What are the suitable ICT-literacy indicators for trainee teachers’ ICT-literacy
assessment?
2. How can the proposed task-based ICT-literacy assessment evaluate trainee
teachers’ ICT-literacy levels?

This discussion comprises the following sections:


• Answering the research questions;
• Comparison with existing instruments; and
• Chapter-7 summary.

7.2 Answering the Research Questions

Reflecting on the study’s research questions, this section aims to further justify the answers for
both research questions in sections 7.2.1 and 7.2.2 below.

7.2.1 What are the suitable indicators for trainee teachers’ ICT-literacy assessment?

To answer this question, the ICT-literacy indicators were identified in Chapter-3 through an
examination of the literature and current ICT standards. This investigation revealed that the
most reviewed and adapted ICT-literacy framework is the higher education ICT proficiency
model proposed by the International ICT Literacy Panel (International ICT Literacy Panel
2002). It listed seven critical components: define; access; manage; integrate; evaluate;
create; and communicate. These components formed the backbone for this thesis. Five
more ICT indicators were proposed by the researcher, based on currently available
ICT-literacy assessment tools and the MSS ICT skills requirement (assess; reflect;
understanding and handling ICT tools; production and analysis; and navigation and
search). The literature also suggests that in order to evaluate ICT-literacy, the assessment must not only test the participants' knowledge of ICT, but must also include an assessment of their ICT-based skills.

Thus in order to observe both ICT skills and knowledge of the trainee teachers,
this thesis proposes a task-based method of assessment to better evaluate their
level of ICT-literacy.

To validate the suitability of the ICT indicators for Malaysian trainee teachers, feedback from
the panel of experts (PoE) members was obtained through the use of the Delphi technique. After
two Delphi rounds, the PoE members reached a consensus, where they agreed that all 12 ICT-
literacy indicators were relevant for evaluating ICT-literacy amongst trainee teachers in
Malaysia. No additional ICT-literacy indicators were suggested. Consequently, this information
forms the proposed model for evaluating ICT-literacy amongst trainee teachers in Malaysia
(Figure 7.1).

Figure 7.1 (below) groups the 12 ICT-literacy indicators (define, access, manage, integrate, evaluate, create, communicate, reflect, assess, understanding and handling ICT tools, production & analysis, and navigation & search) under ICT skills and ICT knowledge, which together constitute ICT-literacy.

Figure 7.1. Proposed ICT-literacy assessment framework for trainee teachers

This ICT-literacy assessment framework extends the earlier higher education ICT proficiency
model as proposed by the International ICT Literacy Panel (Figure 7.2). In the new ICT-literacy
model shown above, the ICT ethical and cognitive proficiencies are grouped under ICT
knowledge, while ICT technical proficiency is known as ICT skills.

In that model (Figure 7.2, below), ICT-literacy comprises seven components (define, access, manage, integrate, evaluate, create and communicate) that draw on cognitive, ethical and technical proficiencies.
Figure 7.2. The higher education proficiency model (Williamson, Katz & Kirsch 2005)

This thesis holds that to achieve proficiency in ICT-literacy, a person must possess two things: ICT skills and ICT knowledge. Findings from the Delphi-1 study showed that the PoE members agreed that a different way of evaluating was required, instead of relying upon the common self-efficacy tests. In the Delphi-1 round-2 interaction, Expert-1 proposed the need to include skills in using other ICT tools, such as a digital camera/video recorder, scanner, printer and digital projector, in the TBA instrument, instead of asking the trainee teachers whether they know how to perform the task (see section 5.2.2).

7.2.2 How can the proposed task-based ICT-literacy assessment evaluate trainee
teachers’ level of ICT-literacy?

Task-based ICT-literacy assessment requires the trainee teachers to perform actual ICT-based tasks. Upon reflection, when randomly asked, a person might not recall exactly how they performed a certain computer-based task; yet if they were put in front of a computer, they may or may not be able to complete the task. A person might also think they know how to perform a certain computer-based task, except that when they try to perform that task using a computer, they may fail instead. Thus this thesis agrees with the International ICT Literacy Panel (2002) suggestion that in order to develop a more effective ICT-literacy assessment tool, a task-based assessment is more practical. Trainee teachers should be evaluated based on their ability to either complete each required skill development task or produce the final product, which is often a digital artefact such as a spreadsheet, database or presentation file.

Hence the ICT-literacy TBA instrument was developed with this proposition in mind. The final version of the instrument consists of six computer-based tasks (see Appendix G). These tasks did not restrict how the participants should complete them: some participants might have a straightforward way of completing them, and some might have an awkward but effective way. The ICT-literacy TBA instrument recognised and acknowledged this as part of the trainee teachers' learning process. How a task is completed is not as important as the ability to complete it; as long as the trainee teachers understand the concept and context of the tasks, more efficient techniques can be learned later.

To effectively evaluate these ICT-literacy skills, the Rasch IRT model facilitated the positioning of the participants on a unidimensional measurement scale. This in turn placed the participants into their respective skill categories, where each individual's areas of ICT strength and weakness were identified, along with the associated ICT skills and knowledge requiring improvement.

Using the Kidmap produced by the Quest analysis program, the answers given by each participant were mapped into four possibilities (a small classification sketch follows the list below).

1. Test-items correctly performed unexpectedly:
   • the participant may have guessed their answer to these test-items or possess an unexpected area of strength.
2. Test-items correctly performed as expected:
   • based on the participants' ability in this test.
3. Test-items incorrectly performed as expected:
   • based on participants' ability in this test.
4. Test-items incorrectly performed when it is expected they should have been correct:
   • the participant should expend more effort to succeed with this task[s].
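
A minimal sketch of this four-way classification is given below. It assumes that each response has already been scored dichotomously and that the person ability and test-item difficulty estimates are available from the Rasch analysis; the 50% threshold mirrors the Kidmap convention of splitting test-items at the participant's ability estimate.

    import math

    def kidmap_category(correct, theta, delta):
        """Classify one scored response into the four Kidmap regions, using the
        Rasch success probability at ability theta for item difficulty delta."""
        p = math.exp(theta - delta) / (1 + math.exp(theta - delta))
        expected_correct = p >= 0.5  # test-item at or below the ability estimate
        if correct:
            return ("achieved as expected" if expected_correct
                    else "achieved unexpectedly (possible guess or hidden strength)")
        return ("not achieved, as expected" if not expected_correct
                else "not achieved although expected correct (needs attention)")

    # Example: a participant of ability 0.75 missing an easier item (difficulty -0.4).
    print(kidmap_category(correct=False, theta=0.75, delta=-0.4))
    # prints: not achieved although expected correct (needs attention)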

As previously explained in Chapter-6, each of the test-items in the tasks of the ICT-literacy TBA instrument ties in with the 12 ICT-literacy indicators. Identifying the test-item[s] that the participants performed well in, and which ones they did not, also shows which ICT-literacy indicators were involved. This in turn assists the appropriate training management, such as the universities, in designing more relevant ICT-based training for their trainee teachers.

7.3 Comparison with Existing Instruments

Sections 7.3.1 and 7.3.2 below compare the ICT-literacy TBA instrument with existing ICT-literacy assessment instruments in terms of the approach used and the content assessed by the instruments.

7.3.1 Comparing the approach of the TBA instrument with existing instruments

In Chapter-2 the conceptual research framework was used to discuss several existing ICT-
literacy assessment instruments (Murphy, Coover & Owen 1989; Compeau & Higgins 1995;
Torkzadeh & van Dyke 2001; Durndell & Haag 2002; Wong 2002; Jamieson-Proctor, Burnett,
Finger & Watson 2006; Markauskaite 2007; Ball & Levy 2008).

These earlier instruments relied on self-efficacy assessment. As such, the participants were
required to rate their own ICT ability based on questions using a Likert scale. As stressed
before, this assessment method is very useful if the aim of the evaluation is to uncover
participants’ perceived ICT-literacy levels, or perhaps their confidence in using ICT tools.

Such paper-based test-items almost always commence with a leading phrase such as: 'I believe I have the capability to …'. However, what people believe they know and what they actually know are two different things altogether (Mehrens 1992; Bhatnagar & Kandan 2000; Vaglio-Laurin 2006). Instead of testing the participants' ability to articulate or recall knowledge, task-based assessment tests whether the participant can actually put that knowledge to good use (Vaglio-Laurin 2006). Task-based assessment therefore provided this thesis with data that are more representative of the participants' actual ICT-literacy levels.

One of the primary motivations for devising task-based testing is the belief that user performance competency is best demonstrated in a live setting. A task-based test is designed to assess participants on what they know and what they are able to do, as well as the learning strategies they employ in the process of demonstrating the skill/task (Bhatnagar & Kandan 2000). Instead of telling us what they know, participants are required to show what they know by demonstrating what they can and cannot do. To find out whether a person can ride a bicycle, that person needs to demonstrate riding one, not mark on a piece of paper how they perceive their capability to ride a bicycle on a scale from 1 to 7, where 7 is very capable and 1 is not capable at all.

Depending on the purpose of the evaluation, a task-based assessment offers the best insight into the actual ICT skill development process. As an example, during the instrument testing sessions, the participants were asked to attach a Word document to their email and send it to the researcher's email address. The researcher believes that if the participants had been asked to rate their ability to attach a file to an email on a 7-point Likert scale, most of them would have rated themselves above average. However, six participants (4% of the participants in this study), instead of attaching the file, copied the entire test-item in the file (pressing the computer short-cut keys Ctrl+C) and then pasted it (pressing the equivalent short-cut keys Ctrl+V) into the body of the email before sending the email to the researcher's email address. This was how those particular participants 'attached' a file.

Another common method of evaluating ICT-literacy is a step-by-step instruction test in which participants need to follow and perform the required task (see http://www.ecdl.org). In this type of prescriptive assessment model, participants are literally guided through the task: they are prompted to open a certain file, told what to write and where to write it, what to do with a certain sum amount, and so on. The participants do not have much of a choice. Marks are given for each completed step of each task.

This thesis proposes that ICT-literacy assessment should not only involve testing both ICT skills and knowledge acquisition in the instrument itself; it must also test the three knowledge development domains: declarative (Gagne 1985), procedural (Gagne 1985) and meta-cognitive (Anderson et al. 2001). Aside from their ICT knowledge and ICT skills, the participants must also know how to use this knowledge and apply the appropriate skills by demonstrating that they can: think critically; apply newly acquired knowledge to different situations; analyse screen-based information; generate new ideas; communicate; collaborate; solve problems; and make (appropriate) decisions.

7.3.2 Comparing the contents of the TBA instrument with existing instruments

In educational environments, several studies have been conducted on trainee teachers and in-
service teachers; most involve perceptions and attitudes on preparedness to integrate ICT as
effective tools or for teaching ICT as a dedicated IT-related subject (Graham & Glen 1997;
Dawes 2000; Luke 2001; Knezek & Christensen 2002; Jamieson-Proctor, Burnett, Finger &
Watson 2006). Other studies examine computer efficacy perceptions (see Torkzadeh & van
Dyke 2001; Durndell & Haag 2002). One study included a 32-item Computer Self-efficacy Scale
(CSE) to measure perceptions of capability pertaining to specific computer-related knowledge
and skills (Murphy, Coover & Owen 1989). Since then the scale has been refined and modified
according to current IT needs.

In Malaysia, a number of perception and attitude research studies have been conducted that concentrate on the usage and/or integration of ICT in instructional strategies. This work focuses on technical abilities; differences in ICT competencies by gender, course of study and academic achievement; and computer self-efficacy, anxiety and attitudes (Abang Ahmad, Hong & Aliza 2001; Hong, Abang Ekhsan & Zaimuarifuddin Shukri 2005; Noor Azizi & Basariah 2005; Wong et al. 2005; Megat Aman Zahiri, Baharuddin & Jamalludin 2007).

Others were commercially developed ICT-literacy assessment instruments (see Table 7.1).

Table 7.1. Example of commercially developed ICT-literacy tests

Name of Test | Owner/Certifying Authority | Description

European/International Computer Driving Licence (ECDL/ICDL) | ECDL Foundation
• End-user computer skills certification
• Costs about £89.95 for study material and £100 for log book and exam fees (per module)

Prentice Hall Train & Assess IT Testing Tool (TAIT) | Prentice Hall
• Offers both training and assessment of ICT competency
• The content of the training and assessment is based on the Microsoft Office applications
• Costs about US$60 (which includes the right to use all the materials and tests in TAIT for a whole semester)

iSkills™ | Educational Testing Service (ETS)
• Evaluates the ability to perform several scenario-based tasks that also assess the ability to define, access, manage, integrate, evaluate, create and communicate digital information
• Costs about US$20 per test

However, none of these commercially developed ICT-literacy tests was specifically developed for use in an educational setting with teachers or trainee teachers. In order to avoid resistance from the trainee teachers, the testing needs to be as friendly as it can be. Having the test developed in a familiar environment, using their normal day-to-day tasks as the skill-assessment questions, should make the trainee teachers feel at ease. This matters because the trainee teachers' support and approval of this TBA instrument helps to ensure they will maintain a positive attitude towards their ICT instruction strategies later, when undergoing their in-service placements.

7.4 Chapter-7 Summary

This chapter summarised the three research phases of this thesis in the light of the data analysis and explained how the research questions for this study were answered. There was further comment on the mixed methods approach used to develop the ICT-literacy TBA instrument, which differentiates its instructional content from existing paper-based/self-efficacy assessment tools. The next chapter concludes this thesis.

Chapter 8
Conclusions

8.1 Overview

The previous chapter summarised the research claims, explaining how the research questions for this study were answered. This final chapter draws together the findings and puts forward a few last points on the means to implement the TBA, while explaining the limitations of this research and making suggestions for future research.

This chapter is divided into the following sections:


• The need for a new ICT-literacy instrument;
• Existing research and ICT-literacy standards;
• Expert judgements on ICT-literacy indicators;
• ICT-literacy TBA instrument validation and testing;
• The ICT-literacy TBA instrument: concluding thoughts;
• The ICT-literacy TBA instrument: points to consider;
• Limitations of the study;
• Unexpected findings;
• Suggestions for future research; and
• Chapter-8 summary.

8.2 The Need for a New ICT-literacy Instrument

This study proposed a new approach to evaluating ICT-literacy. The focus was to create a new ICT-literacy assessment instrument for trainee teachers in Malaysia that could evaluate their actual ICT proficiency. The research, as it transpired, was based on the researcher's professional practice as a teacher trainer of ICT-related courses at one of the public universities in Malaysia. As such, it is very frustrating to be training final-year teacher trainees who will become in-service teachers in less than a year, while knowing that they may lack the ICT knowledge and skills needed to confidently apply their trade.

After all, these trainee teachers can easily demonstrate the functions of their Smartphones, or perform an Internet search on a certain topic, yet unfortunately many of them are unable to apply these skills and this knowledge in a beneficial manner to their professional practice in our knowledge society.

8.3 Existing Research and ICT-literacy Standards

A major feature of this study is the importance of taking the Malaysian Smart School (MSS)
standards into consideration to set the specialised educational technology context of the
research. Therefore the main focus of the research was to uncover which of the ICT-based skills
and knowledge acquisition strategies are considered the utmost important for a trainee teacher
in Malaysia to become ICT literate.

This thesis proposes that a task-based ICT-literacy assessment instrument is vital to evaluate the
level of ICT-literacy for trainee teachers. This type of specialised assessment tool is essential as
the educational sector’s schools in general and more specifically, in Malaysia, are currently
trying to improve their pedagogical techniques to include ICT in their instructional strategies. It
has been identified that Malaysian schools and their teachers need to change their instructional strategies, as the newer generations of students relate more comfortably to ICT tools and digital gadgets than to the more traditional instructional strategies that involve chalk and talk.

The ICT-literacy TBA instrument proposed in this thesis is important as it helps to


prepare trainee teachers with the necessary ICT skills and knowledge prior to
their in-service teaching practice.

Figure 8.1 (below) summarises the conceptual research framework as four parts: Part-1, existing research and standards for ICT-literacy; Part-2, ICT-literacy assessment tools and the Malaysian Smart School standards; Part-3, the task-based assessment (TBA) tool, covering both skills and knowledge; and Part-4, trainee teachers' ICT-literacy assessment (the final instrument testing).
Figure 8.1. Conceptual research framework

The conceptual framework for this thesis was divided into four parts (Figure 8.1). Based on the findings from Parts-1 and 2, the draft TBA instrument was designed and tested in Part-3 (pilot tests) and later validated with real trainee teachers in Part-4 (final instrument testing). Parts-1 and 2 of this conceptual research framework depict the starting point for this study, drawing on suggestions and findings from existing ICT-literacy studies, ICT-literacy standards, existing ICT-literacy assessment tools and the MSS requirements.

The initial starting point for this study is important because there is currently no task-based ICT-literacy assessment instrument available that is specifically designed for teachers or trainee teachers and that affords the trainee/student flexibility in answering the skill-based questions. There are no such skill assessment tools in general, and none that are custom-designed for developing countries in the Asian region.

The concept of the knowledge society and its demand on the digital competency of its society is
timely (Drucker 1999; Leu, Kinzer, Coiro & Cammack 2004). While the focus of the knowledge
society concentrates on the world of economics and business, the awareness of a new type of knowledge acquisition trickles through to schools and the educational sector, as we have come to realise that digital competencies must be developed in people at an earlier age. Many
international organisations have written about such frameworks in position papers that define
and promote such digital competency requirements as the 21st century skills (Department of
Education Science and Training 2000; Lemke 2002; Partnership for the 21st Century Skills
2002; Pearlman 2006).

As such, generalised standards for ICT-literacy have been emerging (McNaught 2006; ANZIIL
2008; ACRL 2009). These standards include principles and practice that can support
information literacy education in all educational sectors. These standards highlight key ICT-
based skills and knowledge development abilities that are essential for a person to function well
in our knowledge-rich society.

To identify levels of ICT-literacy, a number of ICT-literacy assessment tools have been
developed either by researchers (Dawes 2000; Christensen & Knezek 2002; Jamieson-Proctor,
Burnett, Finger & Watson 2006; Markauskaite 2007; Eisenberg, Johnson & Berkowitz 2010); or
by commercial enterprises, such as: the European/International Computer Driving Licence
(ECDL/ICDL); the Prentice Hall Train & Assess Information Technology (TAIT) testing tool;
and iSkillsTM.

Yet these tools were either too expensive to implement, or their questions were too general, lacked flexibility and did not encourage critical and analytical thinking. Most importantly, these often paper-based assessment instruments were not tailored to suit the ICT-literacy needs of trainee teachers in general, and more specifically of the emerging Malaysian smart schools of the 21st century.

Thus this thesis is proposing that a task-based assessment instrument is best for evaluating
trainee teachers’ ICT-literacy levels. Nevertheless before a more appropriate skills assessment
instrument could be developed, the existing ICT-based skills and knowledge development tools
required exploration (for example: existing ICT standards and ICT assessment instruments).
This thesis had initially proposed 24 ICT-literacy indicators that were to be used in the
development of the tasks to be included in the ICT-literacy TBA instrument. These ICT-literacy
indicators were to:
1. understand the main computer applications;
2. have the ability to search, collect and evaluate electronic information;
3. be able to use appropriate aids to produce, present or understand complex information;
4. know how to access and search a website, and use Internet-based services;
5. possess the ability to use ICT tools to support critical thinking, creativity and innovation
in different contexts to the one presented during the initial training session;
6. be information and media literate;
7. have the ability to produce high productivity;
8. be aware of life-long learning;
9. develop life skills;
10. be able to plan/define;
11. know how to access information;
12. integrate gathered information;
13. evaluate problem definitions;
14. manage digital media;
15. create problem-solving strategies;
16. communicate/collaborate in a digital environment;
17. reflect on lessons learned;
18. possess an ability to explain ICT-related hardware;
19. know how to handle ICT hardware;
20. possess an ability to identify ICT hardware/software problems;
21. know how to use software to facilitate instructional strategies;
22. possess the ability to use word processing and presentation software;
23. know how to utilise the Internet for finding information/material; and
24. possess the ability to utilise the Internet for communication.

These 24 ICT-literacy indicators were later revised and re-named to avoid redundant
information. In the end, there were 11 ICT-literacy indicators, described in Chapter-5 section
5.2.2, and repeated below as:
1. navigation and search;
2. production and analysis;
3. plan/define;
4. access;
5. integrate;
6. evaluate;
7. manage;
8. create;
9. communicate/collaborate;
10. reflect; and
11. understanding and handling ICT tools.

Next, this list of 11 ICT-literacy indicators was presented to the PoE members (see Chapter- 5
section 5.2.2). The PoE members examined these indicators for their suitability to be used to
evaluate Malaysian trainee teachers’ level of ICT-literacy.

8.4 Expert Judgements on ICT-literacy Indicators

Seven experts from different ICT/educational-technology-based backgrounds were selected and invited to become PoE members.
needed regarding the ICT-literacy indicators and the suitability of the proposed TBA instrument
from the PoE members (see Chapter-5 section 5.2).

The PoE members were important for this study as they brought the tacit
knowledge and experience of the real world, such that the ICT-literacy TBA
instrument that was developed for this thesis truly represented the Malaysian
trainee teachers’ requirements for acceptable levels of ICT-literacy. The newly
developed instrument was therefore required to be validated by the people who
work in the same environment.

The Delphi technique provided the researcher with an avenue where a group of experts could be
gathered and could anonymously converse (with each other), even with the limitation of time
and space. The physical location of the PoE members was such that they were scattered
throughout Peninsular Malaysia. The Internet was successfully utilised for the Delphi
interactions, as physical face-to-face group meetings with all the experts were next to impossible.

The PoE members had initially agreed upon the 11 ICT-literacy indicators (see Chapter-5 section 5.2.2), rating them between relevant and extremely relevant (mean scores between 2.50 and 3.00). However, the assess indicator was later added, following a suggestion from the PoE members. The resulting 12 ICT-literacy indicators were:

1. understanding and handling ICT tools;


2. plan/define;
3. access;
4. manage;
5. create;
6. communicate/collaborate;
7. production and analysis;
8. navigation and search;
9. assess;
10. integrate;
11. evaluate; and
12. reflect.

The PoE members' evaluation of these ICT-literacy indicators coincided with the views of ICT-literacy held by other researchers (Christensen & Knezek 2002; International ICT Literacy Panel 2002; NAE & NRC 2006; Katz & Macklin 2007; Markauskaite 2007; ETS.org 2008; Eisenberg, Johnson & Berkowitz 2010). One difference, however, was the assess indicator, which was proposed by a PoE member in this study. The reason given was that teachers in Malaysian schools are required to use online-based student assessment systems; the skill and knowledge of using such administrative academic support systems is therefore necessary for trainee teachers. The other PoE members agreed with this suggestion and scored this ICT-literacy indicator as relevant.

To help the researcher properly design this ICT-literacy TBA instrument, a test instrument specification matrix (McKay 2000) was adapted for this study (see sections 4.6.2 and 5.2.3). The original McKay test instrument specification matrix consisted of two instructional objectives: declarative knowledge and procedural knowledge. Yet today's increased cognitive ability needs mean that educational/training environments must include instructional strategies that prepare a society able to think critically and analytically, and that harness the internal mental processes of the mind to promote effective learning (Drucker 2001; Leu, Kinzer, Coiro & Cammack 2004).

Thus the researcher included meta-cognitive ability, as proposed by Anderson et al. (2001) in their revised version of the Bloom taxonomy. As a consequence, the test instrument specification matrix was used as an instructional development guidance tool to ensure not only that all 12 ICT-literacy indicators were included in the ICT-literacy TBA instrument, but also that the ICT-based tasks being developed for the TBA instrument tested all three instructional objectives: declarative knowledge (Gagne 1985), procedural knowledge (Gagne 1985) and meta-cognitive knowledge (Anderson et al. 2001).
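
A hypothetical fragment of such a matrix, expressed as data, is sketched below; the pairing of each test-item with a single knowledge domain is an assumption made for illustration, while the indicator labels follow Table 6.9.

    # Hypothetical fragment of the test instrument specification matrix: each
    # draft test-item is tagged with the ICT-literacy indicator it addresses and
    # the instructional objective (knowledge domain) it is intended to test.
    specification = [
        {"item": "1",    "indicator": "manage",                "domain": "procedural"},
        {"item": "10.1", "indicator": "navigation and search", "domain": "declarative"},
        {"item": "17",   "indicator": "evaluate",              "domain": "meta-cognitive"},
    ]

    # A quick coverage check of which indicators and domains the draft items span.
    print(sorted({row["indicator"] for row in specification}))
    print(sorted({row["domain"] for row in specification}))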

By the end of research Phase-2 (see Figure 8.2), a draft TBA instrument had been developed
and validated by the PoE members. Before this instrument could be accepted and used, the
draft TBA instrument was tested in a series of two pilot studies. The resulting final TBA
instrument was then tested on 148 Malaysian trainee teachers in the final instrument trial.

Figure 8.2. Phases in the research design

8.5 ICT-literacy TBA Instrument Validation and Testing

The draft TBA instrument underwent a continual validation and testing process in a series of
pilot studies before it was tested on a larger group of trainee teachers in the final instrument
testing process. The Rasch IRT model and the Quest interactive test-item analysis system
(Adams & Khoo 1996) were used as the data analysis tools for this performance data
validation and testing.

Each test-item was individually examined for its compatibility with the Rasch IRT model and its
ability to reliably distinguish between participants of high and low ability. This activity
established the rationale for choosing the Rasch IRT model. The Quest item analysis output also
showed each test-item’s average achievement level, and provided a signpost to test-items that
warranted review (for example, where the average achievement level associated with a wrong
answer was not lower than that of the right answer, or where higher performers omitted an
answer that less proficient participants got right; both these performance scenarios require
further investigation (Izard 2005)). Moreover, the Rasch IRT model is free from dependency on
the samples used in research studies: difficulties of test-items can be compared even if the
participants are from different levels of ability (Wright 1999).
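
For reference, a standard statement of the dichotomous Rasch model (the notation follows the
general Rasch literature rather than symbols specific to this thesis) gives the probability that
person n, with ability \theta_n, completes test-item i, of difficulty \delta_i, correctly as:

P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}

Because person abilities and test-item difficulties sit on the same logit scale, the difficulty
estimates do not depend on the particular sample tested, which is the sample-free property
referred to above.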

After the second pilot test, the ICT-literacy TBA instrument was considered well validated and
ready to be used. The instrument therefore needed to be tested on its intended population and
with a larger number of participants (Nunnally & Bernstein 1994). As such, the ICT-literacy
TBA instrument was tested on 148 semester-4 trainee teachers from nine different faculties at
UPSI.

8.6 The ICT-literacy TBA Instrument: Concluding Thoughts

This thesis was initially proposed as a result of the researcher’s own frustration as a teacher
trainer teaching ICT-related courses in a public university in Malaysia. Based on the researcher’s
observations and experience, these trainee teachers are still ‘holding onto the past’: some still use
large sheets of white paper and manila cards as learning aids in mock teaching classes, and even
the content of their learning aids is limited to what is available in the textbooks. Ironically, these
are the same trainee teachers who own the latest smartphones, iPads and computers/tablets, and
who have Twitter and Facebook accounts. Acquiring the latest computer gadgets does not
necessarily translate into better ICT knowledge and skills. When these trainee teachers were
asked to use computer applications to create a multimedia presentation or prepare a spreadsheet,
a number of them were at a loss and did not even know where to begin. Implementing a
measurement instrument that measures trainee teachers’ current ICT-literacy levels, and also
identifies their areas of weakness, would therefore be a great help.

In line with the country’s ‘Vision 2020’ aim of becoming a fully developed nation, the
Malaysian Ministry of Education is currently equipping every school in Malaysia with
appropriate ICT tools, and every school has been targeted for upgrade to ‘smart school’ status
before the year 2020. This advancement fundamentally changes the way teachers work today;
acquiring the necessary ICT knowledge and skills is therefore of crucial importance.

Most of the ICT-literacy assessment instruments currently available are too expensive to
implement, contain questions that are not tailored to the ICT-literacy needs of trainee teachers,
or do not encourage critical and analytical thinking. This thesis therefore suggests an alternative
assessment instrument: a task-based assessment in which the tasks are designed to simulate
normal ICT-based activities in a Malaysian Smart School (MSS).

This ICT-literacy task-based assessment (TBA) instrument allows the trainee teachers to
complete each given task independently. Each test-item in the ICT-literacy TBA instrument was
carefully checked, based on the trainee teachers’ ability or inability to complete it. There are no
‘correct’ or ‘incorrect’ ways of completing each task. With task-based assessment the trainee
teachers are required to ‘show’ what they know, instead of just ‘telling’ what they perceive they
know.

To develop this instrument, a mixed research methodology was chosen whereby a group of
experts validated the identified ICT-literacy indicators, while semester-4 undergraduate trainee
teachers from UPSI were invited as participants for the instrument's final trial. The group of
experts were asked to validate a list of ICT-literacy indicators that were compiled based on:
existing literature on ICT-literacy; ICT standards; existing ICT-literacy assessment instruments;
and the MSS requirements. The draft TBA instrument was designed based on findings from this
group of experts.

Drawing from Gagne’s (1985) learning domains and Anderson et al.’s (2001) revised version of
Bloom’s taxonomy, a test instrument specification matrix (McKay 2000) was adapted to ensure
that the tasks in the ICT-literacy TBA instrument covered all relevant areas of cognitive
knowledge: declarative, procedural and meta-cognitive. This thesis proposes that in order to
become ICT literate and to be able to function in a knowledge society, a person must have both
ICT knowledge and ICT skills. This new type of customised ICT-literacy TBA instrument can
efficiently and effectively evaluate both the ICT knowledge and the ICT skills of trainee teachers.

8.7 The ICT-literacy TBA Instrument: Points to Consider

Both the theoretical model (Chapter-7 Figure 7.2) and the ICT-literacy TBA instrument proposed
by this study are better tools for evaluating the ICT-literacy levels of trainee teachers in Malaysia
than the existing paper-based assessment tools. To utilise the ICT-literacy TBA instrument
successfully, it is recommended that the following issues be considered by universities, as well
as by other institutions that offer teacher training courses:

1. the instrument should be implemented in stages, preferably once during the first year of
teacher training study and again during the final year of study. Any weaknesses in the
trainee teachers’ level of ICT proficiency would then be identified early, and suitable
action could be taken to improve their ICT skills and knowledge;

2. identify the type of computing environment that the trainee teachers are familiar with
(e.g. Windows 7, Microsoft Word 2010, etc.). The use of an unfamiliar computing
environment would affect the outcomes of this assessment; and
3. provide as little guidance and communication as possible during the assessment. This is
to allow the actual ICT skills and knowledge of trainee teachers to be assessed based on
their own understanding and interpretation of the computer-based tasks.

8.8 Limitations of the Study

This study demonstrates that by implementing a task-based assessment, the participants’ actual
ability in using ICT tools can be observed. However, there are some limitations to the current
study that need to be mentioned here, including:

1. the time required to conduct the whole test, which can be challenging. At present, the
whole test requires two hours of the trainee teachers’ time to complete. However,
providing more of the physical ICT tools may reduce the time taken: the researcher
observed that participants had to wait for their turn to use the ICT tools. During the
pilot studies, the researcher only managed to secure one scanner and two digital
cameras. Acquiring more of these tools should decrease the time necessary for
conducting the skills assessment; and
2. the seating arrangements in the computer laboratory used for the final instrument trial
meant that participants were seated quite close to each other. This unfortunately made it
easy for the participants to discuss the test-items amongst themselves. During this study
the researcher played a strict invigilator’s role to ensure that the data were genuine and
representative of each participant’s actual ability. For future implementation, in a real
classroom setting, a more formal exam-based seating arrangement should assist in
producing a more dependable outcome.

8.9 Unexpected Findings

During Phase-2 of this study, the panel of experts (PoE) members suggested to the researcher
that the online discussion forum task should be the first task in the ICT-literacy TBA instrument.
Their argument was that trainee teachers are well versed in using the Internet and would
therefore find it very easy to complete any task that requires them to access the Internet.
However, the researcher’s own observations after the first pilot test showed that it was not as
easy as the PoE members had expected. Many of the trainee teachers had problems registering
on the online discussion forum, while others had problems finding the correct discussion thread.
These minor setbacks negatively affected the morale of the trainee teachers; they simply wanted
to finish all the tasks without giving them much considered thought. Hence the order of the
test-items was changed for the second pilot test.

8.10 Suggestions for Future Research

The ICT-literacy TBA instrument is working as expected. However, its efficiency and
effectiveness will be further enhanced once its shortcomings are addressed. For instance, for the
purpose of this study the raw data for analysis were collected from the working files that the
participants created, as well as by watching a screen recording of each participant, captured
using the Screen2Exe application. As this method requires a lengthy period of time to collect the
raw data for each participant, a new method of raw data collection would further improve the
current TBA instrument.

The ethical, legal and social issues surrounding the creation, collection, and use of information
are another aspect of information literacy that was not considered in this study. Therefore, a
further study which includes these aspects will provide added value to the ICT-literacy TBA
instrument.

Additionally, it would be interesting, and indeed useful, if the ICT-literacy TBA instrument could
be implemented in a study comparing achievement across such elements as gender, course,
participant age and past experience in using ICT. This additional information would further
assist educational stakeholders and policy makers, and thereby help enhance the quality of the
trainee teachers in the Malaysian teacher training institutions.

8.11 Chapter-8 Summary

This chapter concluded the thesis by revisiting the conceptual framework of the research.
The limitations of the study and suggestions for future research were also addressed.

Reference list

Abang Ahmad, R, Hong, KS & Aliza, A 2001, ‘Teacher Educators’ Attitudes Toward
Computers: A Study Among Teacher Educators in Teacher-training Colleges in Johor,
Malaysia’, Jurnal Teknologi, vol. 35(E), pp. 21–32.

ACRL 2009, Association of College & Research Libraries, viewed 7 April 2009,
<http://www.ala.org/ala/mgrps/divs/acrl/about/index.cfm>.

Adams, RJ & Khoo, S-T 1996, Quest Version 2.1: The Interactive Test Analysis System, ACER
Press, Camberwell, Victoria, Australia.

Ainley, J, Banks, D & Fleming, M 2002, ‘The influence of IT: perspectives from five Australian
schools’, Journal of Computer Assisted Learning, vol. 18, no. 4, pp. 395–404.

Albion, P 1996, ‘Student-teachers’ Use of Computers During Teaching Practice in Primary
Classrooms’, Asia-Pacific Journal of Teacher Education, vol. 24, no. 1, pp. 63–73.

---- 2001, ‘Some Factors in the Development of Self-efficacy Beliefs for Computer Use Among
Teacher Education Students’, Journal of Technology and Teacher Education, vol. 9, no.
3, p. 321.

---- 2003a, ‘Graduating teachers’ dispositions for integrating information and communications
technologies into their teaching’, paper presented to Society for Information Technology
& Teacher Education 14th International Conference (SITE 2003), Albuquerque, New
Mexico, USA, <http://eprints.usq.edu.au/979/>.

---- 2003b, ‘Graduating teachers’ reflections about teaching with Information and
Communication Technologies’, paper presented to Society for Information Technology
and Teacher Education International Conference 2003, Albuquerque, New Mexico,
USA.

Anderson, LW, Krathwohl, DR, Airasian, PW, Cruikshank, KA, Mayer, RE, Pintrich, PR,
Raths, J & Wittrock, M 2001, A Taxonomy for Learning, Teaching, and Assessing: A
Revision of Bloom’s Taxonomy of Educational Objectives, Complete edn, Longman,
New York.

Anderson, RE 2008, ‘Implications of the Information and Knowledge Society for Education’, in
J Voogt & G Knezek (eds), International Handbook of Information Technology in
Primary and Secondary Education, Springer US, vol. 20, pp. 5–22.

Andrich, D 1982, ‘An index of person separation in latent trait theory, the traditional KR.20
index, and the Guttman scale response pattern’, Education Research and Perspectives,
vol. 9, no. 1, pp. 95–104.

---- 1999, ‘Rating Scale Analysis’, in GN Masters & JP Keeves (eds), Advances in Measurement
in Educational Research and Assessment, Elsevier Science Ltd, Oxford.

---- 2004, ‘Controversy and the Rasch Model: A Characteristic of Incompatible Paradigms?’,
Medical Care, vol. 42, no. 1, pp. 17–116.

ANZIIL 2008, Australian and New Zealand Information Literacy Framework: Principles,
Standards and Practice, 7 April, <http://www.anziil.org/index.htm>.

Association of College and Research Libraries 2000, Information Literacy Competency
Standards for Higher Education, American Library Association.

Atherton, JS 2005, Learning and Teaching: Bloom’s taxonomy, viewed 7 December 2008,
<http://www.learningandteaching.info/learning/bloomtax.htm>.

Australian Trade Commission 2011, A proven testing-ground for global projects, viewed 5
April 2011, <http://www.austrade.gov.au/Invest/Opportunities-by-
Sector/ICT/default.aspx>.

Bachman, LF 2002, ‘Some reflections on task-based language performance assessment’,
Language Testing, vol. 19, no. 4, pp. 453–76.

Ball, DM & Levy, Y 2008, ‘Emerging educational technology: Assessing the factors that
influence instructors’ acceptance in information systems and other classrooms’, Journal
of Information Systems Education, vol. 19, no. 4, pp. 431–43.

Ballantine, JA, McCourt Larres, P & Oyelere, P 2007, ‘Computer usage and the validity of self-
assessed computer competence among first-year business students’, Computers &
Education, vol. 49, no. 4, pp. 976–90.

Bandura, A 1977, ‘Self-efficacy: Toward a unifying theory of behavioural change’,
Psychological Review, vol. 84, no. 2, pp. 191–215.

---- 1982, ‘Self-efficacy mechanism in human agency’, American Psychologist, vol. 37, no. 2,
pp. 122–47.

---- 1991, ‘Social Cognitive Theory of Self-Regulation’, Organizational Behavior and Human
Decision Processes, vol. 50, no. 2, pp. 248–87.

---- 1994, ‘Self-efficacy’, in VS Ramachandran (ed.), Encyclopedia of human behavior,
Academic Press, New York, vol. 4, pp. 71–81.

---- 1997, Self-efficacy: the exercise of control, WH Freeman and Company, New York.

Bateman, A & Griffin, P 2003, ‘The Appropriateness of Professional Judgement to Determine
Performance Rubrics in a Graded Competency Based Assessment Framework’, paper
presented to International Education Research Conference AARE-NZARE, Auckland,
New Zealand, 30 Nov–3 Dec.

Bechger, TM, Maris, G, Verstralen, HHFM & Béguin, AA 2003, ‘Using Classical Test Theory
in Combination with Item Response Theory’, Applied Psychological Measurement, vol.
27, no. 5, pp. 319–34.

Becker, HJ & Ravitz, JL 2001, ‘Computer Use by Teachers: Are Cuban’s Predictions Correct?’,
paper presented to Annual Meeting of the American Educational Research Association
Seattle, March, <http://www.crito.uci.edu/tlc/findings/conferences-pdf/aera_2001.pdf>.

BECTA June 2004, ‘A review of the research literature on barriers to the uptake of ICT by
teachers’, BECTA ICT Research, viewed 23 April 2009,
<http://partners.becta.org.uk/upload-
dir/downloads/page_documents/research/barriers.pdf>.

Bennett, S, Maton, K & Kervin, L 2008, ‘The “digital natives” debate: a critical review of the
evidence’, British Journal of Educational Technology, vol. 39, no. 5, pp. 775–86.

Betz, NE & Turner, BM 2011, ‘Using Item Response Theory and Adaptive Testing in Online
Career Assessment’, Journal of Career Assessment, vol. 19, no. 3, pp. 274–86.

Bhatnagar, G & Kandan, M 2000, ‘Performance Based Tests: A Case Study’, paper presented to
International Conference on Cognitive Systems, New Delhi, 15–17 December 1999,
<http://www.kandan.org.in/articles/performancebasedtesting.htm>.

Bloom, BS 1956, Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain,
David McKay Co. Inc., New York.

Bond, TG & Fox, CM 2007, Applying the Rasch Model: Fundamental Measurement in the
Human Sciences, 2nd edn, L Erlbaum, Mahwah, NJ; London.

Boud, D & Falchikov, N 1989, ‘Quantitative studies of student self-assessment in higher
education: a critical analysis of findings’, Higher Education, vol. 18, no. 5, pp. 529–49.

Braddlee, D & Matthews-DeNatale, G 2006, Fluency in Information Technology (FIT): Setting
Expectations and Understanding Students’ Learning Needs, viewed 11 May 2011,
<http://www.educause.edu/Resources/FluencyinInformationTechnology/159657>.

Bradley, G 2006, Social and community informatics: humans on the net, Routledge, London.

Bruner, JS 2006, In search of pedagogy, Taylor & Francis Ltd.

Buettner, Y, Duchâteau, C, Fulford, C, Hogenbirk, P, Kendall, M & Morel, R 2000, Information
and communication technology in secondary education: a curriculum for schools,
UNESCO/IFIP Curriculum – Information and Communication Technology in
Secondary Education.

Bunz, U 2004, ‘The Computer-Email-Web (CEW) Fluency Scale – Development and
Validation’, International Journal of Human-Computer Interaction, vol. 17, no. 4, pp.
479–506.

Cadena, S 2010, Dili Village Telco. Rowetel, Australia and Timor Leste, Information Society
Innovation Fund – ISIF Asia, viewed 5 April 2011,
<http://isif.asia/groups/isif/weblog/ba6a2/Dili_Village_Telco._Rowetel__Australia_and
_Timor_Leste.html>.

Callingham, R 2003, ‘Establishing the Validity of a Performance Assessment in Numeracy’,
paper presented to the International Education Research Conference AARE-NZARE,
Auckland, New Zealand, 30 Nov–3 Dec.

Calvani, A, Cartelli, A, Fini, A & Ranieri, M 2008, ‘Models and instruments for accessing
digital competence at school’, Journal of e-Learning and Knowledge Society, vol. 4, no.
3, pp. 183–93.

Caplan, D & Graham, R 2008, The Development of Online Courses, ed. T Anderson, AU Press,
Athabasca University, viewed 15 January 2011,
<http://www.aupress.ca/index.php/books/120146>.

Cartelli, A 2008, ‘Digital competence and web technologies: analysis of a research project and
its instruments’, paper presented to Informing Science & IT Education Conference
(InSITE) 2008, Varna, Bulgaria, 22–25 June 2008.

Chan, FM 2002a, ‘Developing information literacy in the Malaysian Smart Schools: resource-
based learning as a tool to prepare today’s students for tomorrow’s society’,
International Association of School Librarianship. Selected Papers from the ... Annual
Conference, p. 203.

---- 2002b, ‘ICT in Malaysian Schools: policy and strategies’, paper presented to
Seminar/Workshop on the Promotion of ICT Education to Narrow the Digital Divide,
Tokyo, Japan, 15–22 October 2002, <gauge.u-
gakugei.ac.jp/apeid/apeid02/papers/Malaysia.doc>.

Chen, C-M, Lee, H-M & Chen, Y-H 2005, ‘Personalized e-learning system using Item Response
Theory’, Computers & Education, vol. 44, no. 3, pp. 237–55.

Christensen, R 2002, ‘Effects of technology integration education on the attitudes of teachers
and students’, Journal of Research on Technology in Education, vol. 34, no. 4, p. 411.

Christensen, R & Knezek, G 1996, ‘Constructing the Teachers’ Attitudes Toward Computers
(TAC) Questionnaire’.

---- 2002, ‘Instruments for Assessing the Impact of Technology in Education’, Computers in the
Schools, vol. 18, no. 2, pp. 5–25.

Clark, DR 2004, Instructional System Design Concept Map, viewed 7 December 2008,
<http://nwlink.com/~donclark/hrd/ahold/isd.html>.

Clarkson, B & Oliver, R 2002, ‘A Typology for Identifying Teachers’ Progress in ICT Uptake’,
paper presented to ED-MEDIA 2002 World Conference on Educational Multimedia,
Hypermedia & Telecommunications, Denver, Colorado, 24–29 June 2002.

Cohen, L, Manion, L & Morrison, K 2007, Research methods in education, Routledge, London.

Compeau, DR & Higgins, CA 1995, ‘Application of social cognitive theory to training for
computer skills’, Information Systems Research, vol. 6, no. 2, pp. 118–42, viewed 17
September 2008.

Computer Training Centre UCC 2012, ECDL Costs, University College Cork, viewed 30 June
2012, <http://www.ucc.ie/en/tcentre/ecdl/costs/>.

Creswell, J 2003, Research design: qualitative, quantitative, and mixed method approaches, 2nd
edn, Sage Publications, Thousand Oaks.

Creswell, JW & Plano Clark, VL 2007, Designing and conducting mixed methods research,
SAGE Publications, Thousand Oaks, California.

Cuckle, P & Clarke, S 2002, ‘Mentoring student-teachers in schools: views, practices and access
to ICT’, Journal of Computer Assisted Learning, vol. 18, pp. 330–40.

Culp, KM, Hawkins, J & Honey, M 1999, ‘Review Paper on Educational Technology Research
and Development’, viewed 13 August 2008,
<http://cct.edc.org/admin/publications/policybriefs/research_rp99.pdf>.

Cyr, A & Davies, A 2005, ‘Item Response Theory and Latent Variable Modeling for Surveys
with Complex Sampling Design: The Case of the National Longitudinal Survey of
Children and Youth in Canada’, paper presented to 2005 Federal Committee on
Statistical Methodology (FCSM) Research Conference, Arlington, VA, 14–16
November 2005.

Dakich, E 2008, ‘Towards the social practice of digital pedagogies’, in N Yelland, GA Neal & E
Dakich (eds), Rethinking education with ICT: New directions for effective practices,
Sense Publishers, Rotterdam, The Netherlands, pp. 13–30.

Darling-Hammond, L, Rosso, J, Austin, K, Orcutt, S & Martin, D 2001, ‘How People Learn:
Introduction to Learning Theories’, viewed 3 June 2012,
<http://www.stanford.edu/class/ed269/hplintrochapter.pdf>.

Dawes, L 2000, ‘The national grid for learning and the professional development of teachers:
outcomes of an opportunity for dialogue’, PhD thesis, De Montford University.

Department of Education Employment and Workplace Relations 2010, Digital Education
Revolution Projects, Infrastructure and Support, DEEWR, Australian Government.

---- 2011, Experience the Digital Education Revolution, viewed 29 March 2012,
<http://www.deewr.gov.au/Schooling/DigitalEducationRevolution/Pages/default.aspx>.

Department of Education Science and Training 2000, Teachers for the 21st Century: making the
difference, Department of Education, Employment and Workspace Relations, Australia.

Dictionary.com 2009, learning, The American Heritage® Stedman’s Medical Dictionary,
viewed 17 September 2008, <http://dictionary.reference.com/browse/learning>.

Drucker, P 1994, ‘The Age of Social Transformation’, The Atlantic Monthly, vol. 274, no. 5, pp.
53–76.

---- 1999, Management Challenges for the 21st Century, Butterworth-Heinemann, Oxford.

---- 2001, ‘The Next Society’, The Economist, Nov 3rd.

Dundon, T & Ryan, P 2010, ‘Interviewing Reluctant Respondents: Strikes, Henchmen and
Gaelic Games’, Organizational Research Methods, vol. 13, no. 3, pp. 562–81.

Durndell, A & Haag, Z 2002, ‘Computer self-efficacy, computer anxiety, attitudes towards the
Internet and reported experience with the Internet, by gender, in an East European
sample’, Computers in Human Behavior, vol. 18, no. 5, pp. 521–35.

Eastman, C & Marzillier, JS 1984, ‘Theoretical and methodological difficulties in Bandura’s
self-efficacy theory’, Cognitive Therapy and Research, vol. 8, no. 3, pp. 213–29.

ECDL Foundation 2013, ECDL Programmes, viewed 20 February 2013,
<http://www.ecdl.org/programmes/index.jsp>.

ED.gov 2004, Elementary & Secondary Education: Legislation, viewed 29 March 2012,
<http://www2.ed.gov/policy/elsec/leg/esea02/pg34.html>.

Eisenberg, M, Johnson, D & Berkowitz, B 2010, ‘Information, Communications, and
Technology (ICT) Skills Curriculum Based on the Big6 Skills Approach to Information
Problem Solving’, Library Media Connection, vol. 28, no. 6, pp. 24–7.

Embretson, SE & Reise, SP 2000, Item Response Theory for Psychologists, Lawrence Erlbaum
Associates, Mahwah, NJ.

ETS.org 2008, iSkills, viewed 23 April 2009, <http://www.ets.org/iskills>.

European Commission 2007, Key competences for lifelong learning: a European reference
framework, Directorate-General for Education and Culture.

Fan, X 1998, ‘Item Response Theory and Classical Test Theory: An Empirical Comparison of
Their Item/Person Statistics’, Educational and Psychological Measurement, vol. 58, no.
3, pp. 357–81.

Field, J 2006, Lifelong Learning and the New Educational Order, Trentham Books Limited,
Staffordshire.

Fiske, DW 2002, ‘Validity for What?’, in HI Braun, DN Jackson & DE Wiley (eds), The Role of
Constructs in Psychological and Educational Measurement, Lawrence Erlbaum
Associates Inc., New Jersey, pp. 179–90.

Forster, PA, Dawson, VM & Reid, D 2005, ‘Measuring preparedness to teach with ICT’,
Australasian Journal of Educational Technology, vol. 21, no. 1, pp. 1–18.

Gagne, RM 1985, The conditions of learning and theory of instruction, 4th edn, Holt, Rinehart
and Winston Inc., USA.

---- 2000, ‘Domains of Learning’, in R Richey (ed.), The Legacy of Robert M Gagne, Eric
Clearinghouse on Information and Technology, Syracuse, NY, pp. 87–105.

Goldschmidt, PG 1986, ‘Information Synthesis: A Practical Guide’, Health Services Research,
vol. 21, no. 2, pp. 215–37.

Graham, B & Glen, R 1997, ‘Computer experience, school support and computer anxieties’,
Educational Psychology, vol. 17, no. 3, p. 267.

Griffin, PE & Nix, P 1991, Educational Assessment and Reporting: A New Approach, Harcourt
Brace Jovanovich, New South Wales.

Hambleton, RK & Jones, RW 1993, ‘Comparison of Classical Test Theory and Item Response
Theory and Their Applications to Test Development’, ITEMS: Instructional Topics in
Educational Measurement, pp. 38–47, viewed 29 March 2012,
<http://ncme.org/linkservid/66968080-1320-5CAE-
6E4E546A2E4FA9E1/showMeta/0/>.

Hambleton, RK & Murphy, E 1991, A Psychometric Perspective on Authentic Measurement.

Havelka, D 2003, ‘Predicting software self-efficacy among business students: A preliminary
assessment’, Journal of Information Systems Education, vol. 14, no. 2, pp. 145–52.

Hepp, PK, Hinostroza, ES, Laval, EM & Rehbein, LF 2004, Technology in Schools: Education,
ICT and the Knowledge Society, The World Bank.

Hignite, M, Margavio, TM & Margavio, GW 2009, ‘Information literacy assessment: moving
beyond computer literacy’, College Student Journal, vol. 43, no. 3, viewed 20 June
2012, <http://www.freepatentsonline.com/article/College-Student-
Journal/206687075.html>.

Hilberg, JS & Meiselwitz, G 2008, ‘Undergraduate fluency with information and
communication technology: perceptions and reality’, paper presented to The 9th ACM
SIGITE Conference on Information Technology Education, Cincinnati, OH, USA.

Hill, WF 2002, Learning: A survey of psychological interpretation, 7th edn, Allyn and Bacon,
Boston.

Holley, J 2008, ‘Generation Y: understanding the trend and planning for the impact’, paper
presented to 32nd Annual IEEE International Computer Software and Applications
Conference, Turku, Finland, 28 July–1 August 2008,
<http://conferences.computer.org/compsac/2008/pdf/KEY-COMPSAC-jean-holley-
GenYTrends.pdf>.

Hong, KS, Abang Ekhsan, O & Zaimuarifuddin Shukri, N 2005, ‘Computer Self-Efficacy,
Computer Anxiety, and Attitudes Towards the Internet: A Study among Undergraduates
in Unimas’, Educational Technology & Society, vol. 8, no. 4, pp. 205–19.

International ICT Literacy Panel 2002, Digital transformation: a framework for ICT literacy,
Educational Testing Service.

International Society for Technology in Education 2008, The ISTE National Educational
Technology Standards and Performance Indicators for Teachers, viewed 13 August
2008,
<http://www.iste.org/Content/NavigationMenu/NETS/ForTeachers/2008Standards/NET
S_T_Standards_Final.pdf>.

Istance, D & Kools, M 2013, ‘OECD Work on Technology and Education: innovative learning
environments as an integrating framework’, European Journal of Education, vol. 48,
no. 1, pp. 43–57.

ISTE 2008, NETS for teachers 2008, International Society for Technology in Education, viewed
6 August 2009,
<http://www.iste.org/Content/NavigationMenu/NETS/ForTeachers/2008Standards/NET
S_T_Standards_Final.pdf>.

Izard, J 2005, ‘Trial testing and item analysis in test construction’, in KN Ross (ed.),
Quantitative Research Methods in Educational Planning, UNESCO, Paris.

Jamieson-Proctor, RM, Burnett, PC, Finger, G & Watson, G 2006, ‘ICT integration and
teachers’ confidence in using ICT for teaching and learning in Queensland state
schools’, Australasian Journal of Educational Technology, vol. 22, no. 4, pp. 511–30,
viewed 23 April 2009, <http://www.ascilite.org.au/ajet/ajet22/jamieson-proctor.html>.

Jonassen, DH 1991, ‘Evaluating constructivistic learning’, Educ. Technol., vol. XXXI, no. 9, pp.
28–33.

Jones, RW & Hambleton, RK 1992, Recent Advances in Psychometric Methods.

Karsten, R & Schmidt, D 2008, ‘Business Student Computer Self-Efficacy: Ten Years Later’,
Journal of Information Systems Education, vol. 19, no. 4, pp. 445–51.

Katz, IR 2007, ‘Testing Information Literacy in Digital Environments: ETS’s iSkills
Assessment’, Information Technology and Libraries, vol. 26, no. 3, p. 3.

Katz, IR & Macklin, AS 2007, ‘Information and Communication Technology (ICT) literacy:
integration and assessment in higher education’, Journal of Systematics, Cybernetics
and Informatics, vol. 5, no. 4, pp. 50–5.

Kim, J & Lee, W 2013, ‘Meanings of criteria and norms: Analyses and comparisons of ICT
literacy competencies of middle school students’, Computers & Education, vol. 64, pp.
81–94.

Knezek, G & Christensen, R 2002, ‘Impact of New Information Technologies on Teachers and
Students’, Education and Information Technologies, vol. 7, no. 4, pp. 369–76.

Kotlarsky, J & Oshri, I 2005, ‘Social ties, knowledge sharing and successful collaboration in
globally distributed system development projects’, European Journal of Information
Systems, vol. 14, pp. 37–48.

Krathwohl, DR 2002, ‘A Revision of Bloom’s Taxonomy: An Overview’, Theory into Practice,
vol. 41, no. 4, pp. 212–8.

Kurbanoglu, SS, Buket, A & Aysun, U 2006, ‘Developing the information literacy self-efficacy
scale’, Journal of Documentation, vol. 62, no. 6.

Larres, PM, Ballantine, JA & Whittington, M 2003, ‘Evaluating the validity of self-assessment:
measuring computer literacy among entry-level undergraduates within accounting
degree programmes at two UK universities’, Accounting Education: An International
Journal, vol. 12, no. 2, pp. 97–112.

Lemke, C 2002, enGauge 21st Century Skills: Digital Literacies for a Digital Age, North
Central Regional Educational Laboratory,
<http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED463753>.

Leu, DJJ, Kinzer, CK, Coiro, J & Cammack, DW 2004, ‘Toward a Theory of New Literacies
Emerging from the Internet and Other Information and Communication Technologies’,
in RB Ruddell & N Unrau (eds), Theoretical Models and Processes of Reading, 5th edn,
International Reading Association, Newark, pp. 1570–613.

Levin, D & Arafeh, S 2002, The digital disconnect: the widening gap between Internet-savvy
students and their schools, Pew Internet & American Life Project, Washington DC.

Linn, RL, Baker, EL & Dunbar, SB 1991, ‘Complex, Performance-Based Assessment:
Expectations and Validation Criteria’, Educational Researcher, vol. 20, no. 8, pp. 15–21.

Linstone, HA & Turoff, M 2002, The Delphi Method: techniques and applications, eds HA
Linstone & M Turoff, viewed 20 July 2009,
<http://www.is.njit.edu/pubs/delphibook/delphibook.pdf>.

Livingstone, S 2004, ‘Media Literacy and the Challenge of New Information and
Communication Technologies’, The Communication Review, vol. 7, no. 1, pp. 3–14.

Long, MH & Crookes, G 1992, ‘Three Approaches to the Task-based Syllabus Design’, TESOL
Quarterly, vol. 26, no. 1, pp. 27–56.

Luke, A 2001, ‘Introduction to whole-school literacy planning. How to make literacy policy
differentially: Generational change, professionalisation, and literate futures’, paper
presented to ALEA-AATE Conference, Hobart, Tasmania.

Magno, C 2009, ‘Demonstrating the Difference Between Classical Test Theory and Item
Response Theory Using Derived Test Data’, The International Journal of Education
and Psychological Assessment, vol. 1, no. 1, pp. 1–11.

Mahathir, M 1991, The way forward – Vision 2020, viewed 23 April 2009,
<http://www.wawasan2020.com/vision/>.

Markauskaite, L 2005a, ‘From a static to dynamic concept: a model of ICT literacy and an
instrument for self-assessment’, paper presented to Fifth IEEE International Conference
on Advanced Learning Technologies (ICALT’05).

---- 2005b, ‘Notions of ICT literacy in Australian school education’, Informatics in Education,
vol. 4, no. 2, pp. 253–80.

---- 2006, ‘Gender issues in preservice teachers’ training: ICT literacy and online learning’,
Australasian Journal of Educational Technology, vol. 22, no. 1, pp. 1–20, viewed 23
April 2009, <http://www.ascilite.org.au/ajet/ajet22/markauskaite.html>.

---- 2007, ‘Exploring the structure of trainee teachers’ ICT literacy: the main components of,
and relationships between, general cognitive and technical capabilities’, Educational
Technology, Research and Development, vol. 55, no. 6, p. 547.

Mason, D, Moulton, M, Russell, D & Wilmot, D 2009, Three Facets of Formative Assessment:
How to Revolutionize (and actually use) Locally Developed Tests, Santa Clara County
Office of Education.

Masters, GN 1982, ‘A Rasch Model for Partial Credit Scoring’, PSYCHOMETRIKA, vol. 47, no.
2, pp. 149–74.

---- 1999, ‘Partial Credit Model’, in GN Masters & JP Keeves (eds), Advances in Measurement
in Educational Research and Assessment, Elsevier Science Ltd, Oxford, UK.

Masters, GN & Keeves, JP (eds) 1999, Advances in Measurement in Educational Research and
Assessment, Elsevier Science Ltd, Oxford, UK.

Mayer, RE 2009, ‘Constructivism as a theory of learning versus constructivism as a prescription
for instruction’, in S Tobias & TM Duffy (eds), Constructivist instruction: Success or
failure?, Routledge/Taylor & Francis Group, New York, NY, US, pp. 184–200.

McKay, E 2000, ‘Instructional strategies integrating cognitive style construct: a meta-knowledge
processing model (contextual components that facilitate spatial/logical task
performance)’, PhD thesis, Deakin University.

---- 2005, ‘Human-Computer Interaction: Perils of Ubiquitous Information and Communications
Technologies’, paper presented to The 9th World Multi-Conference on Systemics,
Cybernetics and Informatics (WMSCI 2005), Orlando, Florida, 10–13 July 2005.

---- 2008, The Human Dimensions of Human-Computer Interaction: Balancing the HCI
Equation, vol. 3, The Future of Learning, IOS Press, Netherland.

McNaught, C 2006, ‘The synergy between information literacy and eLearning’, in HS Ching,
PWT Poon & C McNaught (eds), eLearning and digital publishing, Springer,
Dordrecht, pp. 29–43.

Megat Aman Zahiri, MZ, Baharuddin, A & Jamalludin, H 2007, ‘Kemahiran ICT di Kalangan
Guru-guru Pelatih UTM: Satu Tinjauan’, paper presented to 1st International Malaysian
Educational Technology Convention, Sofitel Palm Resort, Senai, 2–5 November 2007.

Mehrens, WA 1992, ‘Using Performance Assessment for Accountability Purposes’, Educational
Measurement: Issues and Practice, vol. 11, no. 1, pp. 3–9.
Mergel, B 1998, ‘Instructional Design and Learning Theory’, Occasional Papers in Educational
Technology, viewed 3 June 2012,
<http://www.usask.ca/education/coursework/802papers/mergel/brenda.htm>.

Meriam Library CSU Chico 2010, Evaluating Information – Applying the CRAAP Test, viewed
28 July 2012, <http://www.csuchico.edu/lins/handouts/eval_websites.pdf>.

Merrill, DM, Li, Z & Jones, MK 1990, ‘Second generation instructional design (ID2)’, Educ.
Technol., vol. 30, no. 2, pp. 7–14.

Merrill, DM, Tennyson, RD & Posey, LO 1992, Teaching Concepts: An Instructional Design
Guide, 2nd edn, Educational Technology Publications, Englewood Cliffs, NJ.

Messick, S 1988, ‘The Once and Future Issues of Validity: Assessing the Meaning and
Consequences of Measurement’, in H Wainer & HI Braun (eds), Test Validity,
Lawrence Erlbaum Associates Inc., New Jersey.

---- 1996, ‘Validity of Performance Assessments’, in GW Phillips (ed.), Technical Issues in
Large-Scale Performance Assessment, US Government Printing Office, Washington,
pp. 1–18.

Mullen, PM 2003, ‘Delphi: myths and reality’, Journal of Health Organisation and
Management, vol. 17, no. 1, pp. 37–52.

Multimedia Development Corporation 2005, Malaysian Smart School Roadmap 2005–2020: An
Educational Odyssey.

---- 2007a, The Malaysian Smart School, viewed 3 March 2010,
<http://web3.mscmalaysia.my/smartschool/overview/index.asp>.

---- 2007b, The Malaysian Smart School, viewed 12 January 2010,
<http://www.msc.com.my/smartschool/whatis/implementation.asp>.

Murphy, CA, Coover, D & Owen, SV 1989, ‘Development and Validation of the Computer
Self-Efficacy Scale’, Educational and Psychological Measurement, vol. 49, no. 4, pp.
893–9.

NAE & NRC 2002, Technically Speaking: Why All Americans Need to Know More About
Technology, National Academy Press, Washington, DC.

---- 2006, Tech tally: approaches to assessing technological literacy, National Academies Press,
Washington, DC.

‘No Child Left Behind’, 2004, Education Week, viewed 29 March 2012,
<http://www.edweek.org/ew/issues/no-child-left-behind/>.

Noor Azizi, I & Basariah, S 2005, ‘Perceptions of Accounting Academicians Towards the Issue
of Information Technology Integration into Accounting Curriculum’, Jurnal
Penyelidikan Pendidikan, vol. 7, pp. 96–112.

Northern Territory Government 2009, Learning Technology, Northern Territory Government.

Nunnally, JC & Bernstein, IH 1994, Psychometric Theory, 3rd edn, McGraw-Hill Inc., USA.

Obinne, ADE 2011, ‘A Psychometric Analysis of Two Major Examinations in Nigeria: Standard
Error of Measurement’, International Journal of Educational Science, vol. 3, no. 2, pp.
137–44.

Pajares, F 2002, ‘Overview of Social Cognitive Theory and of Self-Efficacy’, viewed 16
February 2009, <http://www.des.emory.edu/mfp/eff.html>.

Partnership for the 21st Century Skills 2002, Learning for the 21st Century, Washington DC.

Pearlman, B 2006, ‘Students Thrive on Cooperation and Problem Solving’, edutopia,
<http://www.edutopia.org/new-skills-new-century>.

Pearson 2007, Our Business and Society 2006, viewed 21 December 2008,
<http://www.pearson.com/community/csr_report2006/businesses.html>.

Pearson Education 2011, SchoolNet, viewed 30 May 2012, <http://www.schoolnet.com/>.

Pernia, EE 2008, Strategy framework for promoting ICT literacy in the Asia-Pacific region,
UNESCO Bangkok, Bangkok.

Prabhu, NS 1987, Second Language Pedagogy, Oxford University Press, Oxford.

Prensky, M 2001a, ‘Digital natives, digital immigrants’, On the Horizon, vol. 9, no. 5, pp. 1–6.

---- 2001b, ‘Digital natives, digital immigrants, part 2: Do they really think differently?’, On the
Horizon, vol. 9, no. 6 pp. 1–6.

---- 2005, ‘“Engage me or enrage me”: what today’s learners demand’, EDUCAUSE review, vol.
40, no. 5, pp. 60–4.

Prentice Hall 2008, Train & Assess IT Generation, viewed 21 December 2008,
<http://www2.phgenit.com/support/support/HomeContent.asp>.

Punie, Y 2007, ‘Learning spaces: an ICT-enabled model of future learning in the knowledge-
based society’, European Journal of Education, vol. 42, no. 2, pp. 185–99.

Punie, Y & Cabrera, M 2005, The Future of ICT and Learning in the Knowledge Society,
Institute for Prospective Technological Studies, Spain.

Rajendran, N 2001, ‘The Teaching of Higher-order Thinking Skills in Malaysia’, Journal of
Southeast Asia Education, vol. 2, no. 1, pp. 42–65.

Razali, S 1999, ‘Kajian penggunaan KBKK dalam mata pelajaran matematik KBSM tingkatan 4
Sekolah Menengah Daerah Tumpat, Kelantan [A study on the use of critical and
creative thinking skills in form four KBSM mathematics subjects in Tumpat District
Secondary School in Kelantan]’, BEd thesis, Universiti Teknologi Malaysia.

Reigeluth, CM (ed.) 1983, Instructional Design Theories and Models: An Overview of Their
Current Status, Lawrence Erlbaum Associates Hillsdale, NJ.

Reigeluth, CM & Keller, JB 2009, ‘Understanding instruction’, in CM Reigeluth & AA Carr-
Chellman (eds), Instructional design theories and models: Building a common
knowledge base, Taylor & Francis, New York, vol. 3, pp. 27–39.

Revelle, W 2011, ‘The “New Psychometrics” – Item Response Theory’, 8,
<http://personality-project.org/r/book/Chapter8.pdf>.

Robbins, R & Zhou, Z 2007, ‘A Comparison of Two Computer Literacy Testing Approaches’,
Issues in Information Systems, vol. VIII, no. 1, pp. 185–91, viewed 19 November 2008,
<http://www.iacis.org/iis/2007_iis/PDFs/Robbins_Zhou.pdf>.

Robinson, P & Ross, S 1996, ‘The Development of Task-based Assessment in English for
Academic Purposes Programs’, Applied Linguistics, vol. 17, no. 4, pp. 455–76.

Rosnani, H 2002, ‘Investigation on the Teaching of Critical and Creative Thinking in Malaysia’,
Jurnal Pendidikan Islam, vol. 10, no. 1, pp. 39–56.

Russell, G & Finger, G 2007, ‘ICTs and Tomorrow’s Teachers: Informing and Improving the
ICT Undergraduate Experience’, in Handbook of Teacher Education, pp. 625–40.

Ryan, J & Williams, J 2007, ‘MATHSMAPS For Diagnostic Assessment with Pre-Service
Teachers: Stories of Mathematical Knowledge’, Research in Mathematics Education,
vol. 9, no. 1, pp. 95–109.

SDSU Library & Information Access 2011, Evaluating Information, viewed 28 July 2012,
<http://library.sdsu.edu/reference/research/evaluating-information>.

Sekaran, U 2002, Research methods for business: A skill-building approach, Wiley Publishing.

Shahadat, HKM, Hasan, M & Clement, CK 2012, ‘Barriers to the Introduction of ICT into
Education in Developing Countries: The Example of Bangladesh’, International
Journal of Instruction, vol. 5, no. 2, pp. 61–80.

Shattuck, D, Corbell, KA, Osbourne, JW, Knezek, G, Christensen, R & Grable, LL 2011,
‘Measuring Teacher Attitudes Toward Instructional Technology: A Confirmatory Factor
Analysis of the TAC and TAT’, Computers in the Schools, vol. 28, no. 4, pp. 291–315.

Sick, J 2010, ‘Assumptions and requirements of Rasch measurement’, SHIKEN: JALT Testing
& Evaluation SIG Newsletter, vol. 14, no. 2, pp. 23–9, viewed 21 June 2011.

Skehan, P 1996, ‘A Framework for the Implementation of Task-based Instruction’, Applied
Linguistics, vol. 17, no. 1, pp. 38–62.

Smart School Project Team 1997, The Malaysian Smart School: A Conceptual Blueprint,
Ministry Of Education, Malaysia.

Smee, S 2003, ‘Skill-based assessment’, BMJ, vol. 326, no. 7391, pp. 703–6.

Speckler, MD 2006, Raising the Bar: A Report on the Success of Train & Assess IT in Higher
Education Microsoft Office Instruction, Pearson Education.

Stocking, ML 1999, ‘Item Response Theory’, in GN Masters & JP Keeves (eds), Advances in
Measurement in Educational Research and Assessment, Elsevier Science Ltd, Oxford,
UK, pp. 55–63.

Swaminathan, H 1999, ‘Latent Trait Measurement Models’, in JP Keeves & GN Masters (eds),
Advances in Measurement in Educational Research and Assessment, Elsevier Science
Ltd., Oxford, UK, pp. 43–54.

Swanson, DB, Norman, GR & Linn, RL 1995, ‘Performance-based Assessment: Lessons from
the Health Professions’, Educational Researcher, vol. 24, no. 5, pp. 5–11.

Tapscott, D 1998, Growing up digital: the rise of the net generation, McGraw-Hills, New York.

---- 2009, Grown up digital: how the net generation is changing your world, McGraw-Hills,
New York.

Teachnology Inc 2011, Performance-based Assessment, viewed 6 May 2011,
<http://www.teach-nology.com/>.

Tennyson, RD 2012, ‘Historical Reflection on Learning Theories and Instructional Design’,
Contemporary Educational Technology, vol. 1, no. 1, pp. 1–16.

The Association of College and Research Libraries 2000, Information Literacy Competency
Standards for Higher Education, Chicago, Illinois.

The World Bank 2003, Lifelong Learning in the Global Knowledge Economy: Challenges for
Developing Countries, The World Bank, Washington.

Thompson, SV 1990, ‘Visual Imagery: a discussion’, Educational Psychology: An International
Journal of Experimental Educational Psychology, vol. 10, no. 2, pp. 141–67.

Torkzadeh, G & van Dyke, TP 2001, ‘Development and validation of an Internet self-efficacy
scale’, Behaviour & Information Technology, vol. 20, no. 4, pp. 275–80.

Tsai, C-C & Chai, CS 2012, ‘The “third”-order barrier for technology-integration instruction:
Implications for teacher education’, Australasian Journal of Educational Technology,
vol. 28, no. 6, pp. 1057–60.

UPSI 2010, The History: Distinctively Swathed in the Legacy of Three Generations, viewed 2
December 2010, <http://www.upsis.edu.my/index.php/en/main-page/upsi-overview/the-
history.html>.

Vaglio-Laurin, MW 2006, ‘Don’t Just Tell Us – Show Us! Performance-Based Testing and the
SAS® Certified Professional Program’, paper presented to The Thirty-first Annual
SAS® Users Group International Conference, Cary, NC, 26–29 March 2006.

Van der Linden, WJ & Hambleton, RK 1997, Handbook of Modern Item Response Theory,
Springer.

van Teijlingen, E & Hundley, V 2001, ‘The importance of pilot studies’, Social Research
Update, no. 35.

WebAttack Inc. 2010, Screen2exe, viewed 2 December 2010,
<http://www.snapfiles.com/get/screen2exe.html>.

Weiser, M 1999, ‘The computer for the 21st century’, SIGMOBILE Mob. Comput. Commun.
Rev., vol. 3, no. 3, pp. 3–11, viewed 23 April 2009, DOI
http://doi.acm.org/10.1145/329124.329126.

Williamson, DM, Katz, IR & Kirsch, I 2005, ‘An overview of the higher education ICT literacy
assessment’, Assessing Higher Ed ICT, viewed 23 April 2009,
<http://www7.nationalacademies.org/bose/ICT%20Fluency_Assessment_Overview_Art
icle.pdf>.

Wilson, B 1990, ‘The Preparedness of Teacher Trainees for Computer Utilisation: the
Australian and British Experiences’, Journal of Education for Teaching: International
research and pedagogy, vol. 16, no. 2, pp. 161–71.

Wilson, M 1992, ‘Measuring Levels of Mathematical Understanding’, in TA Romberg (ed.),
Mathematics Assessment and Evaluation: Imperatives for Mathematics Educators, State
University of New York Press, New York, pp. 213–41.

Wong, SL 2002, ‘Development and Validation of an Information Technology (IT) based
Instrument to Measure Teachers’ IT Preparedness’, PhD thesis, Universiti Putra
Malaysia.

Wong, SL, Sidek, AA, Aida Suraya, MY, Zakaria, S, Kamariah, AB, Hamidah, M & Hanafi, A
2005, ‘Gender Differences in ICT Competencies among Academicians at Universiti
Putra Malaysia’, Malaysian Online Journal of Instructional Technology, vol. 2, no. 3,
pp. 62–9.

Wood, R & Bandura, A 1989, ‘Social Cognitive Theory of Organizational Management’, The
Academy of Management Review, vol. 14, no. 3, pp. 361–84.

World Links 2010, World Links, viewed 30 May 2012, <http://www.world-links.org/>.

Wright, BD 1999, ‘Rasch Measurement Models’, in GN Masters & JP Keeves (eds), Advances
in Measurement in Educational Research and Assessment, Elsevier Science Ltd,
Oxford.

Wright, BD & Masters, GN 1982, Rating Scale Analysis, MESA Press, Chicago, Illinois.

Wu, M & Adams, R 2007, Applying the Rasch Model to Psycho-social Measurement: A
Practical Approach, Educational Measurement Solutions,
<http://www.edmeasurement.com.au/Learning.html>.

Young, MJ 2003, ‘Human performance model validation: One size does not fit all’, paper
presented to Summer Computer Simulation Conference 2003, Montreal, Canada, 20–24
July 2003.

Yuan, R 2005, ‘Chinese Language Learning and the Rasch Model: Measurement of Students’
Achievement in Learning Chinese’, in S Alagumalai, DD Curtis & N Hungi (eds),
Applied Rasch Measurement: A Book of Exemplars, Springer, the Netherlands, pp. 115–
37.

Zaharah, H 1995, ‘Analisis kandungan kemahiran berfikir kritis dalam buku teks pendidikan
Islam KBSM [Analysis of the critical thinking skills content in the KBSM Islamic
Studies text books]’, MEd thesis, Universiti Malaya.

Zain, MZM, Atan, H & Idrus, RM 2004, ‘The impact of information and communication
technology (ICT) on the management practices of Malaysian Smart Schools’,
International Journal of Educational Development, vol. 24, no. 2, pp. 201–11.

Zainudin, AB 2008, Kemahiran ICT di kalangan guru pelatih IPTA Malaysia [ICT skills among
trainee teachers in Malaysian public institutes of higher learning], Faculty of
Education, University Technology of Malaysia.

Zhang, Z & Martinovic, D 2008, ‘Teacher Candidates’ Needs, Expectations of, and Attitudes
toward Information and Communication Technologies (ICT) Learning and Integration’,
paper presented to Society for Information Technology and Teacher Education
International Conference 2008, Las Vegas, Nevada, USA.

Zikmund, WG, Babin, BJ, Carr, JC & Griffin, M 2010, Business research methods, 8th edn,
South-Western Cengage Learning, USA.
Glossary Of Terms

1-9
1 Parameter Logistic The simplest IRT model, often called the Rasch model. An
individual’s response to a binary item is determined by the individual’s trait level and the
difficulty of the item.
A
Affective domain Refers to attitude structures of Bloom’s taxonomy.

B
Bookmark A feature in your browser that lets you save shortcuts to your favourite
webpages.
C
Cognitive abilities Abilities that influence the acquisition and application of knowledge
in problem solving.
Cognitive domain Refers to knowledge structures of Bloom’s taxonomy.

D
Digital natives People born during or after the general introduction of digital
technologies, who have interacted with digital technology from an early age.
E
External learning Conditions relating to the stimuli that are presented externally to the
learner.
G
Gagne’s conditions of learning Aside from 'external learning', 'internal learning'
conditions must be met for the acquisition of each learned capability.
Goodness-of-fit Describes how well a statistical model fits a set of
observations.
I
ICT-literacy Not limited to technical ability in using a computer; it includes other
intellectual competencies including solving problems and being critical, which a person
must possess in order to live comfortably in a knowledge-based society.
Infit mean square The consistency of fit of the participants and task-items. The
acceptable range of the mean square statistic for each item in this study was taken to be
from 0.77 to 1.30.
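
In symbols, following the standard information-weighted fit statistic in the Rasch
literature (e.g. Wright & Masters 1982) rather than notation specific to this thesis, the
infit mean square for item i can be written as:

\text{Infit MS}_i = \frac{\sum_n (x_{ni} - E_{ni})^2}{\sum_n W_{ni}}

where x_{ni} is person n's observed score on item i, E_{ni} is its expected value under
the model, and W_{ni} is its model variance.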

Internal learning Associated with previously learned capabilities of the learner.


Item response theory Sometimes known as Latent Trait Theory. Based upon the
individual items of a test, not the accumulated score.

L
Life-long learning The on-going, voluntary, and self-motivated pursuit of knowledge
for either personal or professional reasons; it is not confined to childhood or the
classroom but takes place throughout life and in a range of situations.
Logit The inverse of the sigmoidal 'logistic' function used in mathematics, especially
in statistics.
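In symbols, assuming the standard definition, for a probability p:

\text{logit}(p) = \ln\!\left(\frac{p}{1 - p}\right)

Rasch person ability and test-item difficulty estimates are expressed on this log-odds
(logit) scale.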
P
Partial credit model The partial credit model was devised for multiple-choice questions
in which credit is given for almost-correct distractors. Each test-item is modelled to have
its own response structure.
Population-based study A study that involved a defined general population.
Probability principle A person’s response to a particular task-item is never certain; it is
always influenced by human error. Probabilities are introduced through consideration of
the odds that a person would give a correct response to a task-item.
Psychomotor domain Refers to skills structures of Bloom’s taxonomy.

Q
Quest interactive test analyses system A data analysis software application, which
offers a comprehensive test and questionnaire analysis environment, by providing a data
analyst with access to the most recent developments in Rasch measurement theory, as
well as a range of traditional analysis procedures.
R
Rasch model Based on the probability principle. A person’s response to a particular test-
item is never certain; it is always influenced by human error. Probabilities are introduced
through consideration of the odds that a person would give a correct response to a test-
item.
Rating scale model A rating scale model is one in which all items share the same
rating scale structure.
S
Self-efficacy The measure of one's own ability to complete tasks and reach goals.
Smart School A learning institution that has been systemically reinvented in terms of
teaching-learning practices and school management in order to prepare children for the
Information Age.
Standard error of measurement Estimates how repeated measures of a person on
the same instrument tend to be distributed around his or her 'true' score. The true score
is always an unknown because no measure can be constructed that provides a perfect
reflection of the true score.
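As an illustrative classical test theory relation, and not a formula taken from this study, the standard error of measurement can be estimated from the spread of observed scores and the reliability of the instrument:

\[ \mathrm{SEM} \;=\; \sigma_{X}\sqrt{1-r_{XX'}} \]

where σ_X is the standard deviation of the observed scores and r_XX' is the reliability coefficient.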
T
Test instrument specification matrix A tool used to ensure that test-items were
organised in a continuum from the lowest ability to the most advanced, based on Gagne’s
learned capabilities.
Trans-border data transfer The transfer of data containing personal or sensitive
information from an entity in one country to an entity in another country.

Appendix A

Standards, Performance Indicators, and Outcomes (ACRL)

Standard One

The information literate student determines the nature and extent of the information needed.

Performance Indicators:

1. The information literate student defines and articulates the need for information.

Outcomes Include:

• Confers with instructors and participates in class discussions, peer workgroups, and electronic
discussions to identify a research topic, or other information need
• Develops a thesis statement and formulates questions based on the information need
• Explores general information sources to increase familiarity with the topic
• Defines or modifies the information need to achieve a manageable focus
• Identifies key concepts and terms that describe the information need
• Recognizes that existing information can be combined with original thought, experimentation,
and/or analysis to produce new information

2. The information literate student identifies a variety of types and formats of potential sources for
information.

Outcomes Include:

• Knows how information is formally and informally produced, organized, and disseminated
• Recognizes that knowledge can be organized into disciplines that influence the way information
is accessed
• Identifies the value and differences of potential resources in a variety of formats (e.g.,
multimedia, database, website, data set, audio/visual, book)
• Identifies the purpose and audience of potential resources (e.g., popular vs. scholarly, current vs.
historical)
• Differentiates between primary and secondary sources, recognizing how their use and
importance vary with each discipline
• Realizes that information may need to be constructed with raw data from primary sources

3. The information literate student considers the costs and benefits of acquiring the needed information.

Outcomes Include:

• Determines the availability of needed information and makes decisions on broadening the
information seeking process beyond local resources (e.g., interlibrary loan; using resources at
other locations; obtaining images, videos, text, or sound)
• Considers the feasibility of acquiring a new language or skill (e.g., foreign or discipline-based) in
order to gather needed information and to understand its context
• Defines a realistic overall plan and timeline to acquire the needed information

4. The information literate student re-evaluates the nature and extent of the information need.

Outcomes Include:

• Reviews the initial information need to clarify, revise, or refine the question
• Describes criteria used to make information decisions and choices

Standard Two

The information literate student accesses needed information effectively and efficiently.

Performance Indicators:

1. The information literate student selects the most appropriate investigative methods or information
retrieval systems for accessing the needed information.

Outcomes Include:

• Identifies appropriate investigative methods (e.g., laboratory experiment, simulation, fieldwork)


• Investigates benefits and applicability of various investigative methods
• Investigates the scope, content, and organization of information retrieval systems
• Selects efficient and effective approaches for accessing the information needed from the
investigative method or information retrieval system

2. The information literate student constructs and implements effectively-designed search strategies.

Outcomes Include:

• Develops a research plan appropriate to the investigative method


• Identifies keywords, synonyms and related terms for the information needed
• Selects controlled vocabulary specific to the discipline or information retrieval source
• Constructs a search strategy using appropriate commands for the information retrieval system
selected (e.g., Boolean operators, truncation, and proximity for search engines; internal
organizers such as indexes for books)
• Implements the search strategy in various information retrieval systems using different user
interfaces and search engines, with different command languages, protocols, and search
parameters
• Implements the search using investigative protocols appropriate to the discipline

3. The information literate student retrieves information online or in person using a variety of methods.

Outcomes Include:

• Uses various search systems to retrieve information in a variety of formats


• Uses various classification schemes and other systems (e.g., call number systems or indexes) to
locate information resources within the library or to identify specific sites for physical
exploration
• Uses specialized online or in person services available at the institution to retrieve information
needed (e.g., interlibrary loan/document delivery, professional associations, institutional research
offices, community resources, experts and practitioners)
• Uses surveys, letters, interviews, and other forms of inquiry to retrieve primary information

4. The information literate student refines the search strategy if necessary.

Outcomes Include:

• Assesses the quantity, quality, and relevance of the search results to determine whether
alternative information retrieval systems or investigative methods should be utilized
• Identifies gaps in the information retrieved and determines if the search strategy should be
revised
• Repeats the search using the revised strategy as necessary

5. The information literate student extracts, records, and manages the information and its sources.

Outcomes Include:

• Selects among various technologies the most appropriate one for the task of extracting the
needed information (e.g., copy/paste software functions, photocopier, scanner, audio/visual
equipment, or exploratory instruments)
• Creates a system for organizing the information
• Differentiates between the types of sources cited and understands the elements and correct
syntax of a citation for a wide range of resources
• Records all pertinent citation information for future reference
• Uses various technologies to manage the information selected and organized

Standard Three

The information literate student evaluates information and its sources critically and incorporates selected
information into his or her knowledge base and value system.

Performance Indicators:

1. The information literate student summarizes the main ideas to be extracted from the information
gathered.

Outcomes Include:

• Reads the text and selects main ideas


• Restates textual concepts in his/her own words and selects data accurately
• Identifies verbatim material that can be then appropriately quoted

2. The information literate student articulates and applies initial criteria for evaluating both the
information and its sources.

Outcomes Include:

• Examines and compares information from various sources in order to evaluate reliability,
validity, accuracy, authority, timeliness, and point of view or bias
• Analyzes the structure and logic of supporting arguments or methods
• Recognizes prejudice, deception, or manipulation
• Recognizes the cultural, physical, or other context within which the information was created and
understands the impact of context on interpreting the information

3. The information literate student synthesizes main ideas to construct new concepts.

Outcomes Include:

• Recognizes interrelationships among concepts and combines them into potentially useful primary
statements with supporting evidence
• Extends initial synthesis, when possible, at a higher level of abstraction to construct new
hypotheses that may require additional information
• Utilizes computer and other technologies (e.g. spreadsheets, databases, multimedia, and audio or
visual equipment) for studying the interaction of ideas and other phenomena

4. The information literate student compares new knowledge with prior knowledge to determine the
value added, contradictions, or other unique characteristics of the information.

Outcomes Include:

• Determines whether information satisfies the research or other information need

• Uses consciously selected criteria to determine whether the information contradicts or verifies
information used from other sources
• Draws conclusions based upon information gathered
• Tests theories with discipline-appropriate techniques (e.g., simulators, experiments)
• Determines probable accuracy by questioning the source of the data, the limitations of the
information gathering tools or strategies, and the reasonableness of the conclusions
• Integrates new information with previous information or knowledge
• Selects information that provides evidence for the topic

5. The information literate student determines whether the new knowledge has an impact on the
individual’s value system and takes steps to reconcile differences.

Outcomes Include:

• Investigates differing viewpoints encountered in the literature


• Determines whether to incorporate or reject viewpoints encountered

6. The information literate student validates understanding and interpretation of the information through
discourse with other individuals, subject-area experts, and/or practitioners.

Outcomes Include:

• Participates in classroom and other discussions


• Participates in class-sponsored electronic communication forums designed to encourage
discourse on the topic (e.g., email, bulletin boards, chat rooms)
• Seeks expert opinion through a variety of mechanisms (e.g., interviews, email, listservs)

7. The information literate student determines whether the initial query should be revised.

Outcomes Include:

• Determines if original information need has been satisfied or if additional information is needed
• Reviews search strategy and incorporates additional concepts as necessary
• Reviews information retrieval sources used and expands to include others as needed

Standard Four

The information literate student, individually or as a member of a group, uses information effectively to
accomplish a specific purpose.

Performance Indicators:

1. The information literate student applies new and prior information to the planning and creation of a
particular product or performance.

Outcomes Include:

• Organizes the content in a manner that supports the purposes and format of the product or
performance (e.g. outlines, drafts, storyboards)
• Articulates knowledge and skills transferred from prior experiences to planning and creating the
product or performance
• Integrates the new and prior information, including quotations and paraphrasings, in a manner
that supports the purposes of the product or performance
• Manipulates digital text, images, and data, as needed, transferring them from their original
locations and formats to a new context

2. The information literate student revises the development process for the product or performance.

Outcomes Include:

• Maintains a journal or log of activities related to the information seeking, evaluating, and
communicating process
• Reflects on past successes, failures, and alternative strategies

3. The information literate student communicates the product or performance effectively to others.

Outcomes Include:

• Chooses a communication medium and format that best supports the purposes of the product or
performance and the intended audience
• Uses a range of information technology applications in creating the product or performance
• Incorporates principles of design and communication
• Communicates clearly and with a style that supports the purposes of the intended audience

Standard Five

The information literate student understands many of the economic, legal, and social issues surrounding
the use of information and accesses and uses information ethically and legally.

Performance Indicators:

1. The information literate student understands many of the ethical, legal and socio-economic issues
surrounding information and information technology.

Outcomes Include:

• Identifies and discusses issues related to privacy and security in both the print and electronic
environments
• Identifies and discusses issues related to free vs. fee-based access to information
• Identifies and discusses issues related to censorship and freedom of speech
• Demonstrates an understanding of intellectual property, copyright, and fair use of copyrighted
material

2. The information literate student follows laws, regulations, institutional policies, and etiquette related
to the access and use of information resources.

Outcomes Include:

• Participates in electronic discussions following accepted practices (e.g. "Netiquette")


• Uses approved passwords and other forms of ID for access to information resources
• Complies with institutional policies on access to information resources
• Preserves the integrity of information resources, equipment, systems and facilities
• Legally obtains, stores, and disseminates text, data, images, or sounds
• Demonstrates an understanding of what constitutes plagiarism and does not represent work
attributable to others as his/her own
• Demonstrates an understanding of institutional policies related to human subjects research

3. The information literate student acknowledges the use of information sources in communicating the
product or performance.

Outcomes Include:

• Selects an appropriate documentation style and uses it consistently to cite sources


• Posts permission granted notices, as needed, for copyrighted material

Appendix B

Example of a plain language statement for this study, given to the PoE members.

Appendix C
Preliminary tasks for the TBA tool, given to the PoE members for validation.

-Task 1-

Background:
You are a science teacher at one of the secondary schools in Malaysia. As a teacher who teaches at a Smart
School, you would like to incorporate the use of ICT tools in your teaching and learning resources. You
have a special interest in ICT and you always try to find new discussion forum sites that discuss
everything about ICT. Recently, you found out about a new site that specialises in discussions about ICT-
literacy.

Props:
- computer

To do:
1. Go to the forum site at http://ictliteracy.forumotion.com/
2. Register yourself as a new member of the forum. Use the research ID given to you as your
Username, and use your university email address to register.
3. One of the threads in the discussion forum discusses the benefits of online discussion
forums for teachers. Give your thoughts on the topic.

-Task 2-

Background:
You are helping a friend who has asked for your help in editing his document. Use the documentEdit file
given to you to carry out this task.

Props:
- computer
- USB drive with documentEdit file

To do:
1. Using the documentEdit file:
i. set the margin of the document to:
• Top : 3 cm
• Bottom : 3 cm
• Left : 2.5 cm
• Right : 2.5 cm
ii. insert the page number at the bottom, center of the document.
2. Put the header and footer of the document as:
• Header : Panduan Latihan Industri
• Footer : © 2010 Hakmilik terpelihara

3. Create a table of content (TOC) for the document on the first page (initial structure of the TOC
has been created for you).
4. Save your work in the USB drive.

-Task 3-

Background:
It is the end of the term and, as a class teacher, you are expected to prepare a report on your students’
grades. Use the studentGrade file given to you to carry out this task.

Props:
- computer
- USB drive with studentGrade file

To do:
1. Using the studentGrade file, calculate the total marks and percentage attained by each student (note:
if you are unable to use MS Excel formulas for your calculations, you are allowed to use whatever
methods/tools you are comfortable with).
2. Rank the students based on their percentages.
3. Prepare a graph that shows the total number of students achieving Poor, Below average, Average,
Above average or Excellent results.
4. Save your work in the USB drive.

-Task 4(a)-

Background:
For the next science class, you are going to start with a new topic, which is “photosynthesis”. You are
going to prepare the resources for this topic, to be used as teaching and learning tools.

Props:
- small plant
- a small cup of water
- digital camera
- USB cable
- scanner
- computer
- photosynthesis diagram

To do:

1. You will need to take a picture of the plant.


2. Shoot a short video of you watering the plant with the given water.
3. You will also need to scan the photosynthesis diagram given.
4. Create a new folder named “photosynthesis” in the USB drive, and put the picture, video and
diagram file inside it.

-Task 4(b)-

Background:
Based on the resources from task 4(a), you are going to resize the picture that you have taken. If you did
not manage to take the picture before, use the picture provided in your USB drive (see folder resources).

Props:
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:

1. Name the computer application that you are going to use to resize the picture (type-in your
answer in form-A file).
2. Resize the picture to size 400x300 pixels.
3. Save the newly resized picture in the photosynthesis folder.

-Task 4(c)-

Background:
To prepare the teaching and learning tools, you need to find credible information about photosynthesis
from the Internet.

Props:
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:

1. Do you know how to evaluate credible information from the Internet?
2. Give your opinion about the criteria that you should look for to ensure the credibility of
information that you obtain from the Internet (type-in your answer in form-A file).
3. Use the Internet to find suitable information on:
o Products of photosynthesis
o Role of photosynthesis in maintaining a balanced ecosystem
4. Copy and paste the URL addresses of the webpages that you are going to use into the form-A file.
5. Bookmark all the webpages that you use to obtain the information.

-Task 4(d)-

Background:
Using all the materials that you already have from tasks 4(a), 4(b) and 4(c), prepare a suitable teaching and
learning aid that you could use for your class.

Props:
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:

1. Name a suitable computer application that you are going to use, to create this teaching and
learning aid (type-in your answer in form-A file).
2. Create a teaching and learning aid that incorporates the information from the Internet, the resized
picture, the video and the scanned document. If you did not manage to procure any of these four
resources, feel free to use the resources from the resource folder.
3. Also in the teaching and learning aid, include two simple interactive questions at the end of the
material, which you could use to ask your students.
4. Properly cite the source of your Internet information in your teaching and learning aid.
5. Save your file in the photosynthesis folder.

-Task 5-

Background:
The school is going to send two students to participate in the Annual State Mathematics Championship
this year. Each class is required to propose two of their best students to participate in the school level try-
out.

Props:
- computer
- USB drive with classDatabase file
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:
1. Using the classDatabase file, add a new student’s details:
• Registration no : 90-0016
• Student name : Nur Sakinah binti Murshid
• Father’s name : Murshid bin Mohd Yunos
• Address : Lot 210 Jalan Serama 6, Ulu Ayer Molek, 81100 J Bahru
• Distance from school : 0.9 km
• Contact no : 07-2522348
2. Also add her grades as follows:
• Mathematics : A
• Science : A
• Language : B
• Curricular activities : Girls’ Guide Jamboree

3. Now, using the current classDatabase file, use the query function to identify two of your
students who:
• Got A in Mathematics this term, AND

• Live near to the school, AND
• Have not had the chance to participate in the championship before.

4. Type in the two names in the spaces provided in form-A file. If you did not manage to use the
query function (in question 3), you can identify the two names manually.
5. Save your work in the USB drive.

6. Email your answer in form-A file to jessnorelmy.matjizat@rmit.edu.au and also send a copy of
the email to jesselmy@gmail.com

Appendix D

The test-item evaluation form used for the draft TBA instrument.

Test-item evaluation

-Task 1-
Test-item no No [0] Yes [1]
1 Register new account
2 Reply to the correct thread
3 Post a reply

-Task 2-
Test-item no No [0] Yes [1]
4 Set margin correctly
5 Set page number correctly
6 Set document header and footer correctly
7 Use MS Word features to create TOC
8 Create TOC manually

-Task 3-
Test-item no No [0] Yes [1]
9 Correct use of basic spreadsheet formula
10 Correct use of advanced spreadsheet formula
11 Correct way of preparing a graph

-Task 4(a)-
Test-item no No [0] Yes [1]
12 Take picture
13 Shoot video
14 Use scanner
15 Manage file

-Task 4(b)-
Test-item no No [0] Yes [1]
16 Name acceptable picture editing application
17 Picture resized correctly

-Task 4(c)-
Test-item no No [0] Yes [1]
18 Know how to evaluate credible website
19 Listed acceptable criteria for credible website
20 Use natural language search
21 Use Boolean search
22 Chooses credible websites (reflect and judge info)
23 Internet navigation – Bookmark

-Task 4(d)-
Test-item no No [0] Yes [1]
24 Naming suitable application for presentation
25 Basic use (text, background, insert new slide, slide design, transition)
26 Insert photo
27 Insert video
28 Insert scanned document
29 Advanced use (hyperlink, insert media, action button)
30 Proper citation
31 Manage file

-Task 5-
Test-item no No [0] Yes [1]
32 Add new database information (Basic)
33 Email – attachment
34 Email – use Carbon Copy

Appendix E
The test-item evaluation form used for the draft TBA instrument during pilot test-1, which
included the partial credit format.

Test-item evaluation

-Task 1-
Test-item no Score
1 Using online forum:
• Unable to complete [0]
• Register new account [1]
• Post a reply [2]
• Reply to the correct thread [3]

-Task 2-
Test-item no No [0] Yes [1]
2 Set margin correctly
3 Set page number correctly
4 Set document header and footer correctly

Score
5 Create TOC:
• Unable to complete [0]
• Create TOC manually [1]
• Use MS Word features to create TOC [2]

-Task 3-
Test-item no Score
6 Using MS Excel formula:
• Unable to complete [0]
• Use basic spreadsheet formula [1]
• Use advanced spreadsheet formula [2]
No [0] Yes [1]
7 Correct way of preparing a graph

-Task 4(a)-
Test-item no Score
8 Using ICT tools (still picture, video & scanner)
• Unable to complete [0]
• Able to use only one ICT tool [1]

• Able to use two ICT tools [2]
• Able to use all tools [3]

-Task 4(b)-
Test-item no No [0] Yes [1]
9 Named acceptable picture editing application
10 Picture resized correctly

-Task 4(c)-
Test-item no No [0] Yes [1]
11 Know how to evaluate credible website
12 Listed acceptable criteria for credible website
Score
13 Internet searching:
• Unable to complete [0]
• Use natural language search [1]
• Use Boolean search [2]

Test-item no No [0] Yes [1]


14 Chooses credible websites (reflect and judge info)
15 Internet navigation – Bookmark

-Task 4(d)-
Test-item no Score
16 Use presentation app. to create T&L resource:
• Unable to complete [0]
• Use basic features only (text, background,
insert new slide, slide design, transition) [1]
• Include advanced features (hyperlink, insert
media, action button) [2]

17 Insert media:
• Unable to complete [0]
• 1 media [1]
• 2 media [2]
• 3 media [3]

Test-item no No [0] Yes [1]


18 Proper citation
19 Manage file

-Task 5-
Test-item no No [0] Yes [1]
20 Add new database information
Score
21 Using email:
• Unable to complete [0]
• Email – attachment / CC [1]
• Email – attachment & CC [2]

Appendix F
Revised tasks of the TBA instrument to be validated during pilot test-2: round 1.

-Task 1-

Background:
You are helping a friend who has asked for your help in editing his document. Use the documentEdit file
given to you to carry out this task.

Props:
- computer
- USB drive with documentEdit file

To do:
1. Using the documentEdit file:
i. set the margin of the document to:
• Top : 1 inch
• Bottom : 1 inch
• Left : 0.8 inch
• Right : 1 inch
ii. insert the page number at the bottom, center of the document.
2. Put the header and footer of the document as:
• Header : Panduan Latihan Industri
• Footer : © 2010 Hakmilik terpelihara

3. Save your work in the USB drive.

-Task 2-

Background:
It is the end of the term and, as a class teacher, you are expected to prepare a report on your students’
grades. Use the studentGrade file given to you to carry out this task.

Props:
- computer
- USB drive with studentGrade file

To do:
1. Using the studentGrade file, calculate the total marks and percentage attained by each student (note:
if you are unable to use MS Excel formulas for your calculations, you are allowed to use whatever
methods/tools you are comfortable with).
2. Prepare a graph that shows the total number of students achieving Poor, Below average, Average,
Above average or Excellent results.
3. Save your work in the USB drive.

-Task 3(a)-

Background:
For the next science class, you are going to start with a new topic, which is “photosynthesis”. You are
going to prepare the resources for this topic, to be used as teaching and learning tools.

Props:
- small plant
- a small cup of water
- digital camera
- USB cable
- scanner

- computer
- photosynthesis diagram

To do:

1. You will need to take a picture of the plant.


2. Shoot a short video of you watering the plant with the given water.
3. You will also need to scan the photosynthesis diagram given.
4. Create a new folder named “photosynthesis” in the USB drive, and put the picture, video and
diagram file inside it.

-Task 3(b)-

Background:
Based on the resources from task 3(a), you are going to resize the picture that you have taken. If you did
not manage to take the picture before, use the picture provided in your USB drive (see folder resources).

Props:
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:

1. Name the computer application that you are going to use to resize the picture (type-in your
answer in form-A file).
2. Resize the picture to size 400x300 pixels or 40%x30% off the original size.
3. Save the newly resized picture in the photosynthesis folder.

-Task 3(c)-

Background:
To prepare the teaching and learning tools, you need to find credible information about photosynthesis
from the Internet.

Props:
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:

1. Do you know how to evaluate credible information from the Internet?


2. Give your opinion about the criteria that you should look for to ensure the credibility of
information that you obtain from the Internet (type-in your answer in form-A file).
3. Use the Internet to find suitable information on:
o Products of photosynthesis
o Role of photosynthesis in maintaining a balanced ecosystem
4. Copy and paste the URL addresses of the webpages that you are going to use into the form-A file.
5. Bookmark all the webpages that you use to obtain the information.

-Task 3(d)-

Background:
Using all the materials that you already have from tasks 3(a), 3(b) and 3(c), prepare a suitable teaching and
learning aid that you could use for your class.

Props:
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:

1. Name a suitable computer application that you are going to use, to create this teaching and
learning aid (type-in your answer in form-A file).
2. Create a teaching and learning aid that incorporates the information from the Internet, the resized
picture, the video and the scanned document. If you did not manage to procure any of these four
resources, feel free to use the resources from the resource folder.
3. Also in the teaching and learning aid, include two simple interactive questions at the end of the
material, which you could use to ask your students.

4. Properly cite the source of your Internet information in your teaching and learning aid.
5. Save your file in the photosynthesis folder.

-Task 4-

Background:
You also have a special interest in ICT and you always try to find new discussion forum sites that
discuss everything about ICT. Recently, you found out about a new discussion forum site that
specialises in discussions about ICT-literacy.

Props:
- computer

To do:
1. Go to the discussion forum site at http://ictliteracy.forumotion.net/
2. Register yourself as a new member of the discussion forum. Use the given research ID as your
Username, and use your university email address to register.
3. One of the threads in the forum discusses the benefits of discussion forum sites for teachers.
Give your thoughts on the topic.

-Task 5-

Background:
The school is going to send two students to participate in the Annual State Mathematics Championship
this year. Each class is required to propose two of their best students to participate in the school level try-
out.

Props:
- computer
- USB drive with classDatabase file
- USB drive with form-A file (open this file in order for you to answer the following questions).

To do:
1. Using the classDatabase file, add a new student’s details:
• Registration no : 90-0016
• Student name : Nur Sakinah binti Murshid
• Father’s name : Murshid bin Mohd Yunos
• Address : Lot 210 Jalan Serama 6, Ulu Ayer Molek, 81100 J Bahru
• Distance from school : 0.9 km
• Contact no : 07-2522348
2. Also add her grades as follows:
• Mathematics : A
• Science : A
• Language : B
• Curricular activities : Girls’ Guide Jamboree

3. Now, using the current classDatabase file, use the query function to identify two of your
students who:
• Got A in Mathematics this term, AND
• Live near to the school, AND

• Have not had the chance to participate in the championship before.
4. Type in the two names in the spaces provided in form-A file. If you did not manage to use the
query function (in question 3), you can identify the two names manually.
5. Save your work in the USB drive.

6. Email your answer in form-A file to jessnorelmy.matjizat@rmit.edu.au and also send a copy of
the email to jesselmy@gmail.com

Appendix G
Finalised tasks of the ICT-literacy TBA instrument.

- Task 1-

Background:
You are a high school teacher in one of the schools in Malaysia. As a teacher who teaches in a Smart
School, you want to incorporate the use of ICT in your teaching and learning activities.

Your friend has asked for your help to edit a document. Use the documentEdit file provided to complete this task.

Props:
- computer
- eksperimenICT folder which contains the documentEdit file

Steps:
1. Create a new folder in the eksperimenICT folder. Use your research ID number as the name of
this new folder.
2. Save all your work in this folder.
3. Open your documentEdit file. Using this file, you need to:
i. Set the document margin to:
• Top : 3 cm
• Bottom : 3 cm
• Left : 2.5 cm
• Right : 2.5 cm
ii. Insert page number at the bottom, middle of the document.
4. Insert the header and footer of the document as:
• Header : Panduan Latihan Industri
• Footer : © 2010 Hakmilik terpelihara
5. Save your work in the new folder that you had just created.

- Task 2-

Background:
As the class teacher, you need to prepare your students’ grade report for each term. Use the studentGrade
file provided to complete this task.

Props:
- computer
- eksperimenICT folder which contains the studentGrade file

Steps:
1. Using the studentGrade file, calculate the total marks and percentage for each student (note: if you
are unable to use the spreadsheet functions to calculate the total marks and percentage, you are
allowed to use other suitable methods).
2. Prepare a graph that shows the total number of students who achieved Poor, Below average, Average, Above
average and Excellent results.
3. Save your work in the new folder that you had just created.

- Task 3 (a)-

Background:
For your next science class, you will start with a new topic that is photosynthesis. For this topic, you plan
to prepare suitable resources that could be used as your teaching and learning aids.

Props:
- potted plant

- a cup of water
- digital camera
- USB cable
- scanner
- computer
- a diagram of the photosynthesis process

Steps:

1. Using suitable technological tools, you need to:


o take a picture of the potted plant.
o record a short video of you watering the potted plant.
o scan the diagram of the photosynthesis process.
2. Save the picture, video and the scanned document in the new folder that you had just created.

- Task 3(b)-

Background:
Using the saved items from Task 3(a), you need to resize the picture of the potted plant (note: if you were
unable to take a picture of the potted plant, you can use the picture provided in the USB drive labelled
“media”, situated at the front of the lab).

Props:
- computer
- picture of the potted plant
- eksperimenICT folder which contains the form-A file.

Steps:

1. Name one computer application that you know can be used for picture editing (type in your
answer in the form-A file).
2. Resize the potted plant picture to 400x300 pixels or to 40%x30% of its original size.
3. Save the resized picture into the new folder that you had created and name the file as resize.

- Task 3(c)-

Background:
To support your resources for your teaching and learning aids, you need reliable and valid information
about photosynthesis from trusted sources in the Internet.

Props:
- computer
- eksperimenICT folder which contains the form-A file.

Steps:

1. Use the Internet to find information regarding:


• Products of photosynthesis
2. Copy and paste the URL address of the website that you find suitable to use into the form-A
file.
3. Add a bookmark/favourite for the website that you chose.

- Task 3(d)-

Background:
Using the items from Tasks 3(a), 3(b) and 3(c), prepare a suitable teaching and learning aid for your class.

Props:
- computer

Steps:

1. Make sure that you have all the items from Task 3(a), Task 3(b) and Task 3(c) (note: if you are unable to
obtain any of the items, you can use the items provided in the USB drive labelled “media”,
situated at the front of the lab).
2. Using a suitable computer application, create a suitable teaching and learning aid for your class.
The sub-topics for your teaching and learning aid are:
• Definition of photosynthesis
• Picture of a potted plant
• Video of watering a plant
• Diagram of a photosynthesis process
• Products of photosynthesis
• References
3. Make sure that you correctly list all the references that you used.
4. Save your work in the new folder that you had just created.

-Task 4(a)-

Background:
You are really interested with the concept of using ICT in teaching and learning, and find that an Internet
discussion forum is another interesting mode of learning and discussing about ICT. Recently, you found a
new Internet discussion forum which focuses on the topic ICT-literacy.

Props:
- computer

Steps:
1. Go to the Internet forum at http://ictliteracy.forumotion.net/
2. Register yourself as a new discussion forum member. Register using your research ID number
as your username and use your university email address to complete the registration.
3. Make sure that you carefully read all the instructions during registration. Activate your account
after you finish your registration.
4. One of the discussion forum threads discusses the topic of ICT and teachers. Give your
view/opinion on that topic.

-Task 4(b)-

Background:
As a teacher, you also need to be smart in evaluating and identifying the characteristics and contents of a
reliable websites. This is important as to avoid yourself from giving the wrong, unreliable or outdated
information in your teaching and learning aids.

Props:
- computer
- eksperimenICT folder which contains the form-A file

Steps:

1. Go to the websites below:

• http://e-pembelajaran.blogspot.com/

• http://www.umich.edu/~gs265/society/waterpollution.htm

• http://en.wikipedia.org/wiki/Pollution

• http://www.girl.com.au/chocfullofacts.htm

2. Evaluate each website and identify whether it is reliable or questionable. Provide your
reasons for why you think the website is reliable or questionable. Type in your answers in the
form-A file.

- Task 5-

Background:
As the class teacher, one of your responsibilities is to register new students’ information into the school
database.

Props:
- computer
- eksperimenICT folder which contains the classDatabase file

Steps:
1. Using the classDatabase file, add in this new student’s information:
• Registration no : 90-0016
• Student name : Sakinah binti Murshid
• Father’s name : Murshid bin Yunos
• Address : Lot 210 Jalan Serama 6, Ulu Ayer Molek, 81100 J Bahru
• Distance from school : 0.9 km
• Contact no : 07-2522348
2. Insert her latest test grade and curricular activities as below:
• Mathematics : A
• Science : A
• Language : B
• Curricular activities : Girls’ Guide Jamboree
3. Save your work.

-Task 6-

1. Email the form-A file to jessnor.matjizat@student.rmit.edu.au and send a carbon copy of the
email to jesselmy@gmail.com. Put your research ID number as the subject of that email.

Appendix H
The finalised test-item evaluation form for the TBA instrument.

Test-item evaluation

-Task 1-
Test-item no No [0] Yes [1]
1 Manage file
2 Set margin correctly
3 Set page number correctly
4 Set document header and footer correctly

-Task 2-
Test-item no Score
5 Using spreadsheet formula:
• Use other calculating method [0]
• Use basic spreadsheet formula [1]

No [0] Yes [1]


6 Correct way of preparing a graph

-Task 3(a)-
Test-item no No [0] Yes [1]
7 Named acceptable picture editing application
8 Picture resized correctly
9 Manage file

-Task 3(b)-
Score
10 Internet searching:
• Unable to complete [0]
• Use natural language search [1]
• Use Boolean search [2]

Test-item no No [0] Yes [1]


11 Chooses credible websites (reflect and judge info)
12 Internet navigation – Bookmark

-Task 3(c)-
Test-item no Score
13 Use presentation application to create T&L resource:
• Unable to complete [0]
• Use other type of computer application [1]
• Use presentation-type computer application [2]
14 Insert media:
• Unable to complete [0]
• 1 media [1]
• 2 media [2]
• 3 media [3]
Test-item no No [0] Yes [1]
15 Proper citation

-Task 4(a)-
Test-item no Score
16 Using online forum:
• Unable to complete [0]
• Register new account [1]
• Post a reply [2]
• Reply to the correct thread [3]
-Task 4(b)-
Test-item no Score
17 Know how to evaluate credible website
• Unable to complete [0]
• 1 status correct [1]
• 2 status correct [2]
• 3 status correct [3]
• 4 status correct [4]

-Task 5 & 6-
Test-item no Score
18 Using MS Access features:
• Unable to complete [0]
• Manage to add parts of the new record [1]
• Manage to add all data for the new record [2]
19 Using email:
• Unable to complete [0]
• Email – attachment / CC [1]
• Email – attachment & CC [2]

Appendix I(1)

Print-out of the Quest analysis for pilot test-1 data.

allprerec
Title (The Validation and Reliability Testing Run 1)
set width = 110 !page
set logon >-allpre_log.txt
data_file <<rawdata.txt
codes "01x"
format code 1-5 gender 6 frequency 7 items 8-41

* 1 2 3
* 1234567890123456789012345678901234
key 1111111111111111111111111111111111 !score=1

item_names<<namelist.txt
*anchor !items <<pre_anc.txt
delete !items <<del_rec.txt
recode (01x)(010) !1
recode (01x)(010) !2
recode (01x)(010) !3
recode (01x)(010) !4
recode (01x)(010) !5
recode (01x)(010) !6
recode (01x)(010) !7
recode (01x)(010) !8
recode (01x)(010) !9
recode (01x)(010) !10
recode (01x)(010) !11
recode (01x)(010) !12
recode (01x)(010) !13
recode (01x)(010) !14
recode (01x)(010) !15
recode (01x)(010) !16
recode (01x)(010) !17
recode (01x)(010) !18
recode (01x)(010) !19
recode (01x)(010) !20
recode (01x)(010) !21
recode (01x)(010) !22
recode (01x)(010) !23
recode (01x)(010) !24
recode (01x)(010) !25
recode (01x)(010) !26
recode (01x)(010) !27
recode (01x)(010) !28
recode (01x)(010) !29
recode (01x)(010) !30
recode (01x)(010) !31
recode (01x)(010) !32
recode (01x)(010) !33
recode (01x)(010) !34

estimate !iter=100
show settings >-allpre_set.txt
show !map=1 >-allpre_1map.txt
show !map=2 >-allpre_2map.txt
show !map=3 >-allpre_3map.txt
show !table=1 >-allpre_1tab.txt
show !table=2 >-allpre_2tab.txt
show !table=3 >-allpre_3tab.txt
show !table=4 >-allpre_4tab.txt
*show items !form=anchor >-pre_anc.txt
show cases !order=estimate >-allpre_cso.txt
show cases !order=fit >-allpre_csf.txt
Page 1
allprerec
show items !order=estimate >-allpre_ito.txt
show items !order=fit >-allpre_fit.txt
itanal >-allpre_out.txt
logit_table >-allpre_logit.txt
kidmap 1-16>-allpre_kid.txt
show items !stat=tau >-LM_statTau.txt

bye

Page 2
allpre_2map
(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------------------------------------
Item Fit 11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------------------------------------
INFIT
MNSQ .50 .56 .63 .71 .83 1.00 1.20 1.40 1.60 1.80 2.0
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
2 item 2 . | * .
4 item 4 * . | .
5 item 5 . *| .
6 item 6 . | * .
8 item 8 . * .
11 item 11 . * .
13 item 13 . | * .
14 item 14 . | * .
16 item 16 . * | .
17 item 17 . * | .
18 item 18 . | * .
19 item 19 . *| .
22 item 22 . * | .
23 item 23 . |* .
27 item 27 . * | .
28 item 28 . | * .
29 item 29 . | * .
30 item 30 . *| .
31 item 31 . * | .
32 item 32 . *| .
34 item 34 . | * .
==============================================================================================================

Page 1
allpre_1map
(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Estimates (Thresholds)
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
3.0 |
|
|
|
|
| 29 34
|
|
|
2.0 | 2 23
|
|
X |
| 22
|
X |
|
|
1.0 XXX | 19 30
|
|
| 32
|
XXXXXX | 4 11
|
|
X | 27
.0 |
|
XX |
|
X | 17 31
|
|
| 5
|
|
-1.0 |
| 6 16
|
X |
|
| 13 14
|
|
|
-2.0 |
|
|
| 8 18 28
|
|
|
|
|
-3.0 |
--------------------------------------------------------------------------------
------------------------------
Page 1
allpre_3tab
(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Estimates (Thresholds) In input Order
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
ITEM NAME |SCORE MAXSCR| THRSH | INFT OUTFT INFT OUTFT
| | 1 | MNSQ MNSQ t t

--------------------------------------------------------------------------------
------------------------------
1 item 1 | 0 0 | Item has perfect score
| | |
| | |
2 item 2 | 3 16 | 1.98 | 1.10 1.21 .4 .5
| | .66|
| | |
3 item 3 | 0 0 | Item has perfect score
| | |
| | |
4 item 4 | 8 16 | .44 | .70 .67 -2.1 -1.1
| | .53|
| | |
5 item 5 | 12 16 | -.73 | .98 1.15 .1 .5
| | .61|
| | |
6 item 6 | 13 16 | -1.12 | 1.23 1.86 .7 1.3
| | .67|
| | |
7 item 7 | 0 0 | Item has zero score
| | |
| | |
8 item 8 | 15 16 | -2.40 | 1.00 .56 .3 .0
| | 1.06|
| | |
9 item 9 | 0 0 | Item has perfect score
| | |
| | |
10 item 10 | 0 0 | Item has zero score
| | |
| | |
11 item 11 | 8 16 | .44 | .99 .96 .0 .0
| | .53|
| | |
12 item 12 | 0 0 | Item has perfect score
| | |
| | |
13 item 13 | 14 16 | -1.62 | 1.14 .98 .4 .2
| | .79|
| | |
14 item 14 | 14 16 | -1.62 | 1.14 .98 .4 .2
| | .79|
| | |
16 item 16 | 13 16 | -1.12 | .83 .70 -.3 -.4
| | .67|
| | |
17 item 17 | 11 16 | -.40 | .96 .94 -.1 .0
| | .57|
| | |
18 item 18 | 15 16 | -2.40 | 1.20 3.26 .5 1.6
| | 1.06|
| | |
19 item 19 | 6 16 | .98 | .98 .94 -.1 -.1
Page 1
allpre_3tab
| | .54|
| | |
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Estimates (Thresholds) In input Order
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
ITEM NAME |SCORE MAXSCR| THRSH | INFT OUTFT INFT OUTFT
| | 1 | MNSQ MNSQ t t

--------------------------------------------------------------------------------
------------------------------
20 item 20 | 0 0 | Item has perfect score
| | |
| | |
21 item 21 | 0 0 | Item has zero score
| | |
| | |
22 item 22 | 4 16 | 1.60 | .84 .71 -.5 -.5
| | .60|
| | |
23 item 23 | 3 16 | 1.98 | 1.01 1.05 .2 .3
| | .66|
| | |
24 item 24 | 0 0 | Item has perfect score
| | |
| | |
25 item 25 | 0 0 | Item has perfect score
| | |
| | |
26 item 26 | 0 0 | Item has perfect score
| | |
| | |
27 item 27 | 9 16 | .17 | .87 .83 -.8 -.4
| | .53|
| | |
28 item 28 | 15 16 | -2.40 | 1.14 1.29 .4 .6
| | 1.06|
| | |
29 item 29 | 2 16 | 2.46 | 1.11 1.37 .4 .7
| | .78|
| | |
30 item 30 | 6 16 | .98 | .98 .92 .0 -.1
| | .54|
| | |
31 item 31 | 11 16 | -.40 | .88 .80 -.4 -.4
| | .57|
| | |
32 item 32 | 7 16 | .71 | .98 .96 -.1 .0
| | .53|
| | |
33 item 33 | 0 0 | Item has perfect score
| | |
| | |
34 item 34 | 2 16 | 2.46 | 1.18 1.37 .5 .7
| | .78|
| | |
--------------------------------------------------------------------------------
Page 2
allpre_3tab
------------------------------
Mean | | .00 | 1.01 1.12 .0 .2
SD | | 1.60 | .14 .57 .6 .6
================================================================================
==============================

Page 3
allpre_1tab
(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Estimates (Thresholds)
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
Summary of item Estimates
=========================

Mean .00
SD 1.60
SD (adjusted) 1.43
Reliability of estimate .80

Fit Statistics
===============
Infit Mean Square Outfit Mean Square

Mean 1.01 Mean 1.12


SD .14 SD .57

Infit t Outfit t
Mean -.01 Mean .18
SD .61 SD .61
3 items with zero scores
9 items with perfect scores
================================================================================
==============================

Page 1
allpre_out
(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 1: item 1 Infit MNSQ = .00
Disc = .00
Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1
Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 2: item 2 Infit MNSQ = 1.10
Disc = .08
Categories 0 1* x missing
Count 13 3 0 0
Percent (%) 81.3 18.8 .0
Pt-Biserial -.08 .08 NA
p-value .388 .388 NA
Mean Ability .41 .56 NA NA
Step Labels 1
Thresholds 1.98
Error .66
................................................................................
..............................
................................................................................
..............................

Item 3: item 3 Infit MNSQ = .00


Disc = .00

Categories 0 1* x missing

Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA

Step Labels 1

Thresholds
Error
Page 1
allpre_out
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 4: item 4 Infit MNSQ = .70
Disc = .71

Categories 0 1* x missing
Count 8 8 0 0
Percent (%) 50.0 50.0 .0
Pt-Biserial -.69 .69 NA
p-value .002 .002 NA
Mean Ability -.08 .95 NA NA
Step Labels 1

Thresholds .44
Error .53
................................................................................
..............................
................................................................................
..............................
Item 5: item 5 Infit MNSQ = .98
Disc = .29
Categories 0 1* x missing
Count 4 12 0 0
Percent (%) 25.0 75.0 .0
Pt-Biserial -.28 .28 NA
p-value .148 .148 NA
Mean Ability .08 .56 NA NA
Step Labels 1
Thresholds -.73
Error .61
................................................................................
..............................
................................................................................
..............................
Item 6: item 6 Infit MNSQ = 1.23
Disc = -.08

Categories 0 1* x missing

Count 3 13 0 0
Percent (%) 18.8 81.3 .0
Pt-Biserial .08 -.08 NA
p-value .388 .388 NA
Page 2
allpre_out
Mean Ability .58 .40 NA NA
Step Labels 1
Thresholds -1.12
Error .67
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 7: item 7 Infit MNSQ = .00
Disc = .00

Categories 0 1* x missing
Count 16 0 0 0
Percent (%) 100.0 .0 .0
Pt-Biserial .00 NA NA
p-value .500 NA NA
Mean Ability .44 NA NA NA
Step Labels

Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 8: item 8 Infit MNSQ = 1.00
Disc = .32
Categories 0 1* x missing
Count 1 15 0 0
Percent (%) 6.3 93.8 .0
Pt-Biserial -.31 .31 NA
p-value .123 .123 NA
Mean Ability -.44 .49 NA NA
Step Labels 1
Thresholds -2.40
Error 1.06
................................................................................
..............................
................................................................................
..............................

Item 9: item 9 Infit MNSQ = .00


Disc = .00

Page 3
allpre_out
Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1

Thresholds
Error
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 10: item 10 Infit MNSQ = .00
Disc = .00

Categories 0 1* x missing
Count 16 0 0 0
Percent (%) 100.0 .0 .0
Pt-Biserial .00 NA NA
p-value .500 NA NA
Mean Ability .44 NA NA NA
Step Labels

Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 11: item 11 Infit MNSQ = .99
Disc = .34
Categories 0 1* x missing
Count 8 8 0 0
Percent (%) 50.0 50.0 .0
Pt-Biserial -.33 .33 NA
p-value .106 .106 NA
Mean Ability .19 .69 NA NA
Step Labels 1
Thresholds .44
Error .53
................................................................................
..............................
................................................................................
..............................

Item 12: item 12 Infit MNSQ = .00


Disc = .00
Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA

Step Labels 1
Thresholds
Error
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 13: item 13 Infit MNSQ = 1.14
Disc = .15

Categories 0 1* x missing
Count 2 14 0 0
Percent (%) 12.5 87.5 .0
Pt-Biserial -.14 .14 NA
p-value .297 .297 NA
Mean Ability .15 .48 NA NA
Step Labels 1
Thresholds -1.62
Error .79
................................................................................
..............................
................................................................................
..............................
Item 14: item 14 Infit MNSQ = 1.14
Disc = .15
Categories 0 1* x missing
Count 2 14 0 0
Percent (%) 12.5 87.5 .0
Pt-Biserial -.14 .14 NA
p-value .297 .297 NA
Mean Ability .15 .48 NA NA

Step Labels 1
Thresholds -1.62
Error .79
................................................................................
..............................
................................................................................
..............................

Item 16: item 16 Infit MNSQ = .83


Disc = .52
Categories 0 1* x missing

Count 3 13 0 0
Percent (%) 18.8 81.3 .0
Pt-Biserial -.51 .51 NA
p-value .022 .022 NA
Mean Ability -.36 .62 NA NA
Step Labels 1

Thresholds -1.12
Error .67
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)

--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 17: item 17 Infit MNSQ = .96
Disc = .38
Categories 0 1* x missing
Count 5 11 0 0
Percent (%) 31.3 68.8 .0
Pt-Biserial -.37 .37 NA
p-value .081 .081 NA
Mean Ability .03 .62 NA NA

Step Labels 1
Thresholds -.40
Error .57
................................................................................
..............................
................................................................................
..............................
Item 18: item 18 Infit MNSQ = 1.20
Disc = -.33
Categories 0 1* x missing

Count 1 15 0 0
Percent (%) 6.3 93.8 .0
Pt-Biserial .32 -.32 NA
p-value .113 .113 NA
Mean Ability 1.37 .37 NA NA
Step Labels 1
Thresholds -2.40
Error 1.06
................................................................................
..............................
................................................................................
..............................
Item 19: item 19 Infit MNSQ = .98
Disc = .35
Categories 0 1* x missing
Count 10 6 0 0
Percent (%) 62.5 37.5 .0
Pt-Biserial -.33 .33 NA
p-value .103 .103 NA
Mean Ability .24 .76 NA NA
Step Labels 1
Thresholds .98
Error .54
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)

--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 20: item 20 Infit MNSQ = .00
Disc = .00

Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1
Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 21: item 21 Infit MNSQ = .00
Disc = .00
Categories 0 1* x missing
Count 16 0 0 0
Percent (%) 100.0 .0 .0
Pt-Biserial .00 NA NA
p-value .500 NA NA
Mean Ability .44 NA NA NA
Step Labels

Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 22: item 22 Infit MNSQ = .84
Disc = .50
Categories 0 1* x missing
Count 12 4 0 0
Percent (%) 75.0 25.0 .0
Pt-Biserial -.48 .48 NA
p-value .029 .029 NA
Mean Ability .23 1.07 NA NA
Step Labels 1
Thresholds 1.60
Error .60
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)

--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................

Item 23: item 23 Infit MNSQ = 1.01


Disc = .21
Categories 0 1* x missing

Count 13 3 0 0
Percent (%) 81.3 18.8 .0
Pt-Biserial -.21 .21 NA
p-value .221 .221 NA
Mean Ability .36 .76 NA NA
Step Labels 1

Thresholds 1.98
Error .66
................................................................................
..............................
................................................................................
..............................
Item 24: item 24 Infit MNSQ = .00
Disc = .00

Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1
Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 25: item 25 Infit MNSQ = .00
Disc = .00
Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1
Thresholds
Error
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 26: item 26 Infit MNSQ = .00
Disc = .00

Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1
Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 27: item 27 Infit MNSQ = .87
Disc = .51
Categories 0 1* x missing
Count 7 9 0 0
Percent (%) 43.8 56.3 .0
Pt-Biserial -.49 .49 NA
p-value .027 .027 NA
Mean Ability .02 .76 NA NA
Step Labels 1
Thresholds .17
Error .53
................................................................................
..............................
................................................................................
..............................
Item 28: item 28 Infit MNSQ = 1.14
Disc = -.01
Categories 0 1* x missing
Count 1 15 0 0
Percent (%) 6.3 93.8 .0
Pt-Biserial .01 -.01 NA
p-value .490 .490 NA
Mean Ability .44 .44 NA NA
Step Labels 1
Thresholds -2.40
Error 1.06
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)
--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 29: item 29 Infit MNSQ = 1.11
Disc = .01
Categories 0 1* x missing
Count 14 2 0 0
Percent (%) 87.5 12.5 .0
Pt-Biserial -.01 .01 NA
p-value .486 .486 NA
Mean Ability .43 .46 NA NA

Step Labels 1
Thresholds 2.46
Error .78
................................................................................
..............................
................................................................................
..............................

Item 30: item 30 Infit MNSQ = .98


Disc = .35

Categories 0 1* x missing
Count 10 6 0 0
Percent (%) 62.5 37.5 .0
Pt-Biserial -.33 .33 NA
p-value .103 .103 NA
Mean Ability .24 .76 NA NA
Step Labels 1
Thresholds .98
Error .54
................................................................................
..............................
................................................................................
..............................
Item 31: item 31 Infit MNSQ = .88
Disc = .49
Categories 0 1* x missing
Count 5 11 0 0
Percent (%) 31.3 68.8 .0
Pt-Biserial -.48 .48 NA
p-value .031 .031 NA
Mean Ability -.09 .68 NA NA
Step Labels 1
Thresholds -.40
Error .57
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)

--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)

--------------------------------------------------------------------------------
------------------------------
................................................................................
..............................
Item 32: item 32 Infit MNSQ = .98
Disc = .34
Categories 0 1* x missing

Count 9 7 0 0
Percent (%) 56.3 43.8 .0
Pt-Biserial -.33 .33 NA
p-value .106 .106 NA
Mean Ability .22 .72 NA NA
Step Labels 1
Thresholds .71
Error .53
................................................................................
..............................
................................................................................
..............................

Item 33: item 33 Infit MNSQ = .00


Disc = .00
Categories 0 1* x missing
Count 0 16 0 0
Percent (%) .0 100.0 .0
Pt-Biserial NA .00 NA
p-value NA .500 NA
Mean Ability NA .44 NA NA
Step Labels 1
Thresholds
Error
................................................................................
..............................
................................................................................
..............................
Item 34: item 34 Infit MNSQ = 1.18
Disc = -.07
Categories 0 1* x missing
Count 14 2 0 0
Percent (%) 87.5 12.5 .0
Pt-Biserial .07 -.07 NA
p-value .402 .402 NA
Mean Ability .46 .30 NA NA

Step Labels 1

Thresholds 2.46
Error .78
................................................................................
..............................
================================================================================
==============================
*****Output Continues****

(The Validation and Reliability Testing Run 1)

--------------------------------------------------------------------------------
------------------------------
Item Analysis Results for Observed Responses
11/10/11 15:43
all on all (N = 16 L = 33 Probability Level= .50)
--------------------------------------------------------------------------------
------------------------------

Mean test score 11.94


Standard deviation 2.38
Internal Consistency .40
The individual item statistics are calculated
using all available data.
The overall mean, standard deviation and internal
consistency indices assume that missing responses
are incorrect. They should only be considered useful when
there is a limited amount of missing data.
================================================================================
==============================

Page 13
allpre_1map
Each X represents 1 students
================================================================================
==============================

Appendix I(2)

Print-out of the Quest analysis for pilot test-1 re-test data.

calV22ctl
Title Multiple-Choice Test Analysis: Calibrate Vr22
* 1 2 3 4 5 6 7
*234567890123456789012345678901234567890123456789012345678901234567890
*Command lines cannot exceed 70 characters
*If last character of line is - then next line is a
*continuation not a new line (360 characters maximum)
set logon >-calV22log.txt *creates a log file
set width=88 !page *puts 88 characters in each line of output
data_file <<dataVr2_2.txt
codes " 0123" *treats blank as an expected wrong response
format code 1-5 gender 6 frequency 7 items 8-28
* 1 2 [comment line]
* 123456789012345678901 [comment line]
key 111111111111111111111 ! score=1
key 2xxx22x2xxxx2xx22xxx2 ! score=2
key 3xxxxxx3xxxxxxxx3xxxx ! score=3
*key gives the correct answer for each item
delete !items <<calV2_del.txt
estimate !iter=100 *maximum no of iterations
show settings >-calV22set.txt

show !map=1 >-calV22_1map.txt


show !map=2 >-calV22_2map.txt
show !map=3 >-calV22_3map.txt

show !table=1 >-calV22_1tab.txt


show !table=2 >-calV22_2tab.txt
show !table=3 >-calV22_3tab.txt
show !table=4 >-calV22_4tab.txt

itanal >-calV22out.txt *non-IRT item analysis


bye *quits QUEST
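The control file above declares a fixed-width record layout (candidate code in columns 1-5, gender in column 6, frequency in column 7, and the 21 item responses in columns 8-28) and scores each response against the stacked key statements. The sketch below is illustrative only and is not part of the QUEST run: it assumes the usual reading of the key statements, namely that a response matching the character of the score=n key at that position earns n, while any other (or blank) response earns 0. The record shown is hypothetical; only the column positions and key strings are taken from calV22ctl.

# Illustrative Python sketch of the record layout and partial-credit scoring above.
# Only the column positions and key strings come from calV22ctl; the record is made up.
KEYS = {                      # key string -> score awarded when the response matches
    "111111111111111111111": 1,
    "2xxx22x2xxxx2xx22xxx2": 2,
    "3xxxxxx3xxxxxxxx3xxxx": 3,
}

def score_record(line):
    code = line[0:5]          # columns 1-5 : candidate code
    gender = line[5]          # column 6    : gender
    responses = line[7:28]    # columns 8-28: the 21 item responses
    item_scores = []
    for pos, resp in enumerate(responses):
        score = 0             # blank or unmatched response scores 0
        for key, value in KEYS.items():
            if key[pos] != "x" and resp == key[pos]:
                score = max(score, value)
        item_scores.append(score)
    return code, gender, item_scores, sum(item_scores)

print(score_record("TT001F1111121131211111111111"))   # hypothetical 28-column record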

Page 1
calV22_2map
Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Fit 11/10/11 16:30
all on all (N = 16 L = 21 Probability Level= .50)
----------------------------------------------------------------------------------------
INFIT
MNSQ .53 .63 .77 1.00 1.30 1.60 1.90
------------------+---------+---------+---------+---------+---------+---------+---------
1 item 1 . | * .
2 item 2 * . | .
3 item 3 . |* .
4 item 4 . | * .
5 item 5 . * | .
7 item 7 . * .
8 item 8 . | . *
9 item 9 . * | .
10 item 10 . *| .
11 item 11 . | * .
12 item 12 . * | .
14 item 14 . * | .
15 item 15 . |* .
16 item 16 . | * .
17 item 17 . * | .
18 item 18 . *| .
19 item 19 . * | .
20 item 20 . * .
21 item 21 . | * .
========================================================================================

Page 1
calV22_2map
Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Fit 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
INFIT
MNSQ .63 .71 .83 1.00 1.20 1.40 1.60
------------------+---------+---------+---------+---------+---------+---------+---------
1 item 1 . | * .
2 item 2 * . | .
3 item 3 . | * .
4 item 4 . | * .
5 item 5 . |* .
7 item 7 . |* .
9 item 9 . * | .
10 item 10 . * | .
11 item 11 . | . *
12 item 12 . * | .
14 item 14 . * | .
15 item 15 . * .
16 item 16 . | * .
17 item 17 . *| .
18 item 18 . * | .
19 item 19 .* | .
20 item 20 . * | .
21 item 21 . | * .
========================================================================================

Page 1
calV22_1map
Multiple-Choice Test Analysis: Calibrate Vr22
--------------------------------------------------------------------------------
--------
Item Estimates (Thresholds)
11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
--------------------------------------------------------------------------------
--------
3.0 |
|
|
|
|
|
|
|
|
2.0 | 16.2 21.2
|
|
|
| 1.3 15 20
|
|
|
| 14
1.0 XX |
|
| 12
XXX |
| 18
|
X |
| 7 17.3
|
.0 XXXXX |
| 2
|
X |
|
|
X |
| 10
|
|
-1.0 | 3 19
|
|
X |
| 9
|
|
|
|
-2.0 | 4
|
|
X |
|
|
|
|
| 5 11
-3.0 |
--------------------------------------------------------------------------------
--------
Page 1
calV22_3tab
Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Estimates (Thresholds) In input Order 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
ITEM NAME |SCORE MAXSCR| THRESHOLD/S | INFT OUTFT INFT OUTFT
| | 1 2 3 | MNSQ MNSQ t t
----------------------------------------------------------------------------------------
1 item 1 | 3 16 | 1.57 | 1.11 1.29 .4 .6
| | .67|
| | |
2 item 2 | 8 16 | -.04 | .69 .63 -1.8 -1.0
| | .56 |
| | |
3 item 3 | 11 16 | -1.03 | 1.14 1.44 .5 .9
| | .64 |
| | |
4 item 4 | 13 16 | -2.01 | 1.25 2.04 .6 1.2
| | .83 |
| | |
5 item 5 | 14 16 | -2.85 | 1.01 .40 .3 -.1
| | 1.10 |
| | |
6 item 6 | 0 0 | Item has perfect score |
| | |
| | |
7 item 7 | 7 16 | .25 | 1.02 1.09 .2 .3
| | .56 |
| | |
9 item 9 | 12 16 | -1.46 | .87 .71 -.2 -.3
| | .71 |
| | |
10 item 10 | 10 16 | -.66 | .97 .87 .0 -.2
| | .60 |
| | |
11 item 11 | 14 16 | -2.85 | 1.38 4.31 .7 1.8
| | 1.10 |
| | |
12 item 12 | 5 16 | .85 | .82 .72 -.9 -.5
| | .58 |
| | |
13 item 13 | 0 0 | Item has perfect score |
| | |
| | |
14 item 14 | 4 16 | 1.19 | .94 .80 -.1 -.2
| | .61 |
| | |
15 item 15 | 3 16 | 1.57 | .99 .86 .1 .0
| | .67 |
| | |
16 item 16 | 2 16 | 2.06 | 1.11 1.50 .4 .8
| | .78 |
| | |
17 item 17 | 7 16 | .25 | .99 .92 .0 -.1
| | .56|
| | |
18 item 18 | 6 16 | .55 | .96 .88 -.2 -.1
| | .56 |
| | |
19 item 19 | 11 16 | -1.03 | .79 .69 -.5 -.5
| | .64 |
| | |
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Estimates (Thresholds) In input Order 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
ITEM NAME |SCORE MAXSCR| THRESHOLD/S | INFT OUTFT INFT OUTFT
| | 1 2 3 | MNSQ MNSQ t t
----------------------------------------------------------------------------------------
20 item 20 | 3 16 | 1.57 | .93 .78 -.1 -.1
| | .67 |
| | |
21 item 21 | 2 16 | 2.06 | 1.19 1.44 .5 .7
| | .78 |
| | |
----------------------------------------------------------------------------------------
Mean | | .00 | 1.01 1.19 .0 .2
SD | | 1.60 | .17 .88 .6 .7
========================================================================================

Page 2
calV22_1tab
Multiple-Choice Test Analysis: Calibrate Vr22
--------------------------------------------------------------------------------
--------
Item Estimates (Thresholds)
11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
--------------------------------------------------------------------------------
--------
Summary of item Estimates
=========================

Mean .00
SD 1.60
SD (adjusted) 1.43
Reliability of estimate .80

Fit Statistics
===============
Infit Mean Square Outfit Mean Square

Mean 1.01 Mean 1.19


SD .17 SD .88

Infit t Outfit t
Mean -.01 Mean .19
SD .61 SD .70
0 items with zero scores
2 items with perfect scores
================================================================================
========

Page 1
calV22_1map
Each X represents 1 students
================================================================================
========

Page 2
calV22out
Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 1: item 1 Infit MNSQ = 1.11
Disc = .13

Categories 0 1 2 3 missing

Count 1 0 0 12 3 0
Percent (%) 6.3 .0 .0 75.0 18.8
Pt-Biserial -.66 NA NA .25 .13
p-value .003 NA NA .174 .315
Mean Ability .00 NA NA .01 .19 NA

Step Labels 1 2 3

Thresholds 1.57
Error .67
........................................................................................
........................................................................................
Item 2: item 2 Infit MNSQ = .69
Disc = .68
Categories 0 1 2 3 missing
Count 1 7 8 0 0 0
Percent (%) 6.3 43.8 50.0 .0 .0
Pt-Biserial -.66 -.35 .66 NA NA
p-value .003 .095 .003 NA NA
Mean Ability .00 -.61 .63 NA NA NA
Step Labels 1
Thresholds -.04
Error .56
........................................................................................
........................................................................................
Item 3: item 3 Infit MNSQ = 1.14
Disc = .39
Categories 0 1 2 3 missing
Count 1 4 11 0 0 0
Percent (%) 6.3 25.0 68.8 .0 .0
Pt-Biserial -.66 -.03 .37 NA NA
p-value .003 .452 .076 NA NA
Mean Ability .00 -.27 .17 NA NA NA
Step Labels 1

Thresholds -1.03
Error .64
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 4: item 4 Infit MNSQ = 1.25
Disc = .42

Categories 0 1 2 3 missing

Count 1 2 13 0 0 0
Percent (%) 6.3 12.5 81.3 .0 .0
Pt-Biserial -.66 .01 .40 NA NA
p-value .003 .490 .061 NA NA
Mean Ability .00 -.14 .08 NA NA NA

Step Labels 1
Thresholds -2.01
Error .83
........................................................................................
........................................................................................

Item 5: item 5 Infit MNSQ = 1.01


Disc = .70
Categories 0 1 2 3 missing
Count 1 1 14 0 0 0
Percent (%) 6.3 6.3 87.5 .0 .0
Pt-Biserial -.66 -.27 .68 NA NA
p-value .003 .157 .002 NA NA
Mean Ability .00 -1.38 .15 NA NA NA
Step Labels 1

Thresholds -2.85
Error 1.10
........................................................................................
........................................................................................

Item 6: item 6 Infit MNSQ = .00


Disc = .00

Categories 0 1 2 3 missing

Count 1 0 15 0 0 0
Percent (%) 6.3 .0 93.8 .0 .0
Pt-Biserial -.66 NA .66 NA NA
p-value .003 NA .003 NA NA
Mean Ability .00 NA .05 NA NA NA

Step Labels 1

Thresholds
Error
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................

Item 7: item 7 Infit MNSQ = 1.02


Disc = .39
Categories 0 1 2 3 missing

Count 1 8 7 0 0 0
Percent (%) 6.3 50.0 43.8 .0 .0
Pt-Biserial -.66 -.06 .38 NA NA
p-value .003 .417 .074 NA NA
Mean Ability .00 -.22 .35 NA NA NA
Step Labels 1

Thresholds .25
Error .56
........................................................................................
........................................................................................

Item 9: item 9 Infit MNSQ = .87


Disc = .67

Categories 0 1 2 3 missing

Count 1 3 12 0 0 0
Percent (%) 6.3 18.8 75.0 .0 .0
Pt-Biserial -.66 -.31 .64 NA NA
p-value .003 .125 .004 NA NA
Mean Ability .00 -.95 .30 NA NA NA

Step Labels 1
Thresholds -1.46
Error .71
........................................................................................
........................................................................................
Item 10: item 10 Infit MNSQ = .97
Disc = .55
Categories 0 1 2 3 missing
Count 1 5 10 0 0 0
Percent (%) 6.3 31.3 62.5 .0 .0
Pt-Biserial -.66 -.21 .53 NA NA
p-value .003 .216 .017 NA NA
Mean Ability .00 -.54 .34 NA NA NA
Step Labels 1
Thresholds -.66
Error .60
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 11: item 11 Infit MNSQ = 1.38
Disc = .29

Categories 0 1 2 3 missing

Count 1 1 14 0 0 0
Percent (%) 6.3 6.3 87.5 .0 .0
Pt-Biserial -.66 .28 .28 NA NA
p-value .003 .148 .148 NA NA
Mean Ability .00 1.09 -.03 NA NA NA

Step Labels 1

Thresholds -2.85
Error 1.10
........................................................................................
........................................................................................

Item 12: item 12 Infit MNSQ = .82


Disc = .50
Categories 0 1 2 3 missing
Count 1 10 5 0 0 0
Percent (%) 6.3 62.5 31.3 .0 .0
Pt-Biserial -.66 -.13 .48 NA NA
p-value .003 .313 .029 NA NA
Mean Ability .00 -.27 .69 NA NA NA
Step Labels 1
Thresholds .85
Error .58
........................................................................................
........................................................................................
Item 13: item 13 Infit MNSQ = .00
Disc = .00
Categories 0 1 2 3 missing
Count 1 0 15 0 0 0
Percent (%) 6.3 .0 93.8 .0 .0
Pt-Biserial -.66 NA .66 NA NA
p-value .003 NA .003 NA NA
Mean Ability .00 NA .05 NA NA NA
Step Labels 1
Thresholds
Error
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 14: item 14 Infit MNSQ = .94
Disc = .37
Categories 0 1 2 3 missing

Count 1 11 4 0 0 0
Percent (%) 6.3 68.8 25.0 .0 .0
Pt-Biserial -.66 .01 .36 NA NA
p-value .003 .489 .085 NA NA
Mean Ability .00 -.15 .59 NA NA NA
Step Labels 1

Thresholds 1.19
Error .61
........................................................................................
........................................................................................

Item 15: item 15 Infit MNSQ = .99


Disc = .28
Categories 0 1 2 3 missing
Count 1 12 3 0 0 0
Percent (%) 6.3 75.0 18.8 .0 .0
Pt-Biserial -.66 .12 .28 NA NA
p-value .003 .329 .151 NA NA
Mean Ability .00 -.07 .53 NA NA NA

Step Labels 1
Thresholds 1.57
Error .67
........................................................................................
........................................................................................

Item 16: item 16 Infit MNSQ = 1.11


Disc = .00

Categories 0 1 2 3 missing

Count 1 0 13 2 0 0
Percent (%) 6.3 .0 81.3 12.5 .0
Pt-Biserial -.66 NA .35 .06 NA
p-value .003 NA .089 .406 NA
Mean Ability .00 NA .04 .07 NA NA

Step Labels 1 2

Thresholds 2.06
Error .78
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 17: item 17 Infit MNSQ = .99
Disc = .43

Categories 0 1 2 3 missing
Count 1 0 0 8 7 0
Percent (%) 6.3 .0 .0 50.0 43.8
Pt-Biserial -.66 NA NA -.09 .42
p-value .003 NA NA .364 .054
Mean Ability .00 NA NA -.27 .42 NA
Step Labels 1 2 3
Thresholds .25
Error .56
........................................................................................
........................................................................................

Item 18: item 18 Infit MNSQ = .96


Disc = .42

Categories 0 1 2 3 missing

Count 1 9 6 0 0 0
Percent (%) 6.3 56.3 37.5 .0 .0
Pt-Biserial -.66 -.07 .41 NA NA
p-value .003 .393 .060 NA NA
Mean Ability .00 -.24 .48 NA NA NA

Step Labels 1

Thresholds .55
Error .56
........................................................................................
........................................................................................

Item 19: item 19 Infit MNSQ = .79


Disc = .68
Categories 0 1 2 3 missing
Count 1 4 11 0 0 0
Percent (%) 6.3 25.0 68.8 .0 .0
Pt-Biserial -.66 -.34 .66 NA NA
p-value .003 .100 .003 NA NA
Mean Ability .00 -.88 .39 NA NA NA
Step Labels 1
Thresholds -1.03
Error .64
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Calibrate Vr22
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 11/10/11 16:36
all on all (N = 16 L = 20 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................

Item 20: item 20 Infit MNSQ = .93


Disc = .33

Categories 0 1 2 3 missing

Count 1 9 3 3 0 0
Percent (%) 6.3 56.3 18.8 18.8 .0
Pt-Biserial -.66 .00 .32 .08 NA
p-value .003 .497 .110 .382 NA
Mean Ability .00 -.15 .64 .04 NA NA

Step Labels 1

Thresholds 1.57
Error .67
........................................................................................
........................................................................................
Item 21: item 21 Infit MNSQ = 1.19
Disc = .00

Categories 0 1 2 3 missing
Count 1 0 13 2 0 0
Percent (%) 6.3 .0 81.3 12.5 .0
Pt-Biserial -.66 NA .40 .01 NA
p-value .003 NA .061 .490 NA
Mean Ability .00 NA .07 -.08 NA NA
Step Labels 1 2
Thresholds 2.06
Error .78
........................................................................................
Mean test score 8.44
Standard deviation 3.20
Internal Consistency .72
The individual item statistics are calculated
using all available data.

The overall mean, standard deviation and internal


consistency indices assume that missing responses
are incorrect. They should only be considered useful when
there is a limited amount of missing data.
========================================================================================

Appendix I(3)

Print-out of the Quest analysis for pilot test-2: round-1 data.

pilot1ctl
Title Multiple-Choice Test Analysis: Pilot 1
* 1 2 3 4 5 6 7
*234567890123456789012345678901234567890123456789012345678901234567890
*Command lines cannot exceed 70 characters
*If last character of line is - then next line is a
*continuation not a new line (360 characters maximum)
set logon >-pilot1log.txt *creates a log file
set width=88 !page *puts 88 characters in each line of output
data_file <<pilot1data.txt
codes " 01234" *treats blank as an expected wrong response
format code 1-5 gender 6 frequency 7 items 8-26
* 1 [comment line]
* 1234567890123456789 [comment line]
key 1111111111111111111 ! score=1
key 2xxxx22xxx22xx22xx2 ! score=2
key 3xxxxxxxxx3xxxx3xxx ! score=3
key xxxxxxxxxx4xxxxxxxx ! score=4
*key gives the correct answer for each item
*delete !items <<pilot1del.txt
estimate !iter=100 *maximum no of iterations
show settings >-pilot1set.txt

show !map=1 >-pilot1_1map.txt


show !map=2 >-pilot1_2map.txt
show !map=3 >-pilot1_3map.txt

show !table=1 >-pilot1_1tab.txt


show !table=2 >-pilot1_2tab.txt
show !table=3 >-pilot1_3tab.txt
show !table=4 >-pilot1_4tab.txt

itanal >-pilot1out.txt *non-IRT item analysis


bye *quits QUEST

Page 1
pilot1_2map
Multiple-Choice Test Analysis: Pilot 1
----------------------------------------------------------------------------------------
Item Fit 13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
----------------------------------------------------------------------------------------
INFIT
MNSQ .53 .63 .77 1.00 1.30 1.60 1.90
------------------+---------+---------+---------+---------+---------+---------+---------
1 item 1 . | *.
2 item 2 . * .
3 item 3 * | .
4 item 4 * . | .
5 item 5 . | *.
7 item 7 . * .
8 item 8 . *| .
9 item 9 . | * .
10 item 10 * . | .
11 item 11 . | . *
12 item 12 . | * .
13 item 13 . * | .
14 item 14 . | *.
15 item 15 * . | .
16 item 16 * . | .
17 item 17 . * | .
18 item 18 . |* .
19 item 19 . | * .
========================================================================================

Page 1
pilot1_1map
Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Estimates (Thresholds)
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
4.0 |
|
|
| 12.2
|
| 11.4
|
|
3.0 |
|
|
|
|
|
|
|
2.0 |
| 7.2
|
XXXX |
|
|
XXXX |
| 13 14
1.0 XXXX | 19.2
|
X |
| 11.3
XX | 16.3 18
|
XX |
.0 | 8
X |
| 16.2 17
| 1.3
|
| 1.2 2
|
| 3 9 11.2
-1.0 X | 16.1
|
| 4 10
|
|
|
|
| 15.2
-2.0 |
|
X |
|
|
| 5
|
|
-3.0 |
--------------------------------------------------------------------------------
--------
Page 1
pilot1_3tab
Multiple-Choice Test Analysis: Pilot 1
----------------------------------------------------------------------------------------
Item Estimates (Thresholds) In input Order 13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
----------------------------------------------------------------------------------------
ITEM NAME |SCORE MAXSCR| THRESHOLD/S | INFT OUTFT INFT
| | 1 2 3 4 | MNSQ MNSQ t
----------------------------------------------------------------------------------------
1 item 1 | 33 40 | -.59 -.36 | 1.26 .66 .6
| | 1.19 1.17 |
| | |
2 item 2 | 15 20 | -.53 | 1.00 .84 .1
| | .57 |
| | |
3 item 3 | 16 20 | -.86 | .78 .77 -.5
| | .62 |
| | |
4 item 4 | 17 20 | -1.26 | .59 .37 -.8
| | .69 |
| | |
5 item 5 | 19 20 | -2.64 | 1.28 .86 .6
| | 1.10 |
| | |
6 item 6 | 0 0 | Item has perfect score |
| | |
| | |
7 item 7 | 5 20 | 1.85 | 1.00 1.00 .1
| | .54 |
| | |
8 item 8 | 13 20 | .02 | .98 .92 .0
| | .51 |
| | |
9 item 9 | 16 20 | -.86 | 1.10 1.12 .4
| | .62 |
| | |
10 item 10 | 17 20 | -1.26 | .62 .41 -.8
| | .69 |
| | |
11 item 11 | 29 60 | -.81 .53 3.30 | 1.61 2.25 1.7
| | 1.03 .90 1.34|
| | |
12 item 12 | 1 20 | 3.66 | 1.09 2.07 .4
| | 1.03 |
| | |
13 item 13 | 8 20 | 1.13 | .93 1.13 -.5
| | .48 |
| | |
14 item 14 | 8 20 | 1.13 | 1.28 1.22 1.9
| | .48 |
| | |
15 item 15 | 18 20 | -1.80 | .51 .22 -.8
| | .82 |
| | |
16 item 16 | 46 60 | -.94 -.26 .39 | .67 .60 -.8
| | 1.19 1.05 .94 |
| | |
17 item 17 | 14 20 | -.24 | .85 .87 -.5
| | .54 |
| | |
18 item 18 | 11 20 | .48 | 1.04 1.00 .3
| | .48 |
| | |
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1
----------------------------------------------------------------------------------------
Item Estimates (Thresholds) In input Order 13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
----------------------------------------------------------------------------------------
ITEM NAME |SCORE MAXSCR| THRESHOLD/S | INFT OUTFT INFT
| | 1 2 3 4 | MNSQ MNSQ t
----------------------------------------------------------------------------------------
19 item 19 | 9 20 | .91 | 1.19 1.12 1.4
| | .48 |
| | |
----------------------------------------------------------------------------------------
Mean | | .00 | .99 .97 .2
SD | | 1.48 | .29 .52 .9
========================================================================================

OUTFT t values for the Item Estimates table above (in input order)
----------------------------------------------------------------------------------------
 1 item 1    -.1      2 item 2    -.2      3 item 3    -.3      4 item 4   -1.0
 5 item 5     .4      7 item 7     .2      8 item 8    -.1      9 item 9     .4
10 item 10   -.9     11 item 11   2.4     12 item 12   1.0     13 item 13    .4
14 item 14    .6     15 item 15  -1.0     16 item 16   -.7     17 item 17   -.2
18 item 18    .1     19 item 19    .4
Mean          .1     SD            .8
========================================================================================

Page 4
pilot1_1tab
Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Estimates (Thresholds)
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
Summary of item Estimates
=========================
Mean .00
SD 1.48
SD (adjusted) 1.26
Reliability of estimate .73

Fit Statistics
===============
Infit Mean Square Outfit Mean Square

Mean .99 Mean .97


SD .29 SD .52

Infit t Outfit t

Mean .16 Mean .08


SD .86 SD .83

0 items with zero scores


1 items with perfect scores
================================================================================
========

Page 1
pilot1out
Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
................................................................................
........
Item 1: item 1 Infit MNSQ = 1.26
Disc = .59
Categories 0 1 2 3 4
missing

Count 0 0 3 1 16 0
0
Percent (%) .0 .0 15.0 5.0 80.0 .0
Pt-Biserial NA NA -.55 -.10 .55 NA
p-value NA NA .006 .331 .006 NA
Mean Ability NA NA -.61 .19 .93 NA
NA
Step Labels 1 2 3
Thresholds -.59 -.36
Error 1.19 1.17
................................................................................
........
................................................................................
........
Item 2: item 2 Infit MNSQ = 1.00
Disc = .42
Categories 0 1 2 3 4
missing
Count 0 5 15 0 0 0
0
Percent (%) .0 25.0 75.0 .0 .0 .0
Pt-Biserial NA -.41 .41 NA NA NA
p-value NA .035 .035 NA NA NA
Mean Ability NA -.06 .90 NA NA NA
NA
Step Labels 1
Thresholds -.53
Error .57
................................................................................
........
................................................................................
........

Item 3: item 3 Infit MNSQ = .78


Disc = .63
Categories 0 1 2 3 4
missing
Count 0 4 16 0 0 0
0
Percent (%) .0 20.0 80.0 .0 .0 .0
Pt-Biserial NA -.62 .62 NA NA NA
p-value NA .002 .002 NA NA NA
Mean Ability NA -.45 .94 NA NA NA
NA
Step Labels 1
Thresholds -.86
Error .62
................................................................................
........
================================================================================
========
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
................................................................................
........
Item 4: item 4 Infit MNSQ = .59
Disc = .83
Categories 0 1 2 3 4
missing

Count 0 3 17 0 0 0
0
Percent (%) .0 15.0 85.0 .0 .0 .0
Pt-Biserial NA -.81 .81 NA NA NA
p-value NA .000 .000 NA NA NA
Mean Ability NA -1.07 .97 NA NA NA
NA
Step Labels 1
Thresholds -1.26
Error .69
................................................................................
........
................................................................................
........
Item 5: item 5 Infit MNSQ = 1.28
Disc = .17
Categories 0 1 2 3 4
missing
Count 0 1 19 0 0 0
0
Percent (%) .0 5.0 95.0 .0 .0 .0
Pt-Biserial NA -.16 .16 NA NA NA
p-value NA .245 .245 NA NA NA
Mean Ability NA -.01 .70 NA NA NA
NA
Step Labels 1
Thresholds -2.64
Error 1.10
................................................................................
........
................................................................................
........
Item 6: item 6 Infit MNSQ = .00
Disc = .00

Categories 0 1 2 3 4
missing
Count 0 0 20 0 0 0
0
Percent (%) .0 .0 100.0 .0 .0 .0
Pt-Biserial NA NA .00 NA NA NA
p-value NA NA .500 NA NA NA
Mean Ability NA NA .66 NA NA NA
NA
Step Labels 1

Thresholds
Error
................................................................................
........
================================================================================
========
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1

--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)

--------------------------------------------------------------------------------
--------
................................................................................
........
Item 7: item 7 Infit MNSQ = 1.00
Disc = .00
Categories 0 1 2 3 4
missing

Count 0 0 15 5 0 0
0
Percent (%) .0 .0 75.0 25.0 .0 .0
Pt-Biserial NA NA -.22 .22 NA NA
p-value NA NA .179 .179 NA NA
Mean Ability NA NA .54 1.04 NA NA
NA
Step Labels 1 2
Thresholds 1.85
Error .54
................................................................................
........
................................................................................
........
Item 8: item 8 Infit MNSQ = .98
Disc = .43
Categories 0 1 2 3 4
missing
Count 0 7 13 0 0 0
0
Percent (%) .0 35.0 65.0 .0 .0 .0
Pt-Biserial NA -.42 .42 NA NA NA
p-value NA .034 .034 NA NA NA
Mean Ability NA .13 .95 NA NA NA
NA
Step Labels 1

Thresholds .02
Error .51
................................................................................
........
................................................................................
........
Item 9: item 9 Infit MNSQ = 1.10
Disc = .30
Categories 0 1 2 3 4
missing
Count 0 4 16 0 0 0
0
Percent (%) .0 20.0 80.0 .0 .0 .0
Pt-Biserial NA -.29 .29 NA NA NA
p-value NA .106 .106 NA NA NA
Mean Ability NA .07 .81 NA NA NA
NA
Step Labels 1
Thresholds -.86
Error .62
................................................................................
........
================================================================================
========
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
................................................................................
........

Item 10: item 10 Infit MNSQ = .62


Disc = .79
Categories 0 1 2 3 4
missing
Count 0 3 17 0 0 0
0
Percent (%) .0 15.0 85.0 .0 .0 .0
Pt-Biserial NA -.77 .77 NA NA NA
p-value NA .000 .000 NA NA NA
Mean Ability NA -1.01 .96 NA NA NA
NA
Step Labels 1
Thresholds -1.26
Error .69
................................................................................
........
................................................................................
........
Item 11: item 11 Infit MNSQ = 1.61
Disc = .12
Categories 0 1 2 3 4
missing

Count 0 0 3 6 10 1
0
Percent (%) .0 .0 15.0 30.0 50.0 5.0
Pt-Biserial NA NA -.05 -.04 -.01 .19
p-value NA NA .425 .429 .478 .207
Mean Ability NA NA .58 .59 .64 1.55
NA
Step Labels 1 2 3 4
Thresholds -.81 .53 3.30
Error 1.03 .90 1.34
................................................................................
........
................................................................................
........
Item 12: item 12 Infit MNSQ = 1.09
Disc = .00
Categories 0 1 2 3 4
missing
Count 0 0 19 1 0 0
0
Percent (%) .0 .0 95.0 5.0 .0 .0
Pt-Biserial NA NA .10 -.10 NA NA
p-value NA NA .331 .331 NA NA
Mean Ability NA NA .69 .19 NA NA
NA
Step Labels 1 2
Thresholds 3.66
Error 1.03
................................................................................
........
================================================================================
========
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1

--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)

--------------------------------------------------------------------------------
--------
................................................................................
........
Item 13: item 13 Infit MNSQ = .93
Disc = .27
Categories 0 1 2 3 4
missing
Count 0 12 8 0 0 0
0
Percent (%) .0 60.0 40.0 .0 .0 .0
Pt-Biserial NA -.27 .27 NA NA NA
p-value NA .129 .129 NA NA NA
Mean Ability NA .41 1.03 NA NA NA
NA

Step Labels 1
Thresholds 1.13
Error .48
................................................................................
........
................................................................................
........
Item 14: item 14 Infit MNSQ = 1.28
Disc = .08
Categories 0 1 2 3 4
missing
Count 0 12 8 0 0 0
0
Percent (%) .0 60.0 40.0 .0 .0 .0
Pt-Biserial NA -.08 .08 NA NA NA
p-value NA .369 .369 NA NA NA
Mean Ability NA .62 .72 NA NA NA
NA
Step Labels 1
Thresholds 1.13
Error .48
................................................................................
........
................................................................................
........
Item 15: item 15 Infit MNSQ = .51
Disc = .00

Categories 0 1 2 3 4
missing

Count 0 0 2 18 0 0
0
Percent (%) .0 .0 10.0 90.0 .0 .0
Pt-Biserial NA NA -.84 .84 NA NA
p-value NA NA .000 .000 NA NA
Mean Ability NA NA -1.60 .91 NA NA
NA
Step Labels 1 2

Thresholds -1.80
Error .82
................................................................................
........
================================================================================
========
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
................................................................................
........
Item 16: item 16 Infit MNSQ = .67
Disc = .82
Categories 0 1 2 3 4
missing

Count 0 2 2 4 12 0
0
Percent (%) .0 10.0 10.0 20.0 60.0 .0
Pt-Biserial NA -.84 .02 -.13 .61 NA
p-value NA .000 .464 .293 .002 NA
Mean Ability NA -1.60 .65 .36 1.14 NA
NA

Step Labels 1 2 3
Thresholds -.94 -.26 .39
Error 1.19 1.05 .94
................................................................................
........
................................................................................
........

Item 17: item 17 Infit MNSQ = .85


Disc = .57
Categories 0 1 2 3 4
missing
Count 0 6 14 0 0 0
0
Percent (%) .0 30.0 70.0 .0 .0 .0
Pt-Biserial NA -.55 .55 NA NA NA
p-value NA .006 .006 NA NA NA
Mean Ability NA -.10 .99 NA NA NA
NA
Step Labels 1
Thresholds -.24
Error .54
................................................................................
........
................................................................................
........

Item 18: item 18 Infit MNSQ = 1.04


Disc = .34
Categories 0 1 2 3 4
missing
Count 0 9 11 0 0 0
0
Percent (%) .0 45.0 55.0 .0 .0 .0
Pt-Biserial NA -.33 .33 NA NA NA
p-value NA .076 .076 NA NA NA
Mean Ability NA .33 .94 NA NA NA
NA
Step Labels 1

Thresholds .48
Error .48
................................................................................
........
================================================================================
========
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 1
--------------------------------------------------------------------------------
--------
Item Analysis Results for Observed Responses
13/10/11 18:46
all on all (N = 20 L = 19 Probability Level= .50)
--------------------------------------------------------------------------------
--------
................................................................................
........
Item 19: item 19 Infit MNSQ = 1.19
Disc = .00

Categories 0 1 2 3 4
missing

Count 0 0 11 9 0 0
0
Percent (%) .0 .0 55.0 45.0 .0 .0
Pt-Biserial NA NA -.19 .19 NA NA
p-value NA NA .212 .212 NA NA
Mean Ability NA NA .53 .82 NA NA
NA

Step Labels 1 2
Thresholds .91
Error .48
................................................................................
........
Mean test score 14.75
Standard deviation 3.75
Internal Consistency .70
The individual item statistics are calculated
using all available data.
The overall mean, standard deviation and internal
consistency indices assume that missing responses
are incorrect. They should only be considered useful when
there is a limited amount of missing data.
================================================================================
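The itanal summaries above each report an Internal Consistency index for the scored responses (.40 for the validation and reliability run, .72 for the pilot test-1 re-test calibration and .70 for pilot test-2 round-1). As an illustrative aside, and assuming this index is the conventional coefficient alpha (equivalent to KR-20 when every item is dichotomous), the sketch below shows how such a figure is computed from a scored response matrix; the matrix used here is hypothetical and does not come from the study data.

# Illustrative Python sketch of coefficient alpha; assumes that is what QUEST's
# "Internal Consistency" index reports. The 4-person, 3-item matrix is hypothetical.
def cronbach_alpha(scores):
    # scores: one list of item scores per respondent
    n_items = len(scores[0])

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    item_var_sum = sum(variance([row[i] for row in scores]) for i in range(n_items))
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - item_var_sum / total_var)

print(round(cronbach_alpha([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]), 2))   # 0.75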
Page 8
pilot1_1map
Each X represents 1 students
================================================================================
========

Appendix I(4)

Print-out of the Quest analysis for pilot test-2: round-2 data.

pilot2ctl
Title Multiple-Choice Test Analysis: Pilot 2
* 1 2 3 4 5 6 7
*234567890123456789012345678901234567890123456789012345678901234567890
*Command lines cannot exceed 70 characters
*If last character of line is - then next line is a
*continuation not a new line (360 characters maximum)
set logon >-pilot2log.txt *creates a log file
set width=88 !page *puts 88 characters in each line of output
data_file <<pilot2data.txt
codes " 01234" *treats blank as an expected wrong response
format code 1-5 gender 6 frequency 7 items 8-25
* 1 [comment line]
* 123456789012345678 [comment line]
key 111111111111111111 ! score=1
key 2xxxx2xxx22xx22x22 ! score=2
key 3xxxxxxxx3xxxx3xxx ! score=3
key xxxxxxxxx4xxxxxxxx ! score=4
*key gives the correct answer for each item
*delete !items <<pilot2del.txt
estimate !iter=100 *maximum no of iterations
show settings >-pilot2set.txt

show !map=1 >-pilot2_1map.txt


show !map=2 >-pilot2_2map.txt
show !map=3 >-pilot2_3map.txt

show !table=1 >-pilot2_1tab.txt


show !table=2 >-pilot2_2tab.txt
show !table=3 >-pilot2_3tab.txt
show !table=4 >-pilot2_4tab.txt
kidmap all>-pilot2_kid.txt
kidmap all >-GROUPkids.kid
itanal >-pilot2out.txt *non-IRT item analysis
bye *quits QUEST

Page 1
pilot2_2map
Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Fit 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
INFIT
MNSQ .53 .63 .77 1.00 1.30 1.60 1.90
------------------+---------+---------+---------+---------+---------+---------+---------
1 item 1 . | *.
2 item 2 . *| .
3 item 3 * | .
4 item 4 * . | .
5 item 5 . | * .
6 item 6 . |* .
7 item 7 . *| .
8 item 8 . | * .
9 item 9 * . | .
10 item 10 . | . *
11 item 11 . | * .
12 item 12 . * | .
13 item 13 . | *
14 item 14 * . | .
15 item 15 * . | .
16 item 16 . * | .
17 item 17 . | * .
18 item 18 . | * .
========================================================================================

Page 1
pilot2_1map
Multiple-Choice Test Analysis: Pilot 2
--------------------------------------------------------------------------------
Item Estimates (Thresholds)                                       20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
--------------------------------------------------------------------------------
4.0 |
|
|
| 11.2
|
|
| 10.4
|
3.0 |
|
|
|
|
|
|
|
2.0 |
| 6.2
|
XXX |
|
|
XXXXX | 12 13
|
1.0 XXXX | 18.2
|
X |
| 10.3 17.2
XX | 15.3
X |
|
.0 XX | 7
|
| 15.2 16
| 1.3
| 2
| 1.2
|
| 3 8 10.2 17.1
-1.0 | 15.1
X |
| 4 9
|
|
|
| 14.2
X |
-2.0 |
|
|
|
| 5
|
|
|
-3.0 |
--------------------------------------------------------------------------------
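Reading note for the variable map above (a hedged illustration, not QUEST output): persons (each X) and item-step thresholds such as 18.2 or 11.2 (item.step) are placed on the same logit scale, and under the Rasch model the probability that a person succeeds at a step is a logistic function of the gap between the person's ability and that step's threshold. The Python snippet below only illustrates that relationship; the example figures are taken from the threshold table that follows.

import math

def step_success_probability(ability: float, threshold: float) -> float:
    # Rasch model: P(success) = 1 / (1 + exp(-(ability - threshold)))
    return 1.0 / (1.0 + math.exp(-(ability - threshold)))

# A person located at 1.0 logit has roughly a .52 chance at step 18.2
# (threshold .93) but only about a .06 chance at step 11.2 (threshold 3.67).
print(step_success_probability(1.0, 0.93))   # ~0.52
print(step_success_probability(1.0, 3.67))   # ~0.06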
pilot2_3tab
Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Estimates (Thresholds) In input Order                              20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
ITEM NAME    |SCORE MAXSCR|  THRESHOLD/S               |  INFT  OUTFT   INFT  OUTFT
             |            |   1      2      3      4   |  MNSQ   MNSQ     t      t
----------------------------------------------------------------------------------------
 1 item 1    |   33   40  |  -.55   -.31               |  1.27    .70     .7    -.1
             |            |  1.17   1.14               |
 2 item 2    |   15   20  |  -.50                      |   .99    .85     .1    -.2
             |            |   .56                      |
 3 item 3    |   16   20  |  -.82                      |   .77    .78    -.5    -.3
             |            |   .61                      |
 4 item 4    |   17   20  | -1.21                      |   .59    .38    -.9   -1.0
             |            |   .68                      |
 5 item 5    |   19   20  | -2.53                      |  1.19    .83     .5     .3
             |            |  1.08                      |
 6 item 6    |    5   20  |  1.86                      |  1.02   1.00     .2     .2
             |            |   .53                      |
 7 item 7    |   13   20  |   .04                      |   .98    .94     .0    -.1
             |            |   .51                      |
 8 item 8    |   16   20  |  -.82                      |  1.10   1.14     .4     .4
             |            |   .61                      |
 9 item 9    |   17   20  | -1.21                      |   .59    .38    -.9   -1.0
             |            |   .68                      |
10 item 10   |   29   60  |  -.78    .56   3.29        |  1.60   1.96    1.7    2.0
             |            |  1.03    .90   1.34        |
11 item 11   |    1   20  |  3.67                      |  1.09   2.33     .4    1.2
             |            |  1.03                      |
12 item 12   |    8   20  |  1.15                      |   .95   1.17    -.3     .6
             |            |   .48                      |
13 item 13   |    8   20  |  1.15                      |  1.29   1.23    2.0     .7
             |            |   .48                      |
14 item 14   |   18   20  | -1.73                      |   .52    .22    -.8   -1.1
             |            |   .80                      |
15 item 15   |   46   60  |  -.91   -.23    .41        |   .66    .61    -.8    -.7
             |            |  1.19   1.05    .93        |
16 item 16   |   14   20  |  -.21                      |   .80    .76    -.7    -.5
             |            |   .53                      |
17 item 17   |   28   40  |  -.78    .56               |  1.19   1.25     .7     .7
             |            |  1.06    .90               |
18 item 18   |    9   20  |   .93                      |  1.15   1.08    1.1     .3
             |            |   .48                      |
----------------------------------------------------------------------------------------
Mean         |            |   .00                      |   .99    .98     .1     .1
SD           |            |  1.45                      |   .29    .52     .9     .8
========================================================================================

pilot2_1tab
Multiple-Choice Test Analysis: Pilot 2
--------------------------------------------------------------------------------
Item Estimates (Thresholds)                                       20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
--------------------------------------------------------------------------------
Summary of item Estimates
=========================
Mean .00
SD 1.45
SD (adjusted) 1.22
Reliability of estimate .71

Fit Statistics
===============
Infit Mean Square Outfit Mean Square

Mean .99 Mean .98


SD .29 SD .52

Infit t Outfit t

Mean .15 Mean .08


SD .88 SD .81

0 items with zero scores


0 items with perfect scores
================================================================================
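For context on the fit statistics summarised above (item means of .99 for infit and .98 for outfit), the sketch below shows the conventional way infit and outfit mean squares are computed for a dichotomous Rasch model: outfit is the unweighted mean of squared standardised residuals, and infit the information-weighted version. This is a generic illustration under those standard formulas, not a reproduction of QUEST's internals, and the array and function names are assumptions.

import numpy as np

def rasch_fit_mnsq(responses: np.ndarray, ability: np.ndarray, difficulty: np.ndarray):
    """responses: persons x items matrix of 0/1 scores; ability/difficulty in logits."""
    expected = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
    variance = expected * (1.0 - expected)
    residual = responses - expected
    outfit = np.mean(residual ** 2 / variance, axis=0)                # unweighted MNSQ per item
    infit = np.sum(residual ** 2, axis=0) / np.sum(variance, axis=0)  # information-weighted MNSQ
    return infit, outfit

Values near 1.0 indicate responses close to model expectation, which is one way of reading the item means of .99 and .98 reported above.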

pilot2out
Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 1: item 1 Infit MNSQ = 1.27
Disc = .58

Categories 0 1 2 3 4 missing

Count 0 0 3 1 16 0 0
Percent (%) .0 .0 15.0 5.0 80.0 .0
Pt-Biserial NA NA -.52 -.15 .55 NA
p-value NA NA .009 .263 .006 NA
Mean Ability NA NA -.43 .08 .94 NA NA

Step Labels 1 2 3

Thresholds -.55 -.31


Error 1.17 1.14
........................................................................................
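The Pt-Biserial row in these item tables is, in essence, the correlation between choosing a given response category and overall performance on the test. The short sketch below shows that calculation for one category; the variable names and the use of the raw total score are assumptions about QUEST's exact procedure, intended only to clarify how a pair such as -.52 (distractor) and .55 (keyed category) for item 1 arises.

import numpy as np

def category_point_biserial(responses: np.ndarray, totals: np.ndarray, category: int) -> float:
    """Correlation between an indicator for 'chose this category' and the total score."""
    chose = (responses == category).astype(float)
    return float(np.corrcoef(chose, totals)[0, 1])

A clearly negative value for a distractor together with a positive value for the keyed category, as for item 1 above, is the pattern expected of an item that separates stronger from weaker examinees.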
........................................................................................
Item 2: item 2 Infit MNSQ = .99
Disc = .42
Categories 0 1 2 3 4 missing
Count 0 5 15 0 0 0 0
Percent (%) .0 25.0 75.0 .0 .0 .0
Pt-Biserial NA -.41 .41 NA NA NA
p-value NA .037 .037 NA NA NA
Mean Ability NA .03 .91 NA NA NA NA
Step Labels 1
Thresholds -.50
Error .56
........................................................................................
........................................................................................
Item 3: item 3 Infit MNSQ = .77
Disc = .63
Categories 0 1 2 3 4 missing
Count 0 4 16 0 0 0 0
Percent (%) .0 20.0 80.0 .0 .0 .0
Pt-Biserial NA -.61 .61 NA NA NA
p-value NA .002 .002 NA NA NA
Mean Ability NA -.34 .95 NA NA NA NA
Step Labels 1

Thresholds -.82
Error .61
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 4: item 4 Infit MNSQ = .59
Disc = .83

Categories 0 1 2 3 4 missing

Count 0 3 17 0 0 0 0
Percent (%) .0 15.0 85.0 .0 .0 .0
Pt-Biserial NA -.81 .81 NA NA NA
p-value NA .000 .000 NA NA NA
Mean Ability NA -.93 .98 NA NA NA NA

Step Labels 1
Thresholds -1.21
Error .68
........................................................................................
........................................................................................

Item 5: item 5 Infit MNSQ = 1.19


Disc = .15
Categories 0 1 2 3 4 missing
Count 0 1 19 0 0 0 0
Percent (%) .0 5.0 95.0 .0 .0 .0
Pt-Biserial NA -.15 .15 NA NA NA
p-value NA .263 .263 NA NA NA
Mean Ability NA .08 .73 NA NA NA NA
Step Labels 1

Thresholds -2.53
Error 1.08
........................................................................................
........................................................................................

Item 6: item 6 Infit MNSQ = 1.02


Disc = .00

Categories 0 1 2 3 4 missing

Count 0 0 15 5 0 0 0
Percent (%) .0 .0 75.0 25.0 .0 .0
Pt-Biserial NA NA -.20 .20 NA NA
p-value NA NA .194 .194 NA NA
Mean Ability NA NA .59 1.01 NA NA NA

Step Labels 1 2

Thresholds 1.86
Error .53
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................

Item 7: item 7 Infit MNSQ = .98


Disc = .41
Categories 0 1 2 3 4 missing

Count 0 7 13 0 0 0 0
Percent (%) .0 35.0 65.0 .0 .0 .0
Pt-Biserial NA -.40 .40 NA NA NA
p-value NA .039 .039 NA NA NA
Mean Ability NA .21 .95 NA NA NA NA
Step Labels 1

Thresholds .04
Error .51
........................................................................................
........................................................................................

Item 8: item 8 Infit MNSQ = 1.10


Disc = .27

Categories 0 1 2 3 4 missing

Count 0 4 16 0 0 0 0
Percent (%) .0 20.0 80.0 .0 .0 .0
Pt-Biserial NA -.27 .27 NA NA NA
p-value NA .129 .129 NA NA NA
Mean Ability NA .20 .82 NA NA NA NA

Step Labels 1
Thresholds -.82
Error .61
........................................................................................
........................................................................................
Item 9: item 9 Infit MNSQ = .59
Disc = .83
Categories 0 1 2 3 4 missing
Count 0 3 17 0 0 0 0
Percent (%) .0 15.0 85.0 .0 .0 .0
Pt-Biserial NA -.81 .81 NA NA NA
p-value NA .000 .000 NA NA NA
Mean Ability NA -.93 .98 NA NA NA NA
Step Labels 1
Thresholds -1.21
Error .68
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 10: item 10 Infit MNSQ = 1.60
Disc = .11

Categories 0 1 2 3 4 missing

Count 0 0 3 6 10 1 0
Percent (%) .0 .0 15.0 30.0 50.0 5.0
Pt-Biserial NA NA -.03 -.04 -.03 .20
p-value NA NA .453 .427 .458 .202
Mean Ability NA NA .64 .62 .66 1.57 NA

Step Labels 1 2 3 4

Thresholds -.78 .56 3.29


Error 1.03 .90 1.34
........................................................................................
........................................................................................

Item 11: item 11 Infit MNSQ = 1.09


Disc = .00
Categories 0 1 2 3 4 missing
Count 0 0 19 1 0 0 0
Percent (%) .0 .0 95.0 5.0 .0 .0
Pt-Biserial NA NA .15 -.15 NA NA
p-value NA NA .263 .263 NA NA
Mean Ability NA NA .73 .08 NA NA NA
Step Labels 1 2
Thresholds 3.67
Error 1.03
........................................................................................
........................................................................................
Item 12: item 12 Infit MNSQ = .95
Disc = .24
Categories 0 1 2 3 4 missing
Count 0 12 8 0 0 0 0
Percent (%) .0 60.0 40.0 .0 .0 .0
Pt-Biserial NA -.24 .24 NA NA NA
p-value NA .157 .157 NA NA NA
Mean Ability NA .48 1.01 NA NA NA NA
Step Labels 1
Thresholds 1.15
Error .48
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 13: item 13 Infit MNSQ = 1.29
Disc = .06
Categories 0 1 2 3 4 missing

Count 0 12 8 0 0 0 0
Percent (%) .0 60.0 40.0 .0 .0 .0
Pt-Biserial NA -.06 .06 NA NA NA
p-value NA .406 .406 NA NA NA
Mean Ability NA .69 .70 NA NA NA NA
Step Labels 1

Thresholds 1.15
Error .48
........................................................................................
........................................................................................

Item 14: item 14 Infit MNSQ = .52


Disc = .00
Categories 0 1 2 3 4 missing
Count 0 0 2 18 0 0 0
Percent (%) .0 .0 10.0 90.0 .0 .0
Pt-Biserial NA NA -.85 .85 NA NA
p-value NA NA .000 .000 NA NA
Mean Ability NA NA -1.43 .93 NA NA NA

Step Labels 1 2
Thresholds -1.73
Error .80
........................................................................................
........................................................................................

Item 15: item 15 Infit MNSQ = .66


Disc = .82

Categories 0 1 2 3 4 missing

Count 0 2 2 4 12 0 0
Percent (%) .0 10.0 10.0 20.0 60.0 .0
Pt-Biserial NA -.85 .03 -.14 .61 NA
p-value NA .000 .444 .279 .002 NA
Mean Ability NA -1.43 .71 .38 1.15 NA NA

Step Labels 1 2 3

Thresholds -.91 -.23 .41


Error 1.19 1.05 .93
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------
........................................................................................
Item 16: item 16 Infit MNSQ = .80
Disc = .61

Categories 0 1 2 3 4 missing
Count 0 6 14 0 0 0 0
Percent (%) .0 30.0 70.0 .0 .0 .0
Pt-Biserial NA -.60 .60 NA NA NA
p-value NA .003 .003 NA NA NA
Mean Ability NA -.09 1.03 NA NA NA NA
Step Labels 1
Thresholds -.21
Error .53
........................................................................................
........................................................................................

Item 17: item 17 Infit MNSQ = 1.19


Disc = .41

Categories 0 1 2 3 4 missing

Count 0 3 6 11 0 0 0
Percent (%) .0 15.0 30.0 55.0 .0 .0
Pt-Biserial NA -.31 -.15 .37 NA NA
p-value NA .090 .258 .056 NA NA
Mean Ability NA .09 .47 .98 NA NA NA

Step Labels 1 2

Thresholds -.78 .56


Error 1.06 .90
........................................................................................
........................................................................................

Item 18: item 18 Infit MNSQ = 1.15


Disc = .00
Categories 0 1 2 3 4 missing
Count 0 0 11 9 0 0 0
Percent (%) .0 .0 55.0 45.0 .0 .0
Pt-Biserial NA NA -.22 .22 NA NA
p-value NA NA .177 .177 NA NA
Mean Ability NA NA .55 .87 NA NA NA
Step Labels 1 2
Thresholds .93
Error .48
........................................................................................
========================================================================================
*****Output Continues****

Multiple-Choice Test Analysis: Pilot 2
----------------------------------------------------------------------------------------
Item Analysis Results for Observed Responses 20/10/11 20: 4
all on all (N = 20 L = 18 Probability Level= .50)
----------------------------------------------------------------------------------------

Mean test score          15.60
Standard deviation        3.85
Internal Consistency       .70

The individual item statistics are calculated
using all available data.
The overall mean, standard deviation and internal
consistency indices assume that missing responses
are incorrect. They should only be considered useful when
there is a limited amount of missing data.
========================================================================================
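The Internal Consistency figure of .70 quoted above is reported with missing responses scored as incorrect. The sketch below shows a Cronbach's-alpha style calculation performed in that way; whether QUEST uses exactly this coefficient is an assumption, and the function name and data layout are illustrative.

import numpy as np

def internal_consistency(item_scores: np.ndarray) -> float:
    """item_scores: persons x items matrix of item scores, with NaN marking missing."""
    x = np.nan_to_num(item_scores, nan=0.0)     # treat missing responses as incorrect (0)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

Because the instrument mixes dichotomous and partial-credit items, an alpha-type coefficient of this kind is a plausible reading of the index; a strict KR-20 would apply only to purely dichotomous items.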

pilot2_1map
Each X represents 1 students
================================================================================
