


Computers in Human Behavior 29 (2013) 1150–1158

The effects of static and adaptive performance feedback in game-based training

Stephen R. Serge a,b,*, Heather A. Priest c, Paula J. Durlach c, Cheryl I. Johnson c

a University of Central Florida, Department of Psychology, P.O. Box 161390, Orlando, FL 32816-1390, USA
b Consortium Research Fellows Program, U.S. Army Research Institute, 12423 Research Pkwy, Orlando, FL 32826-3276, USA
c U.S. Army Research Institute for the Behavioral and Social Sciences, 12423 Research Pkwy, Orlando, FL 32826-3276, USA

Article history: Available online 11 November 2012

Keywords: Adaptive feedback; Game-based training; Instruction; Virtual environments

Abstract

Training in virtual environments (VEs) has the potential to establish mental models and task mastery while providing a safe environment in which to practice. Performance feedback is known to contribute to this learning; however, the most effective ways to provide feedback in VEs have not been established. The present study examined the effects of differing feedback content, focusing on adaptive feedback. Participants learned search procedures during multiple missions in a VE. A control group received only a performance score after each mission. Two groups additionally received either detailed or general feedback after each mission, while two other groups received feedback that adapted based on their performance (either detailed-to-general or general-to-detailed). Groups that received detailed feedback from the start of training had faster performance improvement than all other groups; however, all feedback groups showed improved performance and by the fourth mission performed at levels above the control group. Results suggest that detailed feedback early in the training cycle is the most beneficial for the fastest learning of new task skills in VEs.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Current research on game-based training (GBT) supports the notion that trainees are able to transfer skills learned in a simulation or virtual environment (VE) to both similar and novel tasks in other applications. These observations suggest that VEs can be readily used as a means to train new skill sets, with enhanced safety and fewer resources, compared to live training (Boot, Neider, & Kramer, 2009; Day & Goldstone, 2011; Duffy, Ng, & Ramakrishnan, 2004; Lee, Bond, Scarbrough, Gillan, & Cooke, 2007). However, research has demonstrated that simply playing a game does not necessarily lead to learning; instead, there is a need for proper instruction and guidance that informs trainees of what information is necessary for learning processes to take place (Mayer, 2004). In fact, it is widely accepted that feedback is necessary to ensure learning because it helps to shape the perception, cognition, or action of the learner (Hays, 2005; Mayer & Johnson, 2010; Moreno, 2004).

Nevertheless, there is a lack of consensus on how to deliver feedback in a training environment. Research examining the level of information provided to trainees has yielded mixed results regarding the impact of feedback on learning (e.g., Phye & Sanders, 1994). As a result, considerable debate continues over what types of feedback may function best under different circumstances or at different points towards mastery (Pea, 2004; Reiser, 2004). These conflicting results suggest the possibility that the nature of feedback should adapt based on the learner's level of mastery (e.g., the expertise reversal effect; Kalyuga, Ayres, Chandler, & Sweller, 2003). This approach would more closely mimic human instructor–trainee interactions and allow feedback to change in response to a trainee's current aptitude.

Generally, feedback is used to inform trainees about their current or overall performance, including telling them what they are doing correctly, what they are doing incorrectly, and/or providing suggestions and guidance that allow trainees to make revisions to their own performance. Shute (2008) discusses two classes of feedback: formative and outcome. Formative feedback refers to feedback that provides information to the learner that is intended to "modify his or her thinking or behavior for the purpose of improving learning" (p. 154). Outcome feedback, sometimes referred to as knowledge of results, only includes verification information (e.g., a score; Narciss, 2008). It is generally agreed that formative feedback of some kind leads to better learning than outcome feedback alone (Shute, 2008).

Abbreviations: VE, virtual environment; GBT, game-based training; CLT, cognitive load theory; DG, detailed-to-general; GD, general-to-detailed; VGE, video game experience.
* Corresponding author at: University of Central Florida, Department of Psychology, P.O. Box 161390, Orlando, FL 32816-1390, USA. Tel.: +1 407 384 3900.
E-mail addresses: stephen.serge@us.army.mil, stephenserge@gmail.com (S.R. Serge); heather.priest@us.army.mil (H.A. Priest); paula.durlach@us.army.mil (P.J. Durlach); cheryl.i.johnson@us.army.mil (C.I. Johnson).
http://dx.doi.org/10.1016/j.chb.2012.10.007
Formative feedback can vary in its level of specificity, which determines how much detail is presented in a feedback message (Goodman, Wood, & Hendrickx, 2004). Formative feedback can range from very specific and detailed, to very general and vague (Davis, Carson, Ammeter, & Treadway, 2005; Shute, 2008). Detailed feedback provides information that is directive in that it clearly specifies what the trainee needs to revise (Black & Wiliam, 1998; Shute, 2008). General feedback is less directive and relies more on trainee inference regarding revision of behavior (Black & Wiliam, 1998). Shute (2008) concluded that detailed feedback may be more effective than general feedback, but also suggested that this is more of a basic guideline and may not be true in all situations. For example, researchers have shown that highly specific feedback can be beneficial for training while the student is inexperienced on a particular task (Davis et al., 2005), but can hinder enduring knowledge and performance on transfer tasks in GBT (Goodman, Wood, & Chen, 2011; Goodman et al., 2004).

Fortunately, game-based environments offer opportunities for feedback to adapt to the individual as they go, much more so than in less interactive training (e.g., written material, large classroom lectures). In fact, GBT can provide a continuous source of feedback so that trainees can track their own progress towards a goal, which is crucial since feedback improves learning through both its informational and motivational qualities (Bransford, Brown, & Cocking, 1999; Salas & Cannon-Bowers, 2000). However, while performance feedback is common in GBT and is considered an essential factor to learning within these environments (Ricci, Salas, & Cannon-Bowers, 1996), the relative effectiveness of different types of feedback within GBT remains an open question. The goal of the present experiment was to investigate how different types of feedback (detailed vs. general) influence acquisition of new task procedures in a game-based training environment. We hypothesized that the appropriate level of specificity might depend on the expertise level of the trainee. Therefore, besides examining the effect of detailed vs. general feedback, we also investigated the effect of adapting the feedback content based on the trainee's level of performance on the preceding task mission.

2. Theoretical background

2.1. Cognitive load

Games and VEs are attractive for training because they can replicate artifacts and situations important to schema development, but they also have the potential to create additional levels of cognitive load (compared to live training). Cognitive load theory (CLT) is largely based on the assumption of a highly limited working memory and a vast long-term memory where schemas are stored and recalled when needed. CLT may help explain why detailed feedback may be more effective than general feedback in some situations but not others. According to CLT, there are different types of cognitive load that are derived from the material-to-be-learned (intrinsic), the training environment (extraneous), and the trainee. The information being learned has an intrinsic cognitive load (i.e., the degree to which each item or procedure can be learned individually and independently of the others; Sweller, 1994), which can determine how difficult it is to learn and perform. Learning how to play the game is a task on its own and may represent an increase in extraneous cognitive load, which can lead to a less optimal learning environment (Rey & Buchwald, 2011; Sweller, 1994).

Feedback relevant to the task being trained may help guide learning and schema development for the overall task, potentially lowering the cognitive load of the task while learning is taking place. Without specific guidance or instruction during new task training, trainees may find the training overwhelming due to both the interaction with the game and no clear direction as to how to perform the task correctly. On the other hand, as mastery increases, providing feedback on information the student already knows can be distracting and add unnecessary cognitive load. Information that is relevant and beneficial for a novice may result in extraneous cognitive load for an expert (Kalyuga et al., 2003; Paas, Renkl, & Sweller, 2003). The implication is that novices may need a lot of detailed feedback, but that level of detail may also need to be decreased as skill increases (i.e., the expertise-reversal effect; Kalyuga et al., 2003). This is a tactic used in one-on-one tutoring. Typically the tutor adapts the amount of information provided based on the learner's real-time performance (Gallien & Oomen-Early, 2008). Too much support from the tutor can induce reliance on support; therefore a good tutor adjusts the amount of detail provided as student mastery increases.

2.2. Adapting feedback in training

The current study expanded on Billings' (2010) experiment on adaptive feedback. In Billings' study, participants were trained to perform a search and identify task, first by reading about the procedures of the task, then performing the task in a virtual game environment comprised of a small town with numerous buildings. Participants received feedback based on their errors at the end of each mission. There were three static feedback conditions (detailed, general, and outcome/control) and two adaptive feedback conditions. One of the adaptive groups was given detailed feedback that switched to general feedback as scores improved past a set criterion. The other adaptive group started with general feedback that changed to detailed feedback if performance failed to improve from the previous mission score. Results indicated that the detailed and both adaptive conditions performed better than the control group and showed performance improvements over time. Additionally, the detailed-to-general condition was able to reach a higher level of performance at a quicker rate than the general-to-detailed group, indicating that highly detailed feedback early in the learning process may lead to faster learning.

2.3. The present experiment

The current effort included similar conditions but expanded on the Billings study by implementing more refined and stringent criteria for adaptation within the adaptive feedback conditions. Similarly, the feedback given to participants consisted of either static or adaptive feedback. The static feedback remained consistent in level of detail throughout training: (1) "Detailed" static feedback is specific information provided to the trainees regarding what tasks they are performing incorrectly, and (2) "General" static feedback is vague information that only states the learning area(s) in which the trainees are committing errors. These descriptions are consistent with previous research on feedback specificity effects on learning (Black & Wiliam, 1998; Shute, 2008). Participants in the static general condition in this study were given the option to review the training manual after each experimental session. This change is described in more detail below.

The present experiment also examined methods of adapting feedback based on different initial levels of specificity. Detailed-to-general (DG) adaptive feedback provided highly detailed feedback on any errors that were made; then, as trainee competency increased or remained above the performance criteria, the type of feedback switched to general. The rationale for providing DG feedback is based on scaffolding and CLT, which support the notion that novices benefit most from initial detailed feedback, leading them step by step through a process, which results in better schemata creation (Clark, Nguyen, & Sweller, 2006).
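The switching rules used in Billings' adaptive conditions, as described above, can be sketched in code. The following Python sketch is illustrative only, not the study's implementation: the percent-correct criterion value is a placeholder assumption, and all function and variable names are invented for illustration.

```python
# Hypothetical sketch of the two adaptive-feedback rules described above.
# CRITERION is a placeholder; the actual threshold used in the study is
# not reproduced here.

CRITERION = 85.0  # placeholder percent-correct threshold (assumption)


def next_feedback_dg(current_type: str, score: float) -> str:
    """Detailed-to-general (DG): start detailed, then switch to general
    once the mission score meets or exceeds the criterion."""
    if current_type == "detailed" and score >= CRITERION:
        return "general"
    return current_type


def next_feedback_gd(current_type: str, score: float,
                     previous_score: float) -> str:
    """General-to-detailed (GD): start general, switch to detailed if the
    score fails to improve on the previous mission, and return to general
    once performance rises above the criterion."""
    if current_type == "general" and score <= previous_score:
        return "detailed"
    if current_type == "detailed" and score > CRITERION:
        return "general"
    return current_type
```

For example, under this sketch a DG participant scoring 90 on a mission would receive general feedback on the next mission, while a GD participant whose score dropped from 75 to 70 would be switched to detailed feedback.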

General-to-detailed (GD) adaptive feedback first provided general feedback to the trainee on any errors that were made; if scores did not improve during the subsequent mission, detailed feedback on the specific errors in performance was provided for the remaining training exercises until performance rose above the predetermined criteria. This adaptive feedback condition was included because it tells us more about the importance of how feedback is adapted. For example, if both DG and GD lead to better performance, then perhaps just changing the feedback leads to better learning.

Given what is known about the differences between feedback styles and the research conducted on the differing levels of feedback content, we made the following predictions about receiving feedback during GBT.

2.3.1. Hypothesis 1

Receiving detailed feedback earlier in training will lead to better performance scores than receiving only general or outcome feedback. Therefore, we predict that (H1) DG feedback will result in increases in performance above the GD group and all other feedback groups, lending support to CLT in adaptive feedback. Additionally, (H1.1) any training group receiving detailed feedback at some time during training will improve performance more quickly than those receiving general or outcome feedback (i.e., those not receiving detailed feedback of any kind).

2.3.2. Hypothesis 2

Since general feedback did not provide any information on specific errors participants made during training in Billings' (2010) original study, we predicted that participants were simply forgetting what they had read in the training manual during the initial training phase of the experiment. Participants in the general feedback conditions were offered the option of reviewing the training manual in between GBT sessions, immediately after receiving general feedback. We propose that (H2) participants in the general condition who chose to look at the manual would perform better over time than those who did not.

3. Method

3.1. Participants

The experiment was completed in its entirety by 66 men and 38 women. The mean age was 22.91 years. Participants were recruited from a large university in the south-eastern U.S. and the surrounding community. Recruitment was accomplished through wide-area announcements and advertisements on web-based recruitment forums. Participants were either paid $10 per hour or received course credit for participation. All completed general demographic questions, including questions about video game experience (VGE).

3.2. Design

A two-factor (feedback × mission) mixed design was used in the main experiment. Feedback specificity was manipulated between participants on five levels: Static-Detailed feedback, Static-General feedback, Adaptive Detailed-to-General (DG) feedback, Adaptive General-to-Detailed (GD) feedback, and outcome-only feedback (Control). Mission was manipulated within subjects, and there were four levels.

Participants were randomly assigned to one of the five treatment groups, and performance was measured by assessing how well each participant learned and executed the task procedures throughout each of four training sessions.

3.3. Materials

3.3.1. Apparatus

Participants interacted with two desktop computers. One computer ran the Game Distributed Interactive Simulation (GDIS) virtual environment, a first-person shooter video game developed from a modified version of the retail video game Half-Life 2®. The GDIS environment consisted of a small town with approximately 18 buildings and two main roadways that intersected at the north side of the town. Participants were asked to learn how to perform the procedural tasks for conducting a proper search and identification mission. A second computer was used to send and receive text messages from headquarters (HQ) and to display post-mission feedback to the participants.

3.3.2. In-game performance monitoring

A scoring protocol was created by the experimenters prior to data collection to calculate a percent-correct score for training within the GBT. Each occurrence at which a participant could make a choice in behaviors associated with the experiment was assessed and documented by two of the researchers. These occurrences were compared, and a final scoring protocol was established so that performance scores would be similar between scorers. All sessions were monitored at an observer station where a researcher recorded whether or not participants properly followed the search and identify task procedures at the appropriate opportunities. A percent-correct score was calculated based on the number of opportunities to perform a correct action, correctly or incorrectly performing those actions, and whether or not the entire mission was completed in the time allotted. This score was pushed to the participant's texting computer by the experimenter at the appropriate time after the mission was completed. The calculated score also determined the type of feedback that participants received for the next mission if they were in one of the adaptive conditions.

3.3.3. Training manual

The training manual consisted of 16 instructional slides, two per page, and one title page. The manual contained information about the premise of the participant's role in the experiment and detailed information regarding the proper procedures to follow for each search and identify mission. All of the pages were bound within a standard 1-in. binder.

3.3.4. Video game experience (VGE)

Research suggests that VGE is related to better performance in virtual environments (Richardson, Powers, & Bousquet, 2011). Therefore, VGE scores were derived from four individual demographic questions, rated from 1 to 5 (e.g., 1 – no experience or exposure; 5 – high experience or exposure), that consisted of self-reports of how experienced one was with video games, how often one played generally, how confident one was with video games, and how often one played first-person shooter games, specifically. A majority (75%) owned video game systems, and 66% of participants reported playing first-person shooter games at least once a month. Overall, participants reported moderate VGE (M = 2.59 out of a possible 5, SD = .97).

3.3.5. Cognitive load and workload

Cognitive load was measured using the single-item, subjective-response Cognitive Load Questionnaire (CLQ; Paas, 1992). The CLQ asks participants to rate their individual level of mental effort on a 9-point Likert scale, ranging from "very, very low mental effort" (1) to "very, very high mental effort" (9).

Workload was measured after each individual mission by responses on the paper version of the Hart and Staveland (1988) NASA-TLX form. Items included mental demand, physical demand, temporal demand, perceived performance, effort level, and frustration. Individuals made a tick-mark along a scale with 21 gradients. Individual items and average scores were used to determine levels of workload.

3.3.6. Knowledge pre- and post-tests

Participants were given pre-test (baseline) and post-test knowledge quizzes to assess comprehension of the training materials used in the study. Ten multiple-choice questions asked participants about the search and identify task procedure. The pre-test was given prior to participants receiving any information about the task. The post-test was given after all missions were concluded.

3.4. Procedures

Participants were first asked to give consent for participation and given an opportunity to ask questions regarding the experiment. Then they filled out the demographic, VGE, and pre-test questionnaires.

Participants were then trained on the operation of the computers that they would use for the experiment. They were then given up to 15 min to read and learn the search and identify task procedures. The procedures were organized into three specific terminal learning objectives: entering and exiting buildings, proper methods for searching buildings, and communication procedures. Participants were allowed to ask the experimenter for clarification of a specific rule or procedure during the 15-min period. However, questions were not allowed once the first mission began.

Once finished reading the training manual, and immediately prior to beginning each of the subsequent missions, participants were provided with a one-page mission briefing that presented them with the premise and reason for the search and identify mission. The briefing stated that they were to assume the role of a Forensics Officer who was given the task of searching a particular selection of buildings in the town for a missing Alzheimer's patient. The briefing also covered which buildings they were responsible for searching and presented them with a description and photograph of both the Alzheimer's patient and any additional target items that needed to be reported. A maximum of 10 min was allotted for each mission.

Upon completion of an individual mission or time expiring, whichever occurred first, participants first completed the NASA-TLX short form. Then the experimenter delivered feedback electronically via the text message computer. The content of the feedback was determined by the condition to which the participant was assigned, as well as the performance of the participant (see Table 1 for examples). Every condition included a percentage-correct score in the feedback dialogue boxes. After reading the feedback, participants were asked to close the feedback dialogue box and completed the NASA-TLX and CLQ. Participants in the general condition were also given the option of looking back at the training material after receiving their feedback. The researcher sent a text message asking them if they wanted to look at the manual again. They could choose to look or not look, and their answers were logged in the system and noted by researchers. After this process was completed, all participants were given the briefing for the next mission. This was repeated for each of the sessions. When the final mission was finished, participants were asked to complete a final post-knowledge test of the search and identify procedures. No performance feedback was given after the last mission.

4. Results

Preliminary analyses were conducted to determine if significant differences existed on any of the pre-test or demographic measures that may have affected mission performance between groups. No significant differences were found between conditions for any of the demographic or video game variables. Mission 1 performance scores were not significantly different between feedback conditions. This was expected, as no group had received any type of feedback intervention until after Mission 1. Based on Billings' (2010) findings, the measure of VGE was an expected covariate for the analysis. However, it failed to meet all the required assumptions of covariates and was left out of the main analyses.

4.1. Manipulation checks

Scores were examined to determine when adaptation occurred in the two adaptive conditions (i.e., the mission on which feedback switched from detailed to general, or from general to detailed, depending on condition). Every participant in the two adaptive conditions had a switch in feedback type, although the mission on which the switch occurred varied according to their performance. Thus, the adaptive conditions did adapt and were in fact different from the static conditions (i.e., Detailed and General).

4.2. Comparison of workload

No significant differences existed on TLX or CLQ scores between any of the five feedback conditions for any mission.

4.3. Comparison of feedback conditions on mission performance

Table 2 shows the mean performance scores for each condition over all missions. A mixed between-within ANOVA was conducted to assess the performance differences between each of the five

Table 1
Description of experimental conditions and feedback content.

Detailed: Feedback is always of high specificity. Example: "Before entering a building, remember to walk around it to make sure it is not already tagged."
General: Feedback is always of low specificity. Example: "Remember to follow the correct procedures for communicating with HQ."
Adaptive General-to-Detailed: Initial feedback is of low specificity, but increases in specificity if the performance criterion is not reached. Initial: "Remember to apply the correct procedures for searching buildings!" Subsequent(a): "If a building has multiple floors or multiple sections, you should text HQ when a floor or section is cleared!"
Adaptive Detailed-to-General: Initial feedback is of high specificity, but decreases in specificity if the performance criterion is met. Initial(a): "Remember to include both the building number and the name of the item found when reporting to HQ." Subsequent: "Remember to follow the rules for Learning Objective 1."
Control: Does not receive any formative feedback throughout the experiment.

(a) If scores match the criterion for feedback adaptation.
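The percent-correct score described in Section 3.3.2 might be computed along the following lines. This Python sketch is an assumption for illustration only: the paper does not give an exact formula, and treating on-time mission completion as one additional scored opportunity, like every name below, is hypothetical.

```python
# Hypothetical sketch of the percent-correct scoring in Section 3.3.2.
# The exact weighting used by the researchers is not specified; this
# simple proportion is an illustrative assumption.

def percent_correct(opportunities: int, correct_actions: int,
                    completed_in_time: bool) -> float:
    """Return a 0-100 score from scored action opportunities plus an
    assumed extra opportunity for completing the mission on time."""
    total = opportunities + 1  # +1 opportunity for on-time completion
    earned = correct_actions + (1 if completed_in_time else 0)
    return 100.0 * earned / total
```

Under this sketch, a participant who performed 9 of 9 scored actions correctly and finished within the 10-min limit would score 100, while one who performed 6 of 9 correctly and ran out of time would score 60.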

Table 2
Means and standard deviations for performance scores for all conditions over all missions.

Condition Mission 1 Mission 2 Mission 3 Mission 4


M (SD) M (SD) M (SD) M (SD)
Detailed 76.58 (12.02) 87.53 (8.30)a 92.68 (6.18) 90.84 (5.91)b
General 80.68 (10.26) 81.32 (13.35) 85.00 (10.26) 86.64 (10.20)
Adaptive DG 78.09 (12.16) 84.75 (10.03) 91.91 (5.42)a,b 89.83 (7.28)b
Adaptive GD 80.14 (7.93) 75.90 (8.51) 88.86 (8.18)a,b 90.48 (11.63)b,c
Control 75.83 (15.38) 73.17 (13.76) 76.17 (10.16) 76.78 (9.70)

Note: Superscripts indicate within group (feedback condition) significance of mission performance.
a
Sig. different at p < .05 than the mission prior.
b
Sig. different at p < .05 of Mission 1.
c
Sig. different at p < .05 different than Missions 1 and 2.

feedback conditions over each mission. The Greenhouse–Geisser


correction for degrees of freedom was used when needed due to
tests of sphericity being violated, as noted with an asterisk in text.
Results revealed significant main effects of both mission
(F(2.52, 246.84) = 36.94, p < .001) and condition (F(4, 98) = 6.60,
p < .001), as well as a significant mission by condition interaction
(F(10.08, 246.84) = 4.62, p < .001). In general, performance in-
creased across missions, but the level of improvement varied by
condition. These differences are presented below.

4.3.1. Mission performance within conditions


Post hoc tests using a Bonferroni correction on the mission level
indicated that performance improved over missions for the static
Fig. 1. Feedback group performance over all missions. Mean score values for
detailed, DG and GD conditions, but failed to do so for the static
missions representing the progression of each feedback groups’ performance
general and control conditions (see Table 2). Compared to their through the experimental sessions. While groups that received highly detailed
Mission 1 performance, the Detailed condition performed signifi- feedback performed better at a quicker pace, other feedback groups were equally
cantly better on Mission 2 (p < .002), Mission 3 (p < .001), and Mis- able to improve their performance by the final mission, aside from the control.
sion 4 (p < .001), but scores between Missions 2 and 3, were not Standard errors are represented by the error bars.

significantly different (p = .144 and p = 1.00, respectively).


The DG condition showed significant improvement in mission performance on Mission 3. Mission 3 performance was significantly better than both Mission 1 (p < .001) and Mission 2 (p = .004). Mission 4 performance was also higher than Mission 1 (p < .001), but not significantly different than Missions 2 or 3 (p = .086 and p = 1.00, respectively).

Similarly, the adaptive GD group did not show significant improvement in performance until Mission 3. Mission 3 performance scores were significantly higher than both Mission 1 (p = .002) and Mission 2 (p < .001). There was no difference in scores between Missions 3 and 4 (p = 1.00); however, Mission 4 scores were significantly higher than both Missions 1 (p < .001) and 2 (p < .001). Neither the general (F(3, 96) = 3.55, p > .05) nor the control (F(3, 96) = 0.83, p > .05) conditions had any significant changes in performance across missions.

4.3.2. Mission performance between conditions

Fig. 1 presents a graph of the simple main effects over all missions. Post hoc pairwise comparisons were conducted with Bonferroni correction. Scores between feedback groups on Mission 2 were significantly different from one another (F(4, 98) = 5.73, p < .001), with the Detailed condition performing significantly better than the GD (p = .012) and control (p = .001) conditions. The DG condition also performed significantly better than the control (p = .012). No other comparisons were significantly different from one another for Mission 2.

There were also significant differences between conditions on mission performance in Mission 3 (F(4, 98) = 12.67, p < .001). The Detailed condition performed significantly higher than both the General (p = .036) and control (p < .001) conditions. The general, DG, and GD groups all performed significantly higher than the control group (p = .011, p < .001, p < .001, respectively). No other comparisons were significantly different from one another for Mission 3.

Mission 4 comparison results also revealed significant differences between conditions (F(4, 98) = 9.10, p < .001). All conditions performed significantly better than the control. No other comparisons between conditions on Mission 4 performance were significantly different from one another.

The interaction effects are seen in the further planned comparisons between the DG and GD groups. While both adaptive feedback groups performed statistically the same on Missions 1 (p = .515), 3 (p = .149), and 4 (p = .75), the GD group score showed an initial drop in performance and was significantly lower than the DG group on Mission 2 (F(1, 42) = 9.83, p = .003). The GD feedback group's score actually dropped on Mission 2, although this difference was not a significant change from the GD group's performance on Mission 1. People in the GD condition performed worse on Mission 2 than people in the DG condition, who improved their scores on Mission 2.

These findings offer partial support for Hypothesis 1. Specifically, the adaptive DG feedback group did improve faster than the general and control conditions; however, the DG and static detailed conditions increased at a similar rate. The GD group performed worse than the DG group initially, but was able to match DG performance by Mission 3. Furthermore, while the DG feedback condition did result in significantly better mission performance over the General and control conditions (i.e., conditions not receiving any type of detailed feedback), it did not show significant improvement over the GD and Static Detailed feedback conditions (i.e., conditions that received some type of detailed feedback). These results also partially support Hypothesis 1.1 in that the DG feedback condition was significantly higher than the GD group on Mission 2 but not on any other mission. This suggests that
S.R. Serge et al. / Computers in Human Behavior 29 (2013) 1150–1158 1155

initially receiving detailed feedback may lead to faster learning of the training material.

4.3.3. Pre–post tests

Pre and posttest means between groups were examined, and comparisons were made based on condition using the nonparametric Kruskal–Wallis test. As expected, pretest scores were not significantly different between each group. Posttest scores (M = 8.65, SD = 1.23) showed a significant increase from pretest (M = 2.01, SD = 1.15) scores (t(103) = 41.56, p < .001). Table 3 shows results comparing scores between conditions for posttest scores. The Kruskal–Wallis test was significant (χ²(4, N = 104) = 13.89, p = .008). However, individual comparisons using Mann–Whitney U and Wilcoxon tests indicated only one significant difference between all groups; the Detailed feedback group scored significantly higher than the control (z = 3.498, p < .001).

Table 3
Means and Mann–Whitney U tests between each feedback condition and the control on posttest scores.

Condition             N    Mann–Whitney U   Wilcoxon W   z       p
Detailed vs. Control  37   61.500           232.500      3.498   .001
General vs. Control   40   172.000          343.000      .741    .459
DG vs. Control        41   139.500          310.500      1.836   .066
GD vs. Control        40   146.000          317.000      1.480   .139

4.4. Training manual effects for the general feedback condition

Participants in the general condition had the option of reviewing the training manual after each of the first three missions. This was done to examine whether people in this condition were simply forgetting the information from the training manual and whether the general-level feedback was not enough to elicit recall. Fifty percent of the participants in this condition never opted to review the manual again, and 18% opted to do so once. The rest consulted the manual two or three times. For purposes of analysis, the general feedback group was split into two subgroups. People that consulted the manual two or three times were assigned to a General Manual group (N = 7) and the rest were assigned to a General No-Manual group (N = 15). A repeated-measures ANOVA was conducted to first analyze the differences between the manual group (manual) and the no-manual group (no-manual). There was a significant main effect of mission (F(2.08, 41.65) = 5.78, p = .002) and a significant mission by condition interaction (F(2.08, 41.65) = 4.23, p = .020). Table 4 presents descriptive statistics for each new group and their mission performance.

Further examination of the effects revealed that the entire mission effect was a result of the manual group's mission performance over time (F(3, 18) = 11.224, p < .001; see Fig. 2). The no-manual group did not significantly change their performance over the four missions (p = .397). Examining post hoc tests from the ANOVA, using Bonferroni correction, revealed that the manual group's Mission 3 (p < .001) and Mission 4 (p = .001) performance scores were significantly better than Mission 1 performance scores. Mission 2 scores were also higher than Mission 1 scores, but this difference was not significant. Examining the differences between groups on missions using t-tests revealed that the manual group outperformed the no-manual group by Missions 3 and 4 (ts > 2.69; Bonferroni corrected ps < .05). The no-manual group's performance remained consistently low throughout the four missions while the manual group's performance increased.

4.4.1. CLQ, workload, and VGE

Since reviewing the manual was a choice for people within this condition, and only some chose to do so, further analyses revealed that differences existed on some VGE and mental demand workload scores between the manual and no-manual groups. First, people choosing to review the manual (M = .833, SD = .892) had significantly higher ratings on VGE than those that did not (M = .019, SD = .793; t(20) = 2.395, p = .027). It also appeared that participants choosing to review the manual (M = 6.29, SD = 3.25) had significantly higher ratings on the mental demand scale of the NASA TLX than those that did not (M = 4.18, SD = 1.46; t(20) = 2.87, p = .010). Implications of this are discussed later.

These results support Hypothesis 2 in that looking at the manual helped those in the general feedback condition improve their performance over time.

4.4.2. Exploratory comparison to other feedback groups

After completing the initial analyses, an additional analysis was used to examine how reviewing the manual in the general feedback group compared to all other groups (Fig. 3). Results showed that, over all missions, the manual group did not differ significantly from the detailed feedback group (p = .409). The manual group did, however, show significant improvement over both the control and the no-manual group (p < .001, p = .028, respectively) by Mission 3 and maintained this trend vs. the control group in Mission 4 (p < .001).

Likewise, results also showed that, over all missions, the no-manual group did not differ significantly from the control group (p = .522). The no-manual group's performance scores were significantly lower than every other condition's scores, aside from the control, on Mission 3. However, scores on every other mission, including Mission 4, were not statistically different from every other feedback group.

These results shed some interesting light on how effective access to the manual can be when trainees are only given a vague level of performance feedback. Participants who looked at the manual did not differ significantly from any other condition that received detailed feedback (i.e., Detailed, adaptive DG, adaptive GD). Referring to the manual enabled trainees to increase their performance to a level comparable with the detailed feedback groups and significantly higher than the control. The no-manual group scores were consistently similar to those of the control group over all missions. However, it is important to note that the no-manual group scores were only significantly lower than other feedback groups on Mission 3. No-manual group scores, while lower, were not significantly different from the other feedback group scores on Missions 1, 2, and 4.

5. Discussion

This study sought to investigate how different styles of feedback, particularly adaptive feedback, within a GBT system affect a trainee's performance within that environment. Ultimately, the goal of this experiment was to provide answers to three broad questions regarding feedback in GBT. First, is adapting feedback based on an individual's performance better than providing static feedback in GBT? We predicted, based on CLT and previous research, that DG feedback would result in better learning outcomes and performance than all other feedback styles. Second, does receiving detailed feedback at some point in GBT lead to better training? In other words, we predicted that participants who received detailed feedback at some point would perform better than if they did not. Third, why is detailed feedback typically more effective than general feedback? Participants in the general condition were given the option of refreshing their memory to see if forgetting could possibly explain why general feedback has been found to be less effective.

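The nonparametric follow-up tests reported in Section 4.3.3 (Mann–Whitney U comparisons after a significant Kruskal–Wallis test) can be illustrated with a small self-contained sketch; the posttest scores below are hypothetical, not the study's data:

```python
def mann_whitney_u(a, b):
    """U statistic for group a over group b: count, across all cross-group
    pairs, how often a score in a exceeds one in b (ties count one half)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical posttest scores (illustration only).
detailed = [9, 10, 8, 9, 10]
control = [6, 7, 5, 8, 6]

u_detailed = mann_whitney_u(detailed, control)
u_control = mann_whitney_u(control, detailed)

# The two U statistics always sum to n1 * n2.
assert u_detailed + u_control == len(detailed) * len(control)
print(u_detailed)  # 24.5 of a possible 25: detailed scores almost always higher
```

In practice the statistic would then be referred to its null distribution (or a normal approximation) to obtain a p-value, as statistical packages do.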
Table 4
Means and SDs for the manual and no-manual groups created from the general feedback condition.

Condition   N    Mission 1 M (SD)   Mission 2 M (SD)   Mission 3 M (SD)   Mission 4 M (SD)
Manual      7    79.00 (7.767)      87.43 (12.109)     92.86 (7.904)      94.14 (3.625)
No-manual   15   81.47 (11.401)     78.47 (13.303)     81.33 (9.271)      83.13 (10.439)

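For readers implementing a similar scheme, the performance-contingent switching used for the adaptive DG and GD conditions can be sketched as follows. This is a simplified reading of the procedure: the function name, the group labels as strings, and the use of a previous-score comparison as the "performed poorly" criterion are our own assumptions, not the authors' implementation.

```python
from typing import Optional

def next_feedback(mission: int, score: float,
                  previous_score: Optional[float], group: str) -> str:
    """Feedback level delivered after the given mission (sketch only).

    After Mission 1, the level is fixed by group: DG starts detailed,
    GD starts general. From Mission 2 onward, feedback adapts to
    performance: improvement over the previous mission yields general
    feedback, while a decline (or no improvement) yields detailed feedback.
    """
    if mission == 1:
        return "detailed" if group == "DG" else "general"
    if previous_score is not None and score > previous_score:
        return "general"
    return "detailed"

# A GD participant whose score drops on Mission 2 and recovers on Mission 3:
print(next_feedback(1, 80, None, "GD"))  # general (fixed by group)
print(next_feedback(2, 72, 80, "GD"))    # detailed (score dropped)
print(next_feedback(3, 88, 72, "GD"))    # general (score improved)
```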
Our first research question examined whether it was more effective to provide adaptive or static feedback. Contrary to our predictions, while the DG feedback group maintained a statistically higher score than the control over all post-intervention missions, this group did not outperform the other feedback groups throughout training. In fact, the DG group performed very similarly to the Detailed feedback condition throughout training and only performed better than the other adaptive condition (GD) on Mission 2. The immediate benefits of DG and detailed feedback were apparent, but GD feedback also proved equally suitable for higher performance over the entire experimental session. Specifically, after the initial feedback intervention, scores between the GD group and both the DG and detailed feedback groups diverged. Individuals in the GD condition showed an initial decrease in score from Missions 1 to 2, which is opposite of the performance changes for the Detailed and DG groups. However, they were able to improve their scores dramatically on Mission 3, while leveling out performance on Mission 4.

There are a number of possible explanations for the divergence of scores on Mission 2. Immediately following Mission 1, the DG group was provided with detailed feedback and the GD group given general feedback, because initial feedback was independent of individual performance. Performance became the criterion for adapting feedback for the DG and GD groups following Missions 2 and 3. This means that the GD group received general feedback after Mission 1; however, if they performed poorly on Mission 2, they received detailed feedback after completing Mission 2. If their scores on Mission 3 were higher than on Mission 2, they received general feedback; if scores on Mission 3 were lower than on Mission 2, they continued to receive detailed feedback.

Fig. 2. Manual effects within the general feedback group. Direct comparison of the manual and no-manual feedback conditions. Reviewing the manual was related to consistent improvement in performance scores over missions. The no-manual group did not show any significant improvement over time.

Research on detailed feedback supports our finding that the Detailed and DG groups' performance improved significantly after their initial experimental session, because they were provided with specific details about the errors they made within that mission. This may have acted as an immediate "refresher" of sorts for the training material that they might have forgotten. On the other hand, the GD group was not provided with specific details of their errors; rather, they were only given information regarding the learning objectives on which they were underperforming. Still, overall GD performance improved after receiving detailed feedback following Mission 2 and possibly Mission 3, depending on their performance. While we found no clear benefit to adaptive feedback over four relatively brief training missions, we still predict that adaptive feedback may be beneficial in certain circumstances, over longer trials, or with regards to transfer. However, it is still unclear which type of adaptive feedback is better for training in GBT systems over longer periods of time and what impact it may have on learning and retention.

Hands-on practice in a virtual environment, with little or no feedback, appears to lead to lower performance. This performance decrement could indicate that forgetting is taking place, that too much information is being presented at once (i.e., a high number of procedures in the training manual), or that some combination of both is resulting in greater-than-optimal mental workload, which has been shown to lead to decrements in performance and learning in these situations for certain learner types (Davis et al., 2005). It is also possible that initial training of procedures, through a training manual or book, is simply not enough for learning to take place during the early stages of training due to the high levels of effort often required (Eiriksdottir & Catrambone, 2011). If this is indeed the case, then perhaps it is unnecessary to provide a large amount of procedural information before training in game-based systems begins.

Fig. 3. Expanded feedback group performance with new general feedback classifications. This figure includes the results from the new general feedback conditions (i.e., manual and no-manual groups). Here, we can see how the new group scores compare to the other feedback groups, as well as the control. The manual group tended to perform very similarly to both the detailed and DG groups, while the no-manual group's performance was very similar to the GD group for Mission 2 and to the control over all missions.

This leads us to the second question: does receiving detailed feedback at some point in GBT lead to better training? We predicted that any group receiving detailed feedback would perform better than those that did not. This was only partially supported. When comparing all feedback groups, participants that were directly exposed to some level of detailed feedback did, at some point, significantly improve their performance scores as missions progressed. This prediction falls short in that, while the DG, GD, and Detailed feedback conditions performed significantly better than the control condition, the General feedback condition did eventually get better, so that by Mission 4 they were not significantly worse than the detailed conditions. In essence, results relevant to the first two questions indicated that some level of

detailed feedback, adaptive or self-directed, is beneficial during the training process, and detailed feedback up front leads to immediate improvement. However, it appears that those receiving detailed-level feedback at any point during the gaming session (i.e., the Detailed, DG, and GD groups) were eventually able to perform comparably to each other.

In terms of the third question, we predicted that trainees were simply forgetting what they had read in the manual, therefore leading to poorer performance from the General feedback group. In other words, participants who received detailed feedback were reminded of what they forgot, or failed to learn, and improved their performance. On the other hand, trainees who received general feedback learned that they were underperforming on some objective, but never learned the exact procedures they were forgetting, so they never improved. To explore this and expand on Billings' (2010) original findings, the general feedback group was divided into those that reviewed the manual, which was an addition to Billings' original general feedback condition, vs. those that did not, which mimicked Billings' original manipulation. Reviewing the manual helped trainees improve their scores and perform similarly to those receiving detailed or adaptive feedback. The no-manual group tended to have much lower scores, similar to those of the control. In this sense, the no-manual group acted as a general-feedback-only condition while the manual group acted as a self-intervention group. These findings lend support to the notion that general feedback, alone, is ineffective. General feedback does not seem to provide support for learning. If people choose to access the training material during training sessions, even without being told their specific errors, it is possible for them to correct errors and improve their performance. These findings also support the idea that trainees may not be very good regulators of their own learning (e.g., Bjork, 1999; Kornell & Bjork, 2007). Some may have simply been unaware how beneficial looking at the manual was to improving performance.

A number of factors may contribute to the differences observed between these groups. First, there were some interesting findings in the comparison of results for the split general feedback groups. The group choosing to review the manual was able to perform similarly to the detailed and both adaptive feedback groups. Those choosing not to review the manual performed more similarly to the control. However, the no-manual scores were only lower than the detailed or adaptive condition scores on one mission (Mission 3). Despite this, no-manual group scores were consistently lower and did not improve over time, opposite that of the other feedback groups. Earlier predictions stated that those receiving some level of detailed feedback would perform better than others receiving general or no feedback. In this case, the manual may have acted as a source of detailed feedback, especially considering that it contained all of the procedural information needed for the task.

A second explanation may be found in VGE scores between the manual groups. VGE was found to account for differences between individuals who chose to review the manual and those who did not. It is possible that those who reported more video game experience chose to review the manual because they were not as concerned with the actual game-play and controls used in the GBT system. The unfamiliarity of the mouse and keyboard combination for controlling the avatar in the game may have been overwhelming to people with low gaming experience. Those with less gaming experience may have been more focused on learning how to play the game rather than learning the proper task procedures within the training game, which raises some concerns regarding the deployment of GBT systems (see Adams, Mayer, MacNamara, Koenig, & Wainess, 2012). This finding is also consistent with Sweller's cognitive load theory and may help explain why general feedback is not as effective as detailed feedback. Namely, for less experienced video game players, the game itself presents extraneous cognitive load, leaving fewer resources available for learning the material, resulting in poor task performance.

The results of the present research are significant in a number of ways. First, they present some evidence that general feedback alone may not be enough to elicit recall of recently trained task procedures without some level of prior task knowledge or some other type of intervention, such as access to the training manual (Smits, Boon, Sluijsmans, & van Gog, 2008). These findings may also provide some explanation as to why general feedback has been found to be less effective than detailed feedback: trainees may simply have forgotten the information, and providing general feedback is not enough to prompt them. These results indicate that providing any additional detail about errors, including the ability to review the manual, leads to better performance and promotes recall of poorly learned or forgotten information.

5.1. Study limitations and future research

While these results present some significant steps forward in the investigation of feedback in GBT, there are also some limitations that should be addressed. The lack of significant differences in cognitive load between the conditions could be reflective of both the task and measures used. While feedback was shown to generally improve performance for most feedback groups at some point throughout the training, the task itself may not have been difficult enough to elicit higher levels of cognitive load, which was suggested to lead to differences in performance. Furthermore, the use of the NASA-TLX may have been inadequate for determining the workload of users, which would support predictions based on CLT. Future research should utilize tasks of differing difficulty levels and a more effective method of assessing workload to determine if improvements in performance were due to performance feedback, rather than exposure to the game itself.

This experiment examined how well a person learned to perform a task within a virtual, GBT environment given different feedback interventions. Despite finding significant differences in performance between conditions, there was no direct measurement of transfer or knowledge retention after training. Opportunities for actual transfer tasks tend to be rare, but when possible, future research should include these tasks to accurately measure the effectiveness of the training with different feedback styles. In addition, individual difference factors, such as gaming experience or learning type, and their impacts on learning within GBTs may also be a worthwhile research topic.

One challenge of conducting research on adaptive training is determining how to adjust the performance criteria used to determine feedback content for the adaptive conditions. While the performance criteria used in this experiment were more carefully calibrated than those used in Billings' (2010) experiment, it is possible that these criteria were still not sensitive enough. Further research might aid in determining optimal criterion levels; however, the feedback criteria set for a particular task are likely different for another task. More research is needed to help explain where and how criterion levels should be set for each individual task.

6. Conclusions

The goal of this paper was to explore the specificity of feedback messages for GBT systems, with a particular focus on the potential benefits of performance-based adaptive feedback. While this study demonstrated that providing adaptive feedback can be helpful, providing static detailed feedback was just as effective. Certainly, more research is needed over a broader spectrum of GBT tasks in order to provide a clearer picture of how adaptive feedback relates to learning over immediate and long-term applications. In terms of

immediate performance improvements, providing highly detailed feedback (whether adaptive or static) appears to speed up acquisition of procedural steps, leading to less dependence on detailed feedback later in training. Detailed feedback was the most effective way to train individuals under these circumstances.

References

Adams, D. M., Mayer, R. E., MacNamara, A., Koenig, A., & Wainess, R. (2012). Narrative games for learning: Testing the discovery and narrative hypotheses. Journal of Educational Psychology, 104(1), 235–249. http://dx.doi.org/10.1037/a0025595.
Billings, D. R. (2010). Adaptive feedback in simulation-based training (Unpublished doctoral dissertation). University of Central Florida, Orlando, FL, USA.
Bjork, R. A. (1999). Assessing our own competence: Heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance. Interaction theory and application (pp. 435–459). Cambridge, MA: MIT Press.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5, 7–74.
Boot, W. R., Neider, M. B., & Kramer, A. F. (2009). Training and transfer of training in the search for camouflaged targets. Attention, Perception, & Psychophysics, 71(4), 950–963. http://dx.doi.org/10.3758/APP.71.4.950.
Bransford, J., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press for National Research Council.
Clark, R. C., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco, CA: Pfeiffer, A Wiley Imprint.
Davis, W. D., Carson, C. M., Ammeter, A. P., & Treadway, D. C. (2005). The interactive effects of goal orientation and feedback specificity on task performance. Human Performance, 18(4), 409–426. http://dx.doi.org/10.1207/s15327043hup1804_7.
Day, S. B., & Goldstone, R. L. (2011). Analogical transfer from a simulated physical system. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(3), 551–567. http://dx.doi.org/10.1037/a0022333.
Duffy, V. C., Ng, P. P. W., & Ramakrishnan, A. (2004). Impact of a simulated accident in virtual training on decision-making performance. International Journal of Industrial Ergonomics, 34(4), 335–348. http://dx.doi.org/10.1016/j.ergon.2004.04.012.
Eiriksdottir, E., & Catrambone, R. (2011). Procedural instructions, principles, and examples: How to structure instructions for procedural tasks to enhance performance, learning, and transfer. Human Factors, 53(6), 749–770. http://dx.doi.org/10.1177/0018720811419154.
Gallien, T., & Oomen-Early, J. (2008). Personalized versus collective instructor feedback in the online courseroom: Does type of feedback affect student satisfaction, academic performance and perceived connectedness with the instructor? International Journal on E-Learning, 7(3), 463–476.
Goodman, J. S., Wood, R. E., & Chen, Z. (2011). Feedback specificity, information processing, and transfer of training. Organizational Behavior and Human Decision Processes, 115(2), 253–267. http://dx.doi.org/10.1016/j.obhdp.2011.01.001.
Goodman, J. S., Wood, R. E., & Hendrickx, M. (2004). Feedback specificity, exploration, and learning. Journal of Applied Psychology, 89(2), 248–262. http://dx.doi.org/10.1037/0021-9010.89.2.248.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Oxford, England: North-Holland.
Hays, R. T. (2005). The effectiveness of instructional games: A literature review and discussion (NAWCTSD Technical Report 2005-004). Orlando: Naval Air Warfare Center Training Systems Division.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31. http://dx.doi.org/10.1207/S15326985EP3801_4.
Kornell, N., & Bjork, R. A. (2007). The promise and perils of self-regulated study. Psychonomic Bulletin & Review, 14, 219–224.
Lee, A. Y., Bond, G. D., Scarbrough, P. S., Gillan, D. J., & Cooke, N. J. (2007). Team training and transfer in differing contexts. Cognitive Technology, 12(2), 17–29.
Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59(1), 14–19. http://dx.doi.org/10.1037/0003-066X.59.1.14.
Mayer, R. E., & Johnson, C. I. (2010). Adding instructional features that promote learning in a game-like environment. Journal of Educational Computing Research, 42(3), 241–265. http://dx.doi.org/10.2190/EC.42.3.a.
Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia. Instructional Science, 32(1–2), 99–113. http://dx.doi.org/10.1023/B:TRUC.0000021811.66966.1d.
Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 125–144). New York: Lawrence Erlbaum Associates.
Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skills in statistics: A cognitive-load approach. Journal of Educational Psychology, 84(4), 429–434.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1–4. http://dx.doi.org/10.1207/S15326985EP3801_1.
Pea, R. D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. Journal of the Learning Sciences, 13(3), 423–451. http://dx.doi.org/10.1207/s15327809jls1303_6.
Phye, G. D., & Sanders, C. E. (1994). Advice and feedback: Elements of practice for problem solving. Contemporary Educational Psychology, 19(3), 286–301. http://dx.doi.org/10.1006/ceps.1994.1022.
Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring and problematizing student work. Journal of the Learning Sciences, 13(3), 273–304. http://dx.doi.org/10.1207/s15327809jls1303_2.
Rey, G., & Buchwald, F. (2011). The expertise reversal effect: Cognitive load and motivational explanations. Journal of Experimental Psychology: Applied, 17(1), 33–48. http://dx.doi.org/10.1037/a0022243.
Ricci, K. E., Salas, E., & Cannon-Bowers, J. A. (1996). Do computer-based games facilitate knowledge acquisition and retention? Military Psychology, 8(4), 295–307. http://dx.doi.org/10.1207/s15327876mp0804_3.
Richardson, A. E., Powers, M. E., & Bousquet, L. G. (2011). Video game experience predicts virtual, but not real navigation performance. Computers in Human Behavior, 27(1), 552–560. http://dx.doi.org/10.1016/j.chb.2010.10.003.
Salas, E., & Cannon-Bowers, J. A. (2000). The anatomy of team training. In S. Tobias & J. D. Fletcher (Eds.), Training and retraining: A handbook for business, industry, government, and the military (pp. 312–335). New York: Macmillan Reference.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. http://dx.doi.org/10.3102/0034654307313795.
Smits, M. B., Boon, J., Sluijsmans, D. A., & van Gog, T. (2008). Content and timing of feedback in a web-based learning environment: Effects on learning as a function of prior knowledge. Interactive Learning Environments, 16(2), 183–193. http://dx.doi.org/10.1080/10494820701365952.
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312. http://dx.doi.org/10.1016/0959-4752(94)90003-5.
