Article history: Available online 11 November 2012

Keywords: Adaptive feedback; Game-based training; Instruction; Virtual environments

Abstract

Training in virtual environments (VEs) has the potential to establish mental models and task mastery while providing a safe environment in which to practice. Performance feedback is known to contribute to this learning; however, the most effective ways to provide feedback in VEs have not been established. The present study examined the effects of differing feedback content, focusing on adaptive feedback. Participants learned search procedures during multiple missions in a VE. A control group received only a performance score after each mission. Two groups additionally received either detailed or general feedback after each mission, while two other groups received feedback that adapted based on their performance (either detailed-to-general or general-to-detailed). Groups that received detailed feedback from the start of training had faster performance improvement than all other groups; however, all feedback groups showed improved performance and by the fourth mission performed at levels above the control group. Results suggest that detailed feedback early in the training cycle is the most beneficial for the fastest learning of new task skills in VEs.

© 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.chb.2012.10.007
S.R. Serge et al. / Computers in Human Behavior 29 (2013) 1150–1158 1151
Formative feedback can vary in its level of specificity, which determines how much detail is presented in a feedback message (Goodman, Wood, & Hendrickx, 2004). Formative feedback can range from very specific and detailed to very general and vague (Davis, Carson, Ammeter, & Treadway, 2005; Shute, 2008). Detailed feedback provides information that is directive in that it clearly specifies what the trainee needs to revise (Black & Wiliam, 1998; Shute, 2008). General feedback is less directive and relies more on trainee inference regarding revision of behavior (Black & Wiliam, 1998). Shute (2008) concluded that detailed feedback may be more effective than general feedback, but also suggested that this is more of a basic guideline and may not be true in all situations. For example, researchers have shown that highly specific feedback can be beneficial for training while the student is inexperienced on a particular task (Davis et al., 2005), but can hinder enduring knowledge and performance on transfer tasks in GBT (Goodman, Wood, & Chen, 2011; Goodman et al., 2004).

Fortunately, game-based environments offer opportunities for feedback to adapt to the individual as they go, much more so than less interactive training (e.g., written material, large classroom lectures). In fact, GBT can provide a continuous source of feedback so that trainees can track their own progress towards a goal, which is crucial since feedback improves learning through both its informational and motivational qualities (Bransford, Brown, & Cocking, 1999; Salas & Cannon-Bowers, 2000). However, while performance feedback is common in GBT and is considered an essential factor in learning within these environments (Ricci, Salas, & Cannon-Bowers, 1996), the relative effectiveness of different types of feedback within GBT remains an open question. The goal of the present experiment was to investigate how different types of feedback (detailed vs. general) influence acquisition of new task procedures in a game-based training environment. We hypothesized that the appropriate level of specificity might depend on the expertise level of the trainee. Therefore, besides examining the effect of detailed vs. general feedback, we also investigated the effect of adapting the feedback content based on the trainee's level of performance on the preceding task mission.

2.1. Cognitive load

Games and VEs are attractive for training because they can replicate artifacts and situations important to schema development, but they also have the potential to create additional levels of cognitive load (compared to live training). Cognitive load theory (CLT) is largely based on the assumption of a highly limited working memory and a vast long-term memory where schemas are stored and recalled when needed. CLT may help explain why detailed feedback may be more effective than general feedback in some situations but not others. According to CLT, different types of cognitive load derive from the material to be learned (intrinsic), the training environment (extraneous), and the trainee. The information being learned has an intrinsic cognitive load (i.e., the degree to which each item or procedure can be learned individually and independently of the others; Sweller, 1994), which can determine how difficult it is to learn and perform. Learning how to play the game is a task on its own and may represent an increase in extraneous cognitive load, which can lead to a less optimal learning environment (Rey & Buchwald, 2011; Sweller, 1994).

Feedback relevant to the task being trained may help guide learning and schema development for the overall task, potentially lowering the cognitive load of the task while learning is taking place. Without specific guidance or instruction during new task training, trainees may find the training overwhelming due to both the interaction with the game and no clear direction as to how to perform the task correctly. On the other hand, as mastery increases, providing feedback on information the student already knows can be distracting and add unnecessary cognitive load. Information that is relevant and beneficial for a novice may result in extraneous cognitive load for an expert (Kalyuga et al., 2003; Paas, Renkl, & Sweller, 2003). The implication is that novices may need a lot of detailed feedback, but that level of detail may also need to be decreased as skill increases (i.e., the expertise-reversal effect; Kalyuga et al., 2003). This is a tactic used in one-on-one tutoring: typically, the tutor adapts the amount of information provided based on the learner's real-time performance (Gallien & Oomen-Early, 2008). Too much support from the tutor can induce reliance on support; therefore, a good tutor adjusts the amount of detail provided as student mastery increases.

2.2. Adapting feedback in training

The current study expanded on Billings' (2010) experiment on adaptive feedback. In Billings' study, participants were trained to perform a search and identify task, first by reading about the procedures of the task, then performing the task in a virtual game environment comprised of a small town with numerous buildings. Participants received feedback based on their errors at the end of each mission. There were three static feedback conditions – detailed, general, and outcome (control) – and two adaptive feedback conditions. One of the adaptive groups was given detailed feedback that switched to general feedback as scores improved past a set criterion. The other adaptive group started with general feedback that changed to detailed feedback if performance failed to improve from the previous mission score. Results indicated that the detailed and both adaptive conditions performed better than the control group and showed performance improvements over time. Additionally, the detailed-to-general condition reached a higher level of performance at a quicker rate than the general-to-detailed group, indicating that highly detailed feedback early in the learning process may lead to faster learning.

The current effort included similar conditions but expanded on the Billings study by implementing more refined and stringent criteria for adaptation within the adaptive feedback conditions. As in that study, the feedback given to participants was either static or adaptive. Static feedback remains consistent in level of detail throughout training: (1) "Detailed" static feedback is specific information provided to the trainees regarding what tasks they are performing incorrectly, and (2) "General" static feedback is vague information that only states the learning area(s) in which the trainees are committing errors. These descriptions are consistent with previous research on feedback specificity effects on learning (Black & Wiliam, 1998; Shute, 2008). Participants in the static general condition in this study were given the option to review the training manual after each experimental session. This change is described in more detail below.

The present experiment also examined methods of adapting feedback based on different initial levels of specificity. Detailed-to-general (DG) adaptive feedback provided highly detailed feedback on any errors that were made; then, as trainee competency increased or remained above the performance criteria, the feedback switched to general. The rationale for providing DG feedback is based on scaffolding and CLT, which support the notion that novices benefit most from initial detailed feedback, leading them step by step through a process, which results in better schema creation (Clark, Nguyen, & Sweller, 2006).
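The two adaptation rules used for the adaptive conditions can be sketched as a simple decision function. This is a minimal illustration, not the study's implementation: the criterion value of 80% and the percent-correct score format are hypothetical stand-ins for the paper's actual (unreported here) thresholds.

```python
def next_feedback_type(condition, history, criterion=80.0):
    """Return 'detailed' or 'general' feedback for the upcoming mission.

    condition: 'DG' (detailed-to-general) or 'GD' (general-to-detailed)
    history: percent-correct scores for the missions completed so far
    criterion: hypothetical performance threshold for the DG switch
    """
    if condition == "DG":
        # Start detailed; switch to general once performance meets the criterion.
        if history and history[-1] >= criterion:
            return "general"
        return "detailed"
    elif condition == "GD":
        # Start general; switch to detailed if the latest score fails to
        # improve on the previous mission's score.
        if len(history) >= 2 and history[-1] <= history[-2]:
            return "detailed"
        return "general"
    raise ValueError("condition must be 'DG' or 'GD'")
```

For example, a DG trainee scoring 85% on Mission 1 would receive general feedback before Mission 2, while a GD trainee whose score dropped from 50% to 45% would be switched to detailed feedback.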
temporal demand, perceived performance, effort level, and frustration. Individuals made a tic-mark along a scale with 21 gradients. Individual items and average scores were used to determine levels of workload.

3.3.6. Knowledge pre- and post-tests

Participants were given pre-test (baseline) and post-test knowledge quizzes to assess comprehension of the training materials used in the study. Ten multiple-choice questions asked participants about the search and identify task procedure. The pre-test was given prior to participants receiving any information about the task. The post-test was given after all missions were concluded.

Participants were first asked to give consent for participation and given an opportunity to ask questions regarding the experiment. Then they filled out the demographic, VGE, and pre-test questionnaires.

Participants were then trained on the operation of the computers that they would use for the experiment. They were then given up to 15 min to read and learn the search and identify task procedures. The procedures were organized into three specific terminal learning objectives: entering and exiting buildings, proper methods for searching buildings, and communication procedures. Participants were allowed to ask the experimenter for clarification of a specific rule or procedure during the 15-min period. However, questions were not allowed once the first mission began.

Once finished reading the training manual, and immediately prior to beginning each of the subsequent missions, participants were provided with a one-page mission briefing that presented them with the premise and reason for the search and identify mission. The briefing stated that they were to assume the role of a Forensics Officer tasked with searching a particular selection of buildings in the town for a missing Alzheimer's patient. The briefing also covered which buildings they were responsible for searching and presented a description and photograph of both the Alzheimer's patient and any additional target items that needed to be reported. A maximum of 10 min was allotted for each mission.

Upon completion of an individual mission or time expiring, whichever occurred first, participants completed the NASA-TLX short form. Then the experimenter delivered feedback electronically via the text-message computer. The content of the feedback was determined by the condition to which the participant was assigned, as well as the performance of the participant (see Table 1 for examples). Every condition included a percentage-correct score in the feedback dialogue boxes. After reading the feedback, participants were asked to close the feedback dialogue box and completed the NASA TLX and CLQ. Participants in the general condition were also given the option of looking back at the training material after receiving their feedback. The researcher sent a text message asking them if they wanted to look at the manual again. They could choose to look or not, and their answers were logged in the system and noted by researchers. After this process was completed, all participants were given the briefing for the next mission. This was repeated for each of the sessions. When the final mission was finished, participants were asked to complete a final post-knowledge test of the search and identify procedures. No performance feedback was given after the last mission.

4. Results

Preliminary analyses were conducted to determine whether significant differences existed on any of the pre-test or demographic measures that may have affected mission performance between groups. No significant differences were found between conditions for any of the demographic or video game variables. Mission 1 performance scores were not significantly different between feedback conditions; this was expected, as no group had received any feedback intervention until after Mission 1. Based on Billings' (2010) findings, VGE was an expected covariate for the analysis. However, it failed to meet the required assumptions for covariates and was left out of the main analyses.

4.1. Manipulation checks

Scores were examined to determine when adaptation occurred in the two adaptive conditions (i.e., when the feedback participants received switched from detailed to general, or from general to detailed, depending on condition). Every participant in the two adaptive conditions had a switch in feedback type, although the mission on which the switch occurred varied according to performance. Thus, the adaptive conditions did adapt and were in fact different from the static conditions (Detailed and General).

4.2. Comparison of workload

No significant differences existed on TLX or CLQ scores between any of the five feedback conditions for any mission.

4.3. Comparison of feedback conditions on mission performance

Table 2 shows the mean performance scores for each condition over all missions. A mixed between-within ANOVA was conducted to assess the performance differences between each of the five feedback conditions.
Table 1
Description of experimental conditions and feedback content.

Table 2
Means and standard deviations for performance scores for all conditions over all missions.
Note: Superscripts indicate within-group (feedback condition) significance of mission performance.
(a) Significantly different (p < .05) from the preceding mission.
(b) Significantly different (p < .05) from Mission 1.
(c) Significantly different (p < .05) from Missions 1 and 2.
initially receiving detailed feedback may lead to faster learning of the training material.

4.3.3. Pre- and post-tests

Pre- and posttest means between groups were examined, and comparisons were made based on condition using the nonparametric Kruskal–Wallis test. As expected, pretest scores were not significantly different between groups. Posttest scores (M = 8.65, SD = 1.23) showed a significant increase from pretest scores (M = 2.01, SD = 1.15; t(103) = 41.56, p < .001). Table 3 shows results comparing posttest scores between conditions. The Kruskal–Wallis test was significant (χ2(4, N = 104) = 13.89, p = .008). However, individual comparisons using Mann–Whitney U and Wilcoxon tests indicated only one significant difference between all groups: the Detailed feedback group scored significantly higher than the control (z = 3.498, p < .001).

Table 3
Means and Mann–Whitney U tests between each feedback condition and the control on posttest scores.

Condition              N    Mann–Whitney U    Wilcoxon W    z        p
Detailed vs. Control   37   61.500            232.500       3.498    .001
General vs. Control    40   172.000           343.000       .741     .459
DG vs. Control         41   139.500           310.500       1.836    .066
GD vs. Control         40   146.000           317.000       1.480    .139

4.4. Training manual effects for the general feedback condition

Participants in the general condition had the option of reviewing the training manual after each of the first three missions. This was done to examine whether people in this condition were simply forgetting the information from the training manual, such that the general-level feedback may not have been enough to elicit recall. Fifty percent of the participants in this condition never opted to review the manual again, and 18% opted to do so once. The rest consulted the manual two or three times. For purposes of analysis, the general feedback group was split into two subgroups: people that consulted the manual two or three times were assigned to a General Manual group (N = 7), and the rest were assigned to a General No-Manual group (N = 15). A repeated-measures ANOVA was conducted to analyze the differences between the manual group (manual) and the no-manual group (no-manual). There was a significant main effect of mission (F(2.08, 41.65) = 5.78, p = .002) and a significant mission by condition interaction (F(2.08, 41.65) = 4.23, p = .020). Table 4 presents descriptive statistics for each new group and their mission performance.

Further examination of the effects revealed that the entire mission effect was a result of the manual group's mission performance over time (F(3, 18) = 11.224, p < .001; see Fig. 2). The no-manual group did not significantly change their performance over the four missions (p = .397). Post hoc tests from the ANOVA, using Bonferroni correction, revealed that the manual group's Mission 3 (p < .001) and Mission 4 (p = .001) performance scores were significantly better than their Mission 1 performance scores. Mission 2 scores were also higher than Mission 1 scores, but this difference was not significant. Examining the differences between groups on missions using t-tests revealed that the manual group outperformed the no-manual group by Missions 3 and 4 (ts > 2.69; Bonferroni-corrected ps < .05). The no-manual group's performance remained consistently low throughout the four missions while the manual group's performance increased.

4.4.1. CLQ, workload, and VGE

Since reviewing the manual was a choice for people within this condition, and only some chose to do so, further analyses were conducted; these revealed that differences existed on some VGE and mental demand workload scores between the manual and no-manual groups. First, people choosing to review the manual (M = .833, SD = .892) had significantly higher ratings on VGE than those that did not (M = .019, SD = .793; t(20) = 2.395, p = .027). It also appeared that participants choosing to review the manual (M = 6.29, SD = 3.25) had significantly lower ratings on the mental demand scale of the NASA TLX than those that did not (M = 4.18, SD = 1.46; t(20) = 2.87, p = .010). Implications of this are discussed later. These results support Hypothesis 2 in that looking at the manual helped those in the general feedback condition improve their performance over time.

4.4.2. Exploratory comparison to other feedback groups

After completing the initial analyses, an additional analysis examined how reviewing the manual in the general feedback group compared to all other groups (Fig. 3). Results showed that, over all missions, the manual group did not differ significantly from the detailed feedback group (p = .409). The manual group did, however, show significant improvement over both the control and the no-manual group (p < .001 and p = .028, respectively) by Mission 3 and maintained this trend vs. the control group in Mission 4 (p < .001).

Likewise, results also showed that, over all missions, the no-manual group did not differ significantly from the control group (p = .522). The no-manual group's performance scores were significantly lower than every other condition's scores, aside from the control, on Mission 3. However, scores on every other mission, including Mission 4, were not statistically different from the other feedback groups.

These results shed some interesting light on how effective access to the manual can be when trainees are only given a vague level of performance feedback. Participants who looked at the manual did not differ significantly from any other condition that received detailed feedback (i.e., Detailed, adaptive DG, adaptive GD). Referring to the manual enabled trainees to increase their performance to a level comparable with the detailed feedback groups and significantly higher than the control. The no-manual group scores were consistently similar to those of the control group over all missions. However, it is important to note that the no-manual group scores were only significantly lower than other feedback groups on Mission 3. No-manual group scores, while lower, were not significantly different from the other feedback group scores on Missions 1, 2, and 4.

5. Discussion

This study sought to investigate how different styles of feedback, particularly adaptive feedback, within a GBT system affect a trainee's performance within that environment. Ultimately, the goal of this experiment was to provide answers to three broad questions regarding feedback in GBT. First, is adapting feedback based on an individual's performance better than providing static feedback in GBT? We predicted, based on CLT and previous research, that DG feedback would result in better learning outcomes and performance than all other feedback styles. Second, does receiving detailed feedback at some point in GBT lead to better training? In other words, we predicted that participants who received detailed feedback at some point would perform better than if they did not. Third, why is detailed feedback typically more effective than general feedback? Participants in the general condition were given the option of refreshing their memory to see if forgetting could explain why general feedback has been found to be less effective.
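The pairwise Mann–Whitney U comparisons reported in Table 3 can be computed directly from rank sums. The sketch below is a self-contained, pure-Python illustration of that statistic; the score lists in the test are hypothetical examples, not the study's data.

```python
def mann_whitney_u(x, y):
    """Return the smaller Mann-Whitney U for two independent samples.

    Ties are handled with midranks (tied observations share the
    average of the ranks they span).
    """
    pooled = sorted(x + y)
    # Map each distinct value to its midrank in the pooled ordering.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in x)          # rank sum of the first sample
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)
```

Completely separated samples give U = 0 (e.g., `mann_whitney_u([1, 2, 3], [4, 5, 6])`), while heavily overlapping samples give U near n1*n2/2; in practice a library routine such as `scipy.stats.mannwhitneyu` would also supply the p-value.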
Table 4
Means and SDs for the manual and no-manual groups created from the general feedback condition.
detailed feedback, adaptive or self-directed, is beneficial during the training process, and detailed feedback up front leads to immediate improvement. However, it appears that those receiving detailed-level feedback at any point during the gaming session (i.e., the Detailed, DG, and GD groups) were eventually able to perform comparably to each other.

In terms of the third question, we predicted that trainees were simply forgetting what they had read in the manual, leading to poorer performance from the General feedback group. In other words, participants who received detailed feedback were reminded of what they forgot, or failed to learn, and improved their performance. On the other hand, trainees who received general feedback learned that they were underperforming on some objective but never learned the exact procedures they were forgetting, so they never improved. To explore this and expand on Billings' (2010) original findings, the general feedback group was divided into those that reviewed the manual, which was an addition to Billings' original general feedback condition, vs. those that did not, which mimicked Billings' original manipulation. Reviewing the manual helped trainees improve their scores and perform similarly to those receiving detailed or adaptive feedback. The no-manual group tended to have much lower scores, similar to those of the control. In this sense, the no-manual group acted as a general-feedback-only condition while the manual group acted as a self-intervention group. These findings lend support to the notion that general feedback alone is ineffective; it does not seem to provide support for learning. If people choose to access the training material during training sessions, even without being told their specific errors, it is possible for them to correct errors and improve their performance. These findings also support the idea that trainees may not be very good regulators of their own learning (e.g., Bjork, 1999; Kornell & Bjork, 2007). Some may simply have been unaware how beneficial looking at the manual was to improving performance.

A number of factors may contribute to the differences observed between these groups. First, there were some interesting findings in the comparison of results for the split general feedback groups. The group choosing to review the manual was able to perform similarly to the detailed and both adaptive feedback groups. Those choosing not to review the manual performed more similarly to the control. However, the no-manual scores were only lower than the detailed or adaptive condition scores on one mission (Mission 3). Despite this, no-manual group scores were consistently lower and did not improve over time, opposite to the other feedback groups. Earlier predictions stated that those receiving some level of detailed feedback would perform better than others receiving general or no feedback. In this case, the manual may have acted as a source of detailed feedback, especially considering that it contained all of the procedural information needed for the task.

A second explanation may be found in VGE scores between the manual groups. VGE was found to account for differences between individuals who chose to review the manual and those who did not. It is possible that those who reported more video game experience chose to review the manual because they were not as concerned with the actual game-play and controls used in the GBT system. Unfamiliarity with the mouse and keyboard combination for controlling the avatar in the game may have been overwhelming to people with low gaming experience. Those with less gaming experience may have been more focused on learning how to play the game rather than learning the proper task procedures within the training game, which raises some concerns regarding the deployment of GBT systems (see Adams, Mayer, MacNamara, Koenig, & Wainess, 2012). This finding is also consistent with Sweller's cognitive load theory and may help explain why general feedback is not as effective as detailed feedback: for less experienced video game players, the game itself presents extraneous cognitive load, leaving fewer resources available for learning the material, resulting in poor task performance.

The results of the present research are significant in a number of ways. First, they present some evidence that general feedback alone may not be enough to elicit recall of recently trained task procedures without some level of prior task knowledge or some other type of intervention, such as access to the training manual (Smits, Boon, Sluijsmans, & van Gog, 2008). These findings may also provide some explanation as to why general feedback has been found to be less effective than detailed feedback: trainees may simply have forgotten the information, and general feedback is not enough to prompt them. These results indicate that providing any additional detail about errors, including the ability to review the manual, leads to better performance and promotes recall of poorly learned or forgotten information.

5.1. Study limitations and future research

While these results present some significant steps forward in the investigation of feedback in GBT, there are also some limitations that should be addressed. The lack of significant differences in cognitive load between the conditions could reflect both the task and the measures used. While feedback was shown to generally improve performance for most feedback groups at some point throughout the training, the task itself may not have been difficult enough to elicit higher levels of cognitive load, which was suggested to lead to differences in performance. Furthermore, the NASA-TLX may have been inadequate for determining the workload of users, which would support predictions based on CLT. Future research should utilize tasks of differing difficulty levels and a more effective method of assessing workload to determine whether improvements in performance were due to performance feedback rather than exposure to the game itself.

This experiment examined how well a person learned to perform a task within a virtual, GBT environment given different feedback interventions. Despite finding significant differences in performance between conditions, there was no direct measurement of transfer or knowledge retention after training. Opportunities for actual transfer tasks tend to be rare, but when possible, future research should include these tasks to accurately measure the effectiveness of training with different feedback styles. In addition, individual difference factors, such as gaming experience or learning style, and their impacts on learning within GBT systems may also be a worthwhile research topic.

One challenge of conducting research on adaptive training is determining how to adjust the performance criteria used to determine feedback content for the adaptive conditions. While the performance criteria used in this experiment were more carefully calibrated than those used in Billings' (2010) experiment, it is possible that these criteria were still not sensitive enough. Further research might aid in determining optimal criterion levels; however, the feedback criteria set for a particular task are likely different for another task. More research is needed to explain where and how criterion levels should be set for each individual task.

6. Conclusions

The goal of this paper was to explore the specificity of feedback messages for GBT systems, with a particular focus on the potential benefits of performance-based adaptive feedback. While this study demonstrated that providing adaptive feedback can be helpful, providing static detailed feedback was just as effective. Certainly, more research is needed over a broader spectrum of GBT tasks in order to provide a clearer picture of how adaptive feedback relates to learning over immediate and long-term applications. In terms of immediate performance improvements, providing highly detailed feedback (whether adaptive or static) appears to speed up acquisition of procedural steps, leading to less dependence on detailed feedback later in training. Detailed feedback was the most effective way to train individuals under these circumstances.

References

Adams, D. M., Mayer, R. E., MacNamara, A., Koenig, A., & Wainess, R. (2012). Narrative games for learning: Testing the discovery and narrative hypotheses. Journal of Educational Psychology, 104(1), 235–249. http://dx.doi.org/10.1037/a0025595
Billings, D. R. (2010). Adaptive feedback in simulation-based training (Unpublished doctoral dissertation). University of Central Florida, Orlando, FL, USA.
Bjork, R. A. (1999). Assessing our own competence: Heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5, 7–74.
Boot, W. R., Neider, M. B., & Kramer, A. F. (2009). Training and transfer of training in the search for camouflaged targets. Attention, Perception, & Psychophysics, 71(4), 950–963. http://dx.doi.org/10.3758/APP.71.4.950
Bransford, J., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press for National Research Council.
Clark, R. C., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco, CA: Pfeiffer.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31. http://dx.doi.org/10.1207/S15326985EP3801_4
Kornell, N., & Bjork, R. A. (2007). The promise and perils of self-regulated study. Psychonomic Bulletin & Review, 14, 219–224.
Lee, A. Y., Bond, G. D., Scarbrough, P. S., Gillan, D. J., & Cooke, N. J. (2007). Team training and transfer in differing contexts. Cognitive Technology, 12(2), 17–29.
Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59(1), 14–19. http://dx.doi.org/10.1037/0003-066X.59.1.14
Mayer, R. E., & Johnson, C. I. (2010). Adding instructional features that promote learning in a game-like environment. Journal of Educational Computing Research, 42(3), 241–265. http://dx.doi.org/10.2190/EC.42.3.a
Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia. Instructional Science, 32(1–2), 99–113. http://dx.doi.org/10.1023/B:TRUC.0000021811.66966.1d
Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 125–144). New York: Lawrence Erlbaum Associates.
Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skills in statistics: A cognitive-load approach. Journal of Educational Psychology, 84(4), 429–434.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1–4. http://dx.doi.org/10.1207/S15326985EP3801_1
Pea, R. D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. Journal of the Learning Sciences, 13(3), 423–451.
guidelines to manage cognitive load. San Francisco, CA: Pfieffer, A Wiley Imprint. Learning Sciences, 13(3), 423–451. http://dx.doi.org/10.1207/
Davis, W. D., Carson, C. M., Ammeter, A. P., & Treadway, D. C. (2005). The interactive s15327809jls1303_6.
effects of goal orientation and feedback specificity on task performance. Human Phye, G. D., & Sanders, C. E. (1994). Advice and feedback: Elements of practice for
Performance, 18(4), 409–426. http://dx.doi.org/10.1207/s15327043hup1804_7. problem solving. Contemporary Educational Psychology, 19(3), 286–301. http://
Day, S. B., & Goldstone, R. L. (2011). Analogical transfer from a simulated physical dx.doi.org/10.1006/ceps.1994.1022.
system. Journal of Experimental Psychology: Learning, Memory, and Cognition, Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring
37(3), 551–567. http://dx.doi.org/10.1037/a0022333. and problematizing student work. Journal of the Learning Sciences, 13(3),
Duffy, V. C., Ng, P. P. W., & Ramakrishnan, A. (2004). Impact of a simulated accident 273–304. http://dx.doi.org/10.1207/s15327809jls1303_2.
in virtual training on decision-making performance. International Journal of Rey, G., & Buchwald, F. (2011). The expertise reversal effect: Cognitive load and
Industrial Ergonomics, 34(4), 335–348. http://dx.doi.org/10.1016/ motivational explanations. Journal of Experimental Psychology: Applied, 17(1),
j.ergon.2004.04.012. 33–48. http://dx.doi.org/10.1037/a0022243.
Eiriksdottir, E., & Catrambone, R. (2011). Procedural instructions, principles, and Ricci, K. E., Salas, E., & Cannon-Bowers, J. A. (1996). Do computer-based games
examples: How to structure instructions for procedural tasks to enhance facilitate knowledge acquisition and retention? Military Psychology, 8(4),
performance, learning, and transfer. Human Factors, 53(6), 749–770. http:// 295–307. http://dx.doi.org/10.1207/s15327876mp0804_3.
dx.doi.org/10.1177/0018720811419154. Richardson, A. E., Powers, M. E., & Bousquet, L. G. (2011). Video game experience
Gallien, T., & Oomen-Early, J. (2008). Personalized versus collective instructor predicts virtual, but not real navigation performance. Computers in Human
feedback in the online courseroom: Does type of feedback affect student Behavior, 27(1), 552–560. http://dx.doi.org/10.1016/j.chb.2010.10.003.
satisfaction, academic performance and perceived connectedness with the Salas, E., & Cannon-Bowers, J. A. (2000). The anatomy of team training. In S. Tobias &
instructor? International Journal on E-Learning, 7(3), 463–476. J. D. Fletcher (Eds.), Training and retraining: A handbook for business, industry,
Goodman, J. S., Wood, R. E., & Chen, Z. (2011). Feedback specificity, information government, and the military (pp. 312–335). New York: Macmillan Reference.
processing, and transfer of training. Organizational Behavior and Human Decision Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research,
Processes, 115(2), 253–267. http://dx.doi.org/10.1016/j.obhdp. 2011.01.001. 78(1), 153–189. http://dx.doi.org/10.3102/0034654307313795.
Goodman, J. S., Wood, R. E., & Hendrickx, M. (2004). Feedback specificity, Smits, M. B., Boon, J., Sluijsmans, D. A., & van Gog, T. (2008). Content and timing of
exploration, and learning. Journal of Applied Psychology, 89(2), 248–262. feedback in a web-based learning environment: Effects on learning as a
http://dx.doi.org/10.1037/0021-9010.89.2.248. function of prior knowledge. Interactive Learning Environments, 16(2), 183–193.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (task load index): http://dx.doi.org/10.1080/10494820701365952.
Results of empirical and theoretical research. In P. A. Hancock, N. Meshkati, P. A. Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional
Hancock, & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Oxford design. Learning and Instruction, 4(4), 295–312. http://dx.doi.org/10.1016/0959-
England: North-Holland. 4752(94)90003-5.
Hays, R. T. (2005). The effectiveness of instructional games: A literature review and
discussion (NAW CTSD technical report 2005–004). Orlando: Naval Air Warfare
Center Training Systems Division.