1 Introduction
The internal state of familiarity has been the subject of a large body of work in
cognitive psychology. It is often associated with recognition memory. According
to dual process theories of recognition, familiarity is one of two processes that
can lead to recognizing having had prior experience with something (the other
process being recollection, or a calling to mind of a specific prior instance in
which the present stimulus was encountered) [25, 9].
Though there are many different theories regarding how familiarity and recollection might relate to one another, it has recently been suggested that initially
sensing familiarity may trigger the search of memory that leads to recollection
[12, 26, 30]. Following from this general theoretical framework, in the present
work, we sought to capture instances of familiarity the moment a participant
sensed it, regardless of whether that initial sense of familiarity ultimately led to
recall success or not.
Training models to detect familiarity requires a substantial dataset. Therefore, a crucial aspect of our study relied on an experimental paradigm from prior research known to induce the sensation of familiarity in the laboratory [11, 30].
2 Related Work
Although this study is novel in its attempt to automatically detect instances of subjective familiarity experienced during scenes resembling previously experienced scenes, the idea that subjective recognition and other cognitive states may be automatically detected is not new, and it has shown promise in past
studies. In one study, a model based on eye gaze was able to classify whether
participants had previously viewed an image, with an average accuracy of 68.7% [18]. Based on gaze data collected when participants were presented with an image, the models in Nishimura and Faisal [18] attempted to classify whether the participants had previously viewed this exact image in an earlier part of the experiment. This classification was done independently of whether participants
explicitly indicated recognition. In contrast, in the current work, the goal is to
classify instances only where participants are experiencing the feeling of familiarity. In that way, this study places a greater emphasis on detecting the internal
state of participants. Additionally, the familiarity in this work is evoked from
configurally similar, but non-identical scenes. This study’s focus on familiarity
is intrinsically linked to the subject of cognitive states. Much of the research
done on the detection of internal states has been done with respect to mind
wandering — the shift of attention away from a particular task. Models that
include or rely on non-gaze-based features have been investigated [5, 3, 8], but models built using gaze-based features have been the most effective at detecting mind wandering [21, 24]. As such, the features we used in our models were all
gaze-based. Many studies have attempted detection of mind wandering either in
the context of reading [17, 3, 4], or while watching videos [28, 20]. This study is
the first to investigate the detection of internal states in the context of virtual
reality. In combination with global gaze features, some studies included local features that were informed by the gaze direction relative to the text being read [4]
or particular areas of interest in the film being watched [28]. The features in our
model resemble the global features in these studies that were independent of any
context. Furthermore, most of these studies used probe-based detection of mind wandering [3, 4, 20] because mind wandering often occurs without the individual
being immediately aware of it. However, self-caught reports of mind wandering
have also been incorporated [17], and we primarily utilized this method in the
present study to more closely pinpoint the moment at which familiarity was
experienced by the participant. It should be noted that a probe was included
after every scene in the non-virtual reality experiment. Our study builds on the
current body of knowledge by applying the previous findings and techniques
from detecting mind wandering to detecting another cognitive state: the sense
of familiarity.
Familiarity Task Following from prior research that used a virtual tour paradigm to induce the sensation of familiarity in the laboratory [11, 30], participants in the present study viewed virtual tours of various scenes via videos of walk-throughs of virtual environments shown on a computer screen.
In prior research using the virtual tour task [11], in the study phase, partici-
pants were taken through settings they had never seen before. While the virtual
tour of the study phase scenes took place, the name of the scene was played
aloud through speakers. For example, if viewing a golf course, a voice would
state “This is a golf course.” Participants were asked to try to remember each
study phase scene along with its name. In the test phase, participants were taken
through entirely new settings, some of which had the same spatial layout as an
earlier toured scene from the study phase. For example, a clothing store scene
may have the same arrangement of elements relative to one another as an earlier toured bedroom scene. In short, an otherwise novel scene in the test phase
may share a spatial configuration with a scene from the study phase. No sound
accompanied the viewing of the test phase videos.
To establish the same configuration of elements from study to test without
explicitly duplicating the objects, a grid layout was used to create spatially
mapped but otherwise novel scenes, as shown in Figure 1.
Fig. 1: (A) Grid Used to Create Same Spatial Configuration; (B) Sample “Aquarium” Study Scene; (C) Sample “Reception Area” Test Scene Corresponding to “Aquarium” Study Scene [10].
Fig. 2: Sample “Alley” Study Scene and Configurally Similar “Hallway” Test Scene [10].
[Figure panel labels: ALLEY, HALLWAY, COURTYARD, MUSEUM.]
The scenes used in the present study were those used by Okada et al. (2023) [30] in their Experiments 2a and 2b.
Procedure Participants were brought into a test room where they sat at a desk
with a computer connected to an eye tracker and webcam.3 At the beginning of
the experiment, the eye tracker was calibrated for each participant.
Once the task began, participants were asked to watch a series of study
videos where they were pulled through various scenes as previously discussed in
3 A photo of the experiment hardware can be found in [36].
the familiarity task. Once the test phase began, participants were instructed to hit the ‘up arrow key’ on the keyboard at any point that they felt a sense of familiarity. This key was labeled with a bright yellow sticker to make it easier for participants to find. Participants were instructed to keep their finger on the key, ready to press it, so that they would not need to look down at the key. The
task took participants around an hour to complete. All participants completed
two study-test blocks, watching a total of 96 videos, each under 30 seconds long.
Eye Tracking and Feature Generation In this work, we utilize the Tobii
Pro Fusion eye tracker and PyTrack, an end-to-end open-source solution for the
analysis and visualization of eye tracking data [19]. This eye tracker captures
250 images per second (250 Hz) and has two built-in pupil tracking modules.
PyTrack was used to extract parameters of interest such as blinks, saccade count,
average pupil size, etc.
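PyTrack performs this event detection internally; purely as an illustration of the kind of computation involved, the sketch below counts saccades and fixations from timestamped gaze samples using a simple velocity threshold. The column names and the threshold value are our own assumptions for illustration, not PyTrack's actual interface or parameters.

```python
import numpy as np
import pandas as pd

SACCADE_VELOCITY_THRESHOLD = 30.0  # deg/s; an assumed I-VT-style threshold

def count_gaze_events(samples: pd.DataFrame) -> dict:
    """Label each inter-sample interval as saccade or fixation by gaze
    velocity, then count runs of consecutive same-labeled samples."""
    dt = np.diff(samples["timestamp"])        # seconds between samples
    dx = np.diff(samples["gaze_x_deg"])       # horizontal gaze, degrees
    dy = np.diff(samples["gaze_y_deg"])       # vertical gaze, degrees
    velocity = np.hypot(dx, dy) / dt          # angular velocity, deg/s

    is_saccade = velocity > SACCADE_VELOCITY_THRESHOLD
    # A new event starts wherever the saccade/fixation label changes.
    starts = np.flatnonzero(np.diff(is_saccade.astype(int)) != 0) + 1
    event_labels = np.concatenate(([is_saccade[0]], is_saccade[starts]))
    return {
        "saccade_count": int(event_labels.sum()),
        "fixation_count": int((~event_labels).sum()),
        "mean_pupil_size": float(samples["pupil_diameter"].mean()),
    }
```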
[Figure: illustration of gaze events, showing two fixations and a microsaccade.]
Because the key press necessarily lags behind the moment familiarity is first experienced, the feature window needed a minimum buffer of 1.5 seconds before the key press, but we extended that to two seconds as a precaution. We then examined the 1 second window of time before the two second buffer and used this window to extract eye gaze features. An image of this timeline can be found in Figure 5. To extract features, we therefore needed at least three seconds of data prior to the key press (the two second buffer and the one second window). Unfortunately, of our 698 instances of self-reported familiarity, only 263 had three seconds of time before the button press; this is likely because the onset of familiarity occurred much more rapidly than we anticipated.
Fig. 5: Timeline of feature extraction: a 1-second feature window followed by a 2-second buffer immediately preceding the familiarity key press.
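To make the windowing concrete, the following is a minimal sketch of the selection logic, assuming a pandas DataFrame of timestamped gaze samples with a hypothetical `timestamp` column (Unix time, in seconds) and the timestamp of a key press; it returns the 1-second feature window or None when fewer than three seconds of data precede the press.

```python
import pandas as pd

BUFFER_S = 2.0   # buffer discarded immediately before the key press
WINDOW_S = 1.0   # window used for eye gaze feature extraction

def window_before_press(samples: pd.DataFrame, press_time: float):
    """Return the 1 s slice ending 2 s before the press, or None if the
    recording does not extend 3 s back from the press."""
    start = press_time - BUFFER_S - WINDOW_S
    if samples["timestamp"].iloc[0] > start:
        return None  # fewer than 3 s of data before this press
    end = press_time - BUFFER_S
    in_window = (samples["timestamp"] >= start) & (samples["timestamp"] < end)
    return samples.loc[in_window]
```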
3.4 Results
Using the Support Vector Classifier (SVC) algorithm, we trained a machine
learning model that identified familiarity with a Cohen’s Kappa of 0.22 (SD =
0.42) and an F1 score of 0.56 (SD = 0.23). This performance is in line with other
predictors of internal cognitive states, such as mind wandering [5].
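For reference, Cohen’s Kappa [14] measures agreement between model predictions and participant self-reports after correcting for the agreement expected by chance:

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed agreement and \(p_e\) is the chance agreement; \(\kappa = 0\) corresponds to chance-level prediction, so a kappa of 0.22 reflects a modest but above-chance signal.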
The high standard deviation values among metrics, as shown in Table 1, reflect a large amount of variation among participants. This mixed performance indicates individual variation in the eye gaze patterns that emerge as one experiences familiarity. Additionally, the number of instances reported by participants seems to affect the model's performance. Participants who reported only one or two instances of familiarity ended up at the polar ends of the Cohen's Kappa distribution. However, the majority of participants' kappa scores range from 0 to 0.65, as can be seen in Figure 6.
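The exact training configuration is not reproduced here; the sketch below shows one plausible way to obtain such a per-participant distribution of scores with scikit-learn, assuming a feature matrix X, binary familiarity labels y, and per-instance participant IDs groups (all assumed names). Holding each participant out in turn is our assumption, consistent with reporting a distribution of per-participant kappa values.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import cohen_kappa_score, f1_score

def per_participant_scores(X, y, groups):
    """Train on all participants but one, test on the held-out
    participant, and record that participant's kappa and F1."""
    kappas, f1s = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        model = make_pipeline(StandardScaler(), SVC(class_weight="balanced"))
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        kappas.append(cohen_kappa_score(y[test_idx], pred))
        f1s.append(f1_score(y[test_idx], pred))
    return np.mean(kappas), np.std(kappas), np.mean(f1s), np.std(f1s)
```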
Procedure Participants were brought into a test room where they sat in a
chair in the center of the room. They were asked to sit for the duration of the
experiment to prevent motion sickness. While sitting, the participant was fitted
with the HTC Vive Pro Eye headset and instructed on how to position the earphones and adjust the sound level. Once fitted and wearing the VR headset, the
participant was given the VR hand controllers and instructed on their general
use. The participant was then taken through a calibration procedure for the eye
tracking component of the study. The calibration procedure involved adjusting the interpupillary distance for the participant and then running a routine that instructed the participant to look in specific directions at particular moments while the eye tracker recorded their eye movements and automatically calibrated itself to them.
Once the experimental procedure began, participants were sequentially placed
within each scene for a fixed duration. Within each scene, the participant had
the ability to explore their surroundings by turning their head to look around.
For the study portion of the experiment, participants were instructed to: “Do your best to try to remember that scene along with what its name is. While viewing each of these scenes, a voice will play through the VR headphones telling the name of the scene. For example, while viewing a golf course, the voice would say ‘This is a golf course. Golf course.’ Try to also remember the name so that you can convey this later on if asked about earlier-viewed scenes.” After the
study phase ended, participants were asked if they needed a break from the VR
immersion.
For the test portion, before the scenes began to play, participants saw the
instruction: “If the scene starts to feel familiar to you, push the button under
your THUMB to indicate that it feels familiar. Try to do this AS SOON as
you start to feel a sense of familiarity with the scene. Specifically, if the scene
reminds you of a specific scene that you viewed earlier. Let the experimenter
know what that scene is that this scene is reminding you of. Sometimes, a scene
may remind you of a similar-looking scene from earlier. Whenever this happens
(even if you did not push the button) please tell the experimenter the name of
the earlier-viewed scene. Even if the test scene did not remind you of a specific
earlier-viewed scene.” When participants pressed the button to indicate famil-
iarity, the experimenter was made aware through a message logged to the Unity
terminal. Participants were then asked if they could identify possible reasons for
the familiarity, and were continuously reminded that sometimes they may be
able to identify a reason for any perceived familiarity with a scene and other
times they may not. Most of the answers that participants gave regarding the
source of their familiarity corresponded to earlier viewed scenes. However, some
participants indicated some scenes reminded them of other locations, such as “a friend’s basement.” These answers were logged on paper by the experimenters and recorded with a microphone. Similarly to the two-dimensional familiarity task,
participants completed two blocks of the study and test phases.
Eye Tracking and Feature Generation The HTC Vive Pro Eye is a virtual
reality headset with built-in infrared-based eye tracking technology developed by
HTC Corporation. Prior research suggests that the HTC Vive Pro Eye validly
measures eye movement metrics of interest to scientists [35]. The headset was
used to collect eye tracking data within Unity from the participants while they
were in the virtual environments designed for this experiment. Eye tracking
data was collected using the SRanipal software development kit (SDK) version
1.3.6.8 for Unity provided by the HTC Corporation [1]. Previous work has shown
timestamped eye tracking data from this device collected with Unity and the
SRanipal SDK can be used for accurately assessing saccadic eye movements
[38].
The SRanipal SDK allowed us to easily record the following eye measure-
ments: pupil position, pupil diameter, eye openness, gaze origin, and gaze direc-
tion. The data was collected into a buffer at roughly 120 Hz in a dedicated thread
using the SRanipal callback registration function, and the buffer was written to
a file in the form of comma-separated values (CSVs) at the end of each scene.
Each time eye tracking data points were collected, we also recorded the current
Unix timestamp from the computer running the program using the DateTime
struct. While the SRanipal SDK does provide a timestamp data point, previous work has shown that this timestamp was inaccurate and error-prone in earlier versions of the SDK [38]. While bugs relating to the timestamp may have been fixed in the latest version of the SRanipal SDK, we opted to use the system time rather than rely on those issues having been resolved.
Additionally, using the Unity ActionBasedController class, we recorded whether
the participant was pressing the button on the HTC Vive Controller that they
were instructed to press to indicate a sense of familiarity.
This data collection approach allowed us to generate two CSV files per par-
ticipant, one for each block, each containing nearly one hundred thousand times-
tamped rows of data. Each row contained the eye measurements described above,
the status of the familiarity indication button (pressed or unpressed), and the
current VR scene the participant was in at that point in time. We acquired a
row of data every 8.33 milliseconds on average throughout the experiment.
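As a sketch of how such a block file can be consumed downstream (the filename and column names here are hypothetical, not the exact headers in our CSVs):

```python
import pandas as pd

block = pd.read_csv("participant01_block1.csv")  # hypothetical filename

# Mean inter-sample interval should be close to 8.33 ms (~120 Hz).
mean_dt_ms = block["timestamp"].diff().mean() * 1000.0

# A press onset is a row where the button changes from unpressed to pressed.
pressed = block["button_pressed"].astype(bool)
onsets = block.loc[pressed & ~pressed.shift(fill_value=False)]

print(mean_dt_ms, len(onsets), block["scene"].nunique())
```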
Similarly to the methods described in Section 3.1, PyTrack was used to extract parameters of interest [19]. Table 2 shows descriptive statistics for a portion of the eye gaze features. All of the eye gaze features generated from this experiment are listed in Figure 4.
Table 2: Descriptive statistics for a portion of the eye gaze features.

                                    Familiarity              Non-Familiarity
Feature                      Mean    SD   Min    Max    Mean    SD   Min    Max
Fixation Count               5.60  2.99  0.00   7.00    5.91  3.10  0.00   8.00
Fixation Duration (ms)      30.91 16.60  0.00 204.00   33.27 19.48  0.00 229.00
Saccade Count                3.23  1.63  0.00   4.00    3.23  1.67  0.00   4.00
Saccade Duration (ms)       48.87 36.12  0.00 316.00   43.19 36.77  0.00 236.00
Microsaccade Count           0.21  0.50  0.00   3.00    0.23  0.51  0.00   3.00
Microsaccade Duration (ms)   1.64  3.65  0.00  15.50    1.85  3.89  0.00  20.00
Blink Count                  0.71  0.70  0.00   3.00    0.83  0.70  0.00   3.00
Blink Duration (ms)         24.19 39.12  0.00 232.00   32.18 48.50  0.00 298.00
4.5 Results
Using the KNN algorithm, our best model resulted in a kappa value of 0.18 (SD = 0.14). Additional evaluation metrics can be seen in Table 5. While this Cohen's Kappa value is slightly lower than the best model's value for the two-dimensional experiment, the standard deviation is substantially lower. This means the model's performance was more consistent across participants, rather than being strong for some and extremely weak for others. Figure 8 shows the distribution of kappa values, which appears approximately normal.
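The hyperparameter search behind this best model is not detailed here; one plausible sketch tunes the neighborhood size and weighting scheme with scikit-learn while keeping each participant's instances within a single fold (X, y, and groups as assumed earlier):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, GroupKFold
from sklearn.metrics import make_scorer, cohen_kappa_score

def tune_knn(X, y, groups):
    """Grid-search k and the weighting scheme, scoring by Cohen's Kappa
    and keeping each participant's data within a single fold."""
    pipe = Pipeline([("scale", StandardScaler()),
                     ("knn", KNeighborsClassifier())])
    search = GridSearchCV(
        pipe,
        param_grid={"knn__n_neighbors": [3, 5, 7, 9, 11],
                    "knn__weights": ["uniform", "distance"]},
        scoring=make_scorer(cohen_kappa_score),
        cv=GroupKFold(n_splits=5),
    )
    search.fit(X, y, groups=groups)
    return search.best_estimator_, search.best_params_
```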
[Figure: timeline of a VR test scene from scene onset to scene end, marking the familiarity button press.]
5 Discussion
This work investigated automatically identifying the sense of familiarity from eye gaze features. The results from both our two-dimensional and more immersive three-dimensional virtual reality (VR) experiments indicate that the internal subjective state of familiarity does manifest through the eyes. The ability to detect the state of sensing familiarity through eye gaze patterns is akin to detecting other internal cognitive states like mind wandering [5, 3, 28, 22].
The difference in the Cohen’s Kappa values between the 2D and more immersive
3D VR setups (0.22 vs. 0.18) might be attributed to the distinct nature of
interactions in these environments. VR’s more immersive nature might elicit
more natural and varied gaze patterns, particularly given that participants could
turn their heads to look around within each 3D scene, impacting the model’s
prediction capability. This disparity underscores the need to tailor eye tracking
methodologies and algorithms to the specificities of the interaction environment.
Despite the promising findings, there are limitations to this study; most of these limitations stem from the need for additional research to gain a deeper understanding of the subjective sense of familiarity as it relates to eye gaze. For example, pupil diameter at stimulus onset as a function of eventual downstream reporting of subjective familiarity was not examined in the present study; it is possible that changes in pupil size occur upstream of the subjective sense of familiarity.
For example, Ryals et al. (2021) examined pupil size for a short time window
extending forward from stimulus onset as a function of eventual reporting of a
tip-of-the-tongue state or not and found robust pupil size differences, whereby
larger pupil diameter following stimulus onset was associated with the feeling of
a word being on the tip of the tongue [33]. It is as yet unclear if the same would
hold true for the subjective sense of familiarity, as we only examined pupil size
for a short time window extending backwards from the response button press.
Thus, there is more to be learned about the physiological responses associated
with the subjective sense of familiarity.
Also, while consistent with other works automatically identifying internal
states [3, 5], the high standard deviation in model performance suggests signif-
icant variability in individual eye gaze patterns — this variability complicates
efforts to integrate automated detection into AI [7]. The constraint of data col-
lection (only instances with a three-second window prior to reporting familiarity
were used) might have limited the scope of our analysis. Future research should aim to hold the button press constant across the familiar and unfamiliar response options (such as by requiring participants to press the right-hand controller button the moment a scene is deemed familiar, or the left-hand controller button the moment it is deemed unfamiliar). Future research could also eliminate the button press altogether, using a probe-based methodology instead; while this would
eliminate the ability to assess the experience of familiarity the moment it occurs
for a person, it would allow for an ability to assess whether differences between
scenes that elicited a sense of familiarity and scenes that did not can be detected
by machine learning algorithms.
Although the study’s sample sizes were based on prior behavioral research
using these 2D and more immersive 3D VR methodologies [30], there was no
precedent for computing the needed sample size for eye gaze data from these
paradigms. Thus, the sample size may have been relatively small for eye gaze
data, particularly in the more immersive 3D VR experiment, which could affect
the generalizability of the findings. Further research could also explore the inte-
gration of other physiological measures, like heart rate or skin conductance [5],
to enrich the detection of cognitive states.
Finally, the largest question that looms from this work is the feasibility of
distinguishing distinct types of internal states. Ideally, an intelligent tutor will
be able to identify not only that a user is experiencing an internal state but
also determine what specific internal state is occurring, e.g., familiarity, curios-
ity, tip-of-the-tongue states, or mind wandering. Future research should aim to distinguish among these different internal states.
6 Conclusion
In conclusion, this study demonstrates the potential of using eye tracking tech-
nology to detect a person’s subjective sense of familiarity, an important cognitive
state. While there are challenges to be addressed, the findings lay a foundation
for future research and practical applications in HCI and cognitive science.
7 Acknowledgements
This work was partially supported by the National Science Foundation under
award 2303019 and under subcontracts on award DRL 2019805. The views ex-
pressed are those of the authors and do not reflect the official policy or position
of the U.S. Government. The College of Natural Sciences at Colorado State Uni-
versity provided a grant to purchase the HTC Vive Pro Eye VR system and
Colorado State University’s Data Science Research Institute provided funding
for undergraduate researchers.
References
1. Vive Eye and Facial Tracking SDK 1.3.6.8, https://developer.vive.com/resources/vive-sense/eye-and-facial-tracking-sdk/download/latest/
2. Bergstra, J., Yamins, D., Cox, D.: Making a science of model search: Hy-
perparameter optimization in hundreds of dimensions for vision architectures.
In: Dasgupta, S., McAllester, D. (eds.) Proceedings of the 30th Interna-
tional Conference on Machine Learning. Proceedings of Machine Learning Re-
search, vol. 28, pp. 115–123. PMLR, Atlanta, Georgia, USA (17–19 Jun 2013),
https://proceedings.mlr.press/v28/bergstra13.html
3. Bixler, R., Blanchard, N., Garrison, L., D’Mello, S.: Automatic detection
of mind wandering during reading using gaze and physiology. In: ICMI
’15: Proceedings of the 2015 ACM on International Conference on Multi-
modal Interaction. pp. 299–306. Association for Computing Machinery (2015).
https://doi.org/10.1145/2818346.2820742
4. Bixler, R., D’Mello, S.: Automatic gaze-based user-independent detection of mind
wandering during computerized reading. User Modeling and User-Adapted Interac-
tion 26, 33–68 (2015). https://doi.org/10.1007/s11257-015-9167-1
5. Blanchard, N., Bixler, R., Joyce, T., D’Mello, S.: Automated physiological-based
detection of mind wandering during learning. In: Intelligent Tutoring Systems: 12th
International Conference, ITS 2014, Honolulu, HI, USA, June 5-9, 2014. Proceed-
ings 12. pp. 55–60. Springer International Publishing (2014)
6. Brown, A.S., Marsh, E.J.: Evoking false beliefs about autobiographical experience. Psychonomic Bulletin & Review 15 (2008). https://doi.org/10.3758/PBR.15.1.186
7. Castillon, I., Krishnaswamy, N., Blanchard, N.: Multimodal features for group
dynamic-aware agents. In: Interdisciplinary Approaches to Getting AI Experts and
Education Stakeholders Talking Workshop at AIEd. (2022)
8. Christoff, K., Gordon, A.M., Smallwood, J., Smith, R., Schooler, J.W.: Experience sampling during fMRI reveals default network and executive system contributions to mind wandering. Proceedings of the National Academy of Sciences of the United States of America (2009). https://doi.org/10.1073/pnas.0900234106
9. Cleary, A.M.: Recognition memory, familiarity, and déjà vu experiences. Current Directions in Psychological Science 17(5) (2008). https://doi.org/10.1111/j.1467-8721.2008.00605.x
10. Cleary, A.M., Brown, A.S., Sawyer, B.D., Nomi, J.S., Ajoku, A.C., Ryals, A.J.:
Familiarity from the configuration of objects in 3-dimensional space and its relation
to déjà vu: A virtual reality investigation. Consciousness and Cognition 21(2)
(2012). https://doi.org/10.1016/j.concog.2011.12.010
11. Cleary, A.M., Claxton, A.B.: Déjà vu: An illusion of prediction. Psychological Science 29(4), 635–644 (2018). https://doi.org/10.1177/0956797617743018. PMID: 29494276
12. Cleary, A.M., Irving, Z.C., Mills, C.: What flips attention? Cognitive Science 47(4),
e13274 (2023)
13. Cleary, A.M., Ryals, A.J., Nomi, J.S.: Can déjà vu result from similarity to a prior
experience? Support for the similarity hypothesis of déjà vu. Psychonomic Bulletin & Review 16 (2009). https://doi.org/10.3758/PBR.16.6.1082
14. Cohen, J.: A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1), 37–46 (1960)
15. D’Mello, S., Cobian, J., Hunter, M.: Automatic gaze-based detection of mind wan-
dering during reading. In: Educational Data Mining 2013 (2013)
16. Donders, F.: On the speed of mental processes. Acta Psychologica 30, 412–431 (1969). https://doi.org/10.1016/0001-6918(69)90065-1
17. Faber, M., Bixler, R., D’Mello, S.K.: An automated behavioral measure of mind wandering during computerized reading. Behavior Research Methods 50, 134–150 (2017). https://doi.org/10.3758/s13428-017-0857-y
18. Nishimura, G., Faisal, A.: Déjà vu: Classification of memory using eye movements (2015)
19. Ghose, U., Srinivasan, A.A., Boyce, W.P., Xu, H., Chng, E.S.: PyTrack: An end-to-end analysis toolkit for eye tracking. Behavior Research Methods 52, 2588–2603 (2020). https://doi.org/10.3758/s13428-020-01392-6
20. Hutt, S., Hardey, J., Bixler, R.E., Stewart, A.E.B., Risko, E.F., D’Mello, S.K.:
Gaze-based detection of mind wandering during lecture viewing. In: Educational
Data Mining (2017), https://api.semanticscholar.org/CorpusID:1144340
21. Hutt, S., Krasich, K., Mills, C., Bosch, N., White, S., Brockmole, J.R., D’Mello,
S.K.: Automated gaze-based mind wandering detection during computerized learn-
ing in classrooms. User Modeling and User-Adapted Interaction 29, 821–867
(2019). https://doi.org/10.1007/s11257-019-09228-5
22. Hutt, S., Mills, C., White, S., Donnelly, P.J., D’Mello, S.K.: The eyes have it: Gaze-
based detection of mind wandering during learning with an intelligent tutoring
system. International Educational Data Mining Society (2016)
23. Kuvar, V., Blanchard, N., Colby, A., Allen, L., Mills, C.: Automatically
detecting task-unrelated thoughts during conversations using keystroke anal-
ysis. User Modeling and User-Adapted Interaction pp. 617–641 (2023). https://doi.org/10.1007/s11257-022-09340-z
24. Kuvar, V., Kam, J.W.Y., Hutt, S., Mills, C.: Detecting when the mind wanders
off task in real-time: An overview and systematic review. ICMI ’23: Proceedings of
the 25th International Conference on Multimodal Interaction pp. 163–173 (2023).
https://doi.org/10.1145/3577190.3614126
25. Mandler, G.: Familiarity breeds attempts: A critical review of dual-process
theories of recognition. Perspectives on Psychological Science 3(5) (2008).
https://doi.org/10.1111/j.1745-6924.2008.00087.x
26. McNeely-White, K.L., Cleary, A.M.: Piquing curiosity: Déjà vu-like states are as-
sociated with feelings of curiosity and information-seeking behaviors. Journal of
Intelligence 11(6), 112 (2023)
27. Metcalfe, J., Kennedy-Pyers, T., Vuorre, M.: Curiosity and the desire for agency:
wait, wait. . . don’t tell me! Cognitive Research: Principles and Implications 6, 1–8
(2021)
28. Mills, C., Bixler, R.E., Wang, X., D’Mello, S.K.: Automatic gaze-based detec-
tion of mind wandering during film viewing. In: Educational Data Mining (2016),
https://api.semanticscholar.org/CorpusID:33070209
29. Nishimura, G., Faisal, A.: Déjà vu: Classification of memory using eye movements
(2015)
30. Okada, N.S., McNeely-White, K.L., Cleary, A.M., Carlaw, B.N., Drane, D.L., Par-
sons, T.D., McMahan, T., Neisser, J., Pedersen, N.P.: A virtual reality paradigm
with dynamic scene stimuli for use in memory research. Behavior Research Meth-
ods pp. 1–24 (2023)
31. Oulasvirta, A., Kim, S., Lee, B.: Neuromechanics of a button press. In:
Proceedings of the 2018 CHI Conference on Human Factors in Com-
puting Systems. pp. 1–13. CHI ’18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3173574.3174082
32. Rowland, C.A.: The effect of testing versus restudy on retention: a meta-analytic
review of the testing effect. Psychological Bulletin 140(6), 1432 (2014)
33. Ryals, A.J., Kelly, M.E., Cleary, A.M.: Increased pupil dilation during tip-of-the-
tongue states. Consciousness and Cognition 92, 103152 (2021)
34. Ryals, A.J., Wang, J.X., Polnaszek, K.L., Voss, J.L.: Hippocampal contribution to
implicit configuration memory expressed via eye movements during scene explo-
ration. Hippocampus 25(9), 1028–1041 (2015)
35. Schuetz, I., Fiehler, K.: Eye tracking in virtual reality: Vive pro eye spatial ac-
curacy, precision, and calibration reliability. Journal of Eye Movement Research
15(3) (2022)
36. Seabolt, L.K.: Eye’ve seen this before: Building a gaze data analysis tool for déjà
vu detection (2022)
37. Stewart, A., Bosch, N., Chen, H., Donnelly, P.J., D’Mello, S.K.: Where’s
your mind at? video-based mind wandering detection during film viewing.
In: Proceedings of the 2016 Conference on User Modeling Adaptation and
Personalization. pp. 295–296. UMAP ’16, Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2930238.2930266
38. Imaoka, Y., Flury, A., de Bruin, E.D.: Assessing saccadic eye movements with head-mounted display virtual reality technology. Frontiers in Psychiatry 11, 572938 (2020). https://doi.org/10.3389/fpsyt.2020.572938