
Not all objects are created equal: greater visual working memory for real-world objects is related to item memorability


Rosa E. Torres¹, Mallory S. Dupree¹, Naomi Molokwu², Karen L. Campbell¹,², Stephen M. Emrich¹,²
¹Department of Psychology, Brock University
²Centre for Neuroscience, Brock University

Funding: NSERC 2019-04865 (S.M.E.) and 2017-03804 (K. L. C.)

Data & Software Availability: Stimulus presentation software, analytic software, and raw data
associated with this project can be accessed at: https://osf.io/n65j9/

Acknowledgements: We would like to thank Justin Wang for help with task programming, and Chris Keightley and Chae Lynn Bush for help with stimulus selection.
Abstract

Visual working memory is thought to have a fixed capacity limit. However, recent evidence
suggests that capacity is greater for real-world objects compared to simple features (i.e.,
colors). Here, we examined whether greater working memory for objects was due to greater
memorability. In online samples of young adults, real-world objects were better remembered
than colors, which was attributed to a higher proportion of high-confidence responses (Exp 1).
Memory performance for objects was also improved compared to their scrambled counterparts
(Exp 2), indicating that this benefit is unrelated to visual complexity. Critically, the specific
objects that were likely to be remembered with high confidence were highly correlated across
experiments, consistent with the idea that some objects are more memorable than others.
Object memorability also predicted memory performance between objects within a display (Exp
3). These findings suggest that the object benefit in working memory may be supported by
stimulus memorability.
Not all objects are created equal: greater visual working memory for real-world objects is
related to item memorability

Two core questions underlying the study of human memory are: 1) how much information can
be stored and 2) what effect different memoranda have on memory performance. These questions have
been examined in studies of both long-term and working (or short-term) memory, which have revealed
important distinctions between these two systems. With respect to long-term memory, studies have
demonstrated that humans have a seemingly unlimited capacity (e.g., Brady et al., 2008); however, not
all information is equally likely to be remembered. For example, words that are processed in relation to
their meaning (i.e., deeply encoded) are better remembered than those processed in relation to their
perceptual features (Craik & Tulving, 1975). Recent studies have also demonstrated that some stimuli
are intrinsically more memorable than others. That is, in studies of visual objects and faces, some items
are consistently more likely to be remembered than others (Bainbridge et al., 2013; Wakeland-Hart et
al., 2022), a property referred to as memorability.

In contrast to long-term memory, working memory (WM) has a much more limited capacity, though the nature of its capacity limit, and the effect of memoranda on WM
performance, remain in question. In visual WM for example, some prominent models suggest that WM
has a fixed upper-limit on the number of items that can be stored (Awh et al., 2007; Cowan, 2001; Luck
& Vogel, 1997), with estimates of WM capacity around 3 – 4 items. These models also point to evidence
that capacity estimates are similar regardless of whether the to-be-remembered stimuli are simple
features (e.g., color, orientation) or combinations of features (Vogel et al., 2001) as evidence for a fixed
upper-limit on the number of items that can be stored in visual WM.

Other models, however, suggest that WM may not have a fixed upper limit, but may instead
consist of a resource that can be flexibly allocated according to the number of items (Bays & Husain,
2008; Wilken & Ma, 2004), their individual priority (Dube et al., 2017; Emrich et al., 2017), and perhaps
even the type of stimuli (Alvarez & Cavanagh, 2004). Recently, several studies have demonstrated that
the number of real-world objects that can be stored in visual WM is greater than the number of simple
features (Brady & Störmer, 2020). This increased WM for real-world objects is also reflected in a greater
electrophysiological indicator of WM storage (Asp et al., 2021; Brady et al., 2016), suggesting that
greater memory performance is supported by the recruitment of additional neural resources, perhaps
consistent with the recruitment of additional ventral stream regions responsible for object
representation (Stojanoski et al., 2019; for a review, see Clarke & Tyler, 2015).
One possible explanation for the object benefit is that memory for real-world objects may rely on different memory processes than memory for simple features, such as those underlying memorability. Indeed, even with
simple features such as colors or orientations, some features are more likely to be remembered than
others (Bae et al., 2015; Pratte et al., 2017). This increased memorability of some objects may not only
lead to an overall memory benefit, but may also result in a qualitatively distinct kind of memory process
– e.g., knowing that you saw a bulldog rather than recognizing a feature with some amount of error
(e.g., red-ish)– in part due to the increased availability of semantic information. One way to test this is
by having participants rate their confidence in their recognition judgements, which allows for estimates
of recollection and familiarity to be calculated (Jacoby et al., 1989; Tulving, 1985; Yonelinas, 1994). If
memorability contributes to the object benefit, real-world objects should be recollected more often
than simple features, and also exhibit a greater proportion of high-confidence responses. If this greater
memory performance is due to the presence of semantic meaning in real-world objects, then removing
semantic information while leaving visual complexity intact (e.g., through scrambling) should disrupt
the recognizability of real-world objects. Moreover, if objects vary in their memorability, then accuracy
for a given object should be correlated across independent groups of participants, and should also
predict which items are most likely to be recalled within a memory array.

In the current study, we examined WM performance for real-world objects compared to simple
features (Exp 1) and scrambled objects (Exp 2). Memory performance was examined using both high-
confidence responses, as well as estimates of recollection and familiarity from the dual-process signal
detection (DPSD) model (Yonelinas, 1994). We examined this model with the prediction that the greater
memorability of real-world objects would result in a greater proportion of recollection-like responses
than for colors, as a previous study observed that recollection responses were related to
memorability (Bainbridge & Rissman, 2018). Across both experiments, recollection was higher for real-
world objects, and the objects that were most likely to be recognized with high-confidence (i.e., most
memorable) were highly correlated across experiments. We also examined whether this measure of
memorability affected performance within a given display (Exp 3), with object memorability predicting
the relative success of recognition for items within a display. These results suggest improved WM
performance for real-world objects may be attributed in part to the greater memorability of some
objects.

Methods
Some of the methods and analyses of the present study were pre-registered at
https://aspredicted.org/blind.php?x=j5kv3v. All procedures were approved by Brock University’s
Research Ethics Board.

Experiment 1

Participants

All participants were recruited via Prolific, an online participant recruitment platform, and were based in the United States and Canada. Participants were aged 18-30, fluent in English, and reported normal vision. The final data set comprised 50 participants in Experiment 1 (42 females, 1 non-binary; Mage = 22.82) and 50 in Experiment 2 (34 females; ages 18 to 30, Mage = 24.38). Sample size was based on the effect size of 0.57 observed in Brady and Störmer (2020; Experiment 3) and was increased to 50 to account for potential differences in measures between the two tasks. The data of four participants in Experiment 1 and six in Experiment 2 were replaced because these participants failed attention checks.

Stimuli and procedure

The procedure was adapted from Experiment 3 of Brady and Störmer (2020) to include a 6-point
confidence scale (Figure 1). The task consisted of 175 trials (15 of which were practice trials). In the
object condition, participants were presented with six real-world objects, equally spaced around a
central fixation on a white background for 2,000 ms. Each of the six objects was randomly selected from
the 120 object pairs from Brady and Störmer (2020), which were determined to be maximally dissimilar
in object space. After a 700 ms delay period, a probe object was presented centrally, along with a cue
highlighting the probed location. When the test item was old (50% of trials), the original sample item
located in the cued location was presented. On the remaining 50% of trials, a novel object was
presented (i.e., the maximally dissimilar pair object). Participants responded using a 6-point confidence
scale to report whether the test item was old or new, as well as their confidence level for this response
(sure, probably, guess).

The same procedure was used for the color condition, with colored circles in place of real-world
objects. On change trials, the new color was 180° from the original color (see Brady & Störmer, 2020).
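For concreteness, the sketch below illustrates one way a single trial of the object condition could be assembled from the stimulus pairs described above. This is an illustrative Python sketch, not the stimulus presentation software used in the study (which is available on the OSF page); the pair list, variable names, and the assumption that each pair is stored as a (target, foil) tuple are ours.

```python
import random

# Illustrative trial construction for the Experiment 1 object condition,
# assuming each stimulus pair is stored as a (target, foil) tuple of image IDs.
OBJECT_PAIRS = [(f"object_{i}_a", f"object_{i}_b") for i in range(120)]  # 120 maximally dissimilar pairs
SET_SIZE = 6

def make_trial(pairs=OBJECT_PAIRS):
    # Draw six distinct pairs; the first member of each pair is shown at study.
    sampled = random.sample(pairs, SET_SIZE)
    memoranda = [target for target, _ in sampled]

    probed_location = random.randrange(SET_SIZE)   # which of the six positions is cued at test
    is_old = random.random() < 0.5                 # 50% old / 50% new test items

    target, foil = sampled[probed_location]
    probe = target if is_old else foil             # new probes are the maximally dissimilar pair member

    return {
        "memoranda": memoranda,                    # shown for 2,000 ms
        "delay_ms": 700,
        "probed_location": probed_location,
        "probe": probe,
        "correct_answer": "old" if is_old else "new",
    }

print(make_trial())
```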

Analysis
Performance was assessed in two ways. First, data were fit to receiver operating characteristic
(ROC) curves using the ROC Toolbox in MATLAB (Koen et al., 2017). ROCs were fit to the cumulative hit
and false alarm rates obtained from the confidence ratings for each trial, separately for each condition.
This provided a measure of overall memory performance (d-prime). ROC curves were also fit to the dual-
process signal detection (DPSD) model using maximum-likelihood estimation (Heathcote et al., 2006; Yonelinas et al.,
2010a). The DPSD model is based on dual-process models of long-term memory, and divides
performance into recollection (Ro), a measure of the qualitative, all-or-none recall of an old item, and
familiarity (F), a quantitative measure associated with the strength of memory for old items, separate
from recollection. Second, to examine a model-free test of memorability, we also examined raw
responses – i.e., the proportion of hits (high-confidence “old” responses to old items) minus the
proportion of false alarms (high-confidence “old” responses to new lures), as well as a similar metric for
“probably” responses. Paired t-tests were performed across all measures using JASP (JASP Team, 2020).
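The ROC and DPSD fits were performed with the ROC Toolbox in MATLAB, and the t-tests in JASP. As a rough illustration of the measures described above, the Python sketch below computes the cumulative hit and false-alarm rates on which the ROC fits are based, and the model-free corrected high-confidence score (high-confidence hits minus high-confidence false alarms), assuming responses are coded 6 (sure old) through 1 (sure new); this coding and the function names are assumptions for illustration. Under the standard DPSD model, the predicted hit rate at a criterion c is Ro + (1 - Ro)Φ(d' - c) and the false-alarm rate is Φ(-c), where Φ is the standard normal cumulative distribution function; fitting those parameters is left to the toolbox.

```python
import numpy as np

def cumulative_roc_points(conf, is_old):
    """Cumulative hit and false-alarm rates from 6-point confidence ratings.

    conf:   responses coded 6 = sure old ... 1 = sure new
    is_old: boolean array, True where the probe was an old item
    Returns (hit_rates, fa_rates), one point per criterion, most conservative first.
    """
    conf = np.asarray(conf)
    is_old = np.asarray(is_old, dtype=bool)
    hits, fas = [], []
    for criterion in range(6, 1, -1):          # respond "old" if conf >= criterion
        said_old = conf >= criterion
        hits.append(said_old[is_old].mean())
        fas.append(said_old[~is_old].mean())
    return np.array(hits), np.array(fas)

def corrected_high_confidence(conf, is_old):
    """High-confidence 'old' hits minus high-confidence 'old' false alarms."""
    conf = np.asarray(conf)
    is_old = np.asarray(is_old, dtype=bool)
    hc_old = conf == 6
    return hc_old[is_old].mean() - hc_old[~is_old].mean()

# Toy example: 10 trials, 5 old and 5 new
conf   = [6, 6, 5, 2, 6, 1, 2, 6, 3, 1]
is_old = [True, True, True, True, True, False, False, False, False, False]
print(cumulative_roc_points(conf, is_old))
print(corrected_high_confidence(conf, is_old))
```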

Experiment 2

All procedures and analyses were identical to Experiment 1, with the following exceptions: in
place of the color condition, subjects were presented with a Scrambled condition, which contained
scrambled versions of the same objects used in the real-world object condition. The objects were scrambled using a diffeomorphic procedure that renders objects unrecognizable while maintaining low-level stimulus complexity (Stojanoski & Cusack, 2014). Objects were scrambled to 31.25% of the maximum distortion (25/80), exceeding the threshold beyond which objects are no longer recognizable (Stojanoski & Cusack, 2014). The warping procedure produces two versions of each warped object (contracted vs. expanded); if the warping created non-contiguous sections, the least distorted version was selected; otherwise, the version was selected pseudo-randomly. The target and lure stimuli were selected using
the same procedure as the real-world object conditions.
Figure 1. Schematic of the visual working memory tasks with different stimulus types. Participants were
presented with 6 distinct real-world objects (A; in Experiments 1 and 2), colors (B; in Experiment 1) or
scrambled objects (C; in Experiment 2) and asked to remember all items over a delay. At test, participants were presented with a probe in the middle of the screen and had to indicate whether it was the same as or different from (i.e., old or new) the item originally displayed at the cued location, using a 6-point confidence scale (D; sure/probably/guess).

Memorability Analysis

To examine the memorability of each object, we examined the high-confidence responses for all
150 objects separately for Experiments 1 and 2. That is, high-confidence hits and false alarms were
aggregated across all 50 subjects from each experiment separately for all 150 objects. This analysis was
restricted to the first presentation of a given object. This created an accuracy score for each object. To
examine memorability (i.e., the reliability of being remembered across participants), these scores were
then correlated across experiments using Spearman’s Rho.
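As an illustration of this aggregation, the following Python sketch computes a per-object corrected high-confidence recognition score and the cross-experiment Spearman correlation, assuming each experiment's trial data are in a long-format table. The column names ('subject', 'object_id', 'is_old', 'confidence', 'first_presentation') are hypothetical and not taken from the released data files on the OSF page.

```python
import pandas as pd
from scipy.stats import spearmanr

def object_scores(trials: pd.DataFrame) -> pd.Series:
    """Per-object corrected high-confidence recognition, aggregated over subjects.

    Expects columns: 'object_id', 'is_old', 'confidence' (6 = sure old),
    and 'first_presentation' (bool); restricted to first presentations.
    """
    first = trials[trials["first_presentation"]].copy()
    first["hc_old"] = first["confidence"] == 6
    hits = first[first["is_old"]].groupby("object_id")["hc_old"].mean()
    fas = first[~first["is_old"]].groupby("object_id")["hc_old"].mean()
    return (hits - fas).rename("corrected_recognition")

def memorability_consistency(exp1: pd.DataFrame, exp2: pd.DataFrame):
    """Spearman correlation of per-object scores across the two experiments."""
    scores = pd.concat([object_scores(exp1), object_scores(exp2)], axis=1, join="inner")
    scores.columns = ["exp1", "exp2"]
    rho, p = spearmanr(scores["exp1"], scores["exp2"])
    return rho, p
```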

Experiment 3

Participants

The sample consisted of 147 participants aged 18-30 (90 females, 6 other; Mage = 24.13), split into six groups of 25, with the exception of one group of 22 participants. No participants were removed.
Stimuli and Procedures

Participants were presented with six real-world objects at a time, which were taken from the 120
maximally dissimilar pairs (Brady & Störmer, 2020), creating 20 unique displays. That is, each participant
saw the identical 20 displays containing the same six objects. However, participants were divided into six
groups, with each group being tested on one specific object in each of the 20 unique image display
combinations. This testing method allowed memory performance for each object within each display to
be assessed, without repeating the presented displays or objects. The six items were presented equally
distanced from the centre of the display for 2000 ms, followed by a delay of 700 ms. A two-alternative forced-choice (2AFC) test was presented until response, in which participants had to choose between the
picture originally presented in the display screen and the maximally dissimilar paired foil. Each
participant completed 3 practice trials before the experiment instructions were repeated.
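The logic of this between-group counterbalancing can be sketched as follows. The assignment rule shown (group g is always probed on position g) is an assumption for illustration only; any assignment that covers every display-object combination exactly once across the six groups achieves the same design.

```python
# Sketch of the Experiment 3 testing design: every participant sees the same 20
# six-object displays, but each of six participant groups is probed on a different
# object within each display, so that all 120 display-object combinations are
# tested without repeating probed objects for any participant.

N_DISPLAYS, SET_SIZE, N_GROUPS = 20, 6, 6

def probed_position(group: int, display: int) -> int:
    # Illustrative assignment: group g is always probed on position g.
    # Any Latin-square-style assignment gives the same coverage.
    assert 0 <= group < N_GROUPS and 0 <= display < N_DISPLAYS
    return group

# Across the six groups, every (display, position) cell is probed exactly once.
coverage = {(d, probed_position(g, d)) for g in range(N_GROUPS) for d in range(N_DISPLAYS)}
assert len(coverage) == N_DISPLAYS * SET_SIZE
```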

Analysis

Average accuracy for each object (i.e., each group) was calculated separately for each display.
To identify whether the most memorable objects in a given display were most likely to be remembered
correctly, the two most memorable and two least memorable objects for each of the 20 displays were
identified using the average accuracy from Experiments 1 and 2. A difference score between the most
and least memorable objects was calculated and then averaged across the 20 images. This value was then compared non-parametrically with a null distribution of differences computed from 1000 randomly selected pairs of objects from each display.
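The permutation procedure can be sketched as follows in Python; the array shapes and variable names are illustrative, and the observed difference is evaluated against the 95% interval of the permuted null distribution, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def memorability_permutation_test(display_accuracy, memorable_idx, forgettable_idx, n_perm=1000):
    """Compare the most- vs. least-memorable accuracy gap against random object pairs.

    display_accuracy: array of shape (n_displays, 6) with the mean 2AFC accuracy for
                      each of the six objects in each display (Experiment 3 data).
    memorable_idx / forgettable_idx: for each display, the indices of the two most and
                      two least memorable objects, determined from Experiments 1 and 2.
    Returns the observed mean difference and the permuted null distribution.
    """
    display_accuracy = np.asarray(display_accuracy, dtype=float)
    n_displays = display_accuracy.shape[0]

    # Observed effect: mean (most memorable - least memorable) difference across displays
    observed = np.mean([
        display_accuracy[d, memorable_idx[d]].mean() - display_accuracy[d, forgettable_idx[d]].mean()
        for d in range(n_displays)
    ])

    # Null distribution: difference between two randomly chosen objects in each display
    null = np.empty(n_perm)
    for i in range(n_perm):
        diffs = [
            display_accuracy[d, a] - display_accuracy[d, b]
            for d in range(n_displays)
            for a, b in [rng.choice(6, size=2, replace=False)]
        ]
        null[i] = np.mean(diffs)

    return observed, null
```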

Results

Experiment 1

Participants had better memory sensitivity (as indicated by d-prime) for real-world objects (M =
2.19, SD = 0.65) compared to colors (M = 1.81, SD = 1.20), t(49) = 2.662, p = 0.010, d = 0.376 (Figure 2;
Figure 3A). To further explore this effect, and to examine whether memory for real-world objects was
qualitatively distinct from colors, we fit ROCs to the DPSD model to compare the relative contributions
of recollection and familiarity. Paired samples t-tests revealed that participants were able to recollect
old (Ro) test items significantly more in the real-world object condition (M = 0.32, SD = 0.20) than in the
color condition (M = 0.13, SD = 0.16), t(49) = 6.419, p < .001, d = 0.908 (Figure 3B). Similarly, participants
were able to use familiarity significantly better in the real-world object condition (M = 1.58, SD = 0.53)
than in the color condition (M = 1.28, SD = 0.84), t(49) = 2.467, p = 0.017, d = 0.349 (Figure 3C). Using a
model-free approach revealed a higher proportion of corrected high-confidence “old” responses (see
methods) in the object (M = 0.43, SD = 0.16) than the color (M = 0.22, SD = 0.17) condition, t(49) = 8.980, p
< .001, d = 1.270. By contrast, more “probably” old responses were made in the color condition (M =
0.21, SD = 0.13), compared to the object condition (M = 0.17, SD = 0.09), t(49) = 2.179, p = .034, d =
0.308. No differences were observed in the high-confidence “new” responses, t(49) = -0.856, p = .396.


Figure 2. Group-averaged cumulative receiver-operating characteristic (ROC) curves from the dual-process signal detection model for the real-world object (blue) and color (orange) conditions, based on the data of 50 participants.

Overall, the results of Experiment 1 replicate previous findings (Brady & Störmer, 2020) demonstrating
greater working memory for real-world objects compared to colors. By using a 6-point confidence scale
and examining the DPSD model, we also demonstrate that this effect is attributable to an increase in
recollection for objects, often attributed to a qualitative “episodic” memory process (for a review, see
Yonelinas et al., 2010b), in addition to a greater signal strength (familiarity). Subjects showed
significantly more high-confidence “old” responses for objects compared to colors, but that was not true
for less confident old responses, or for high-confidence “new” responses. These results suggest that
while there is an object benefit in working memory, this effect is driven primarily by high-confidence
“old” responses, suggesting that objects are perhaps more memorable than simple features, even when
controlling for target-lure similarity.

Experiment 2

While Experiment 1 replicated the object benefit in working memory, and demonstrated that
participants may show greater recollection of objects compared to colors, there are two potential
confounds that limit the conclusions of this experiment. First, it is not clear which aspect of real-world objects contributes to this benefit. That is, not only do objects contain more semantic information than colors, but they are also more visually complex. Both complexity and semantics are factors that have
been shown to affect working memory performance (Eng et al., 2005; Hu & Jacobs, 2021; O’Donnell et
al., 2018). Thus, it is possible that the benefit could be due to the increased visual complexity of the
objects, rather than the semantic information present in those stimuli. Second, object stimuli were
limited to a relatively small number of items; by comparison, the color stimuli could be pulled from a
larger pool of potential features, all of which have a great deal of featural overlap. Thus, the difference
in low-level stimulus properties could have resulted in the observed difference in Experiment 1.

Consequently, in Experiment 2, we tested whether working memory performance is better for real objects compared to their scrambled counterparts. We used the diffeomorphic warping technique
described by Stojanoski and Cusack (2014), which, unlike other scrambling approaches, results in similar
levels of low-level neural responses, while removing the recognizability of objects (see Figure 1C). Thus,
we were able to test whether the object benefit was due to semantic information alone, independent of
visual complexity. Moreover, because both conditions were based on the same set of stimuli, the conditions were matched in stimulus complexity and featural overlap.

Results showed that participants had better memory sensitivity (d-prime) for real-world objects
(M = 2.34, SD = 0.97) compared to scrambled objects (M = 1.41, SD = 0.65), t(49)=7.531, p < .001, d =
1.065 (Figure 3D). Examining the parameters of the DPSD model revealed that participants were able to
recollect significantly more old (Ro) real-world objects (M = 0.36, SD = 0.17) compared to the scrambled
objects (M = 0.20, SD = 0.13), t(49)=6.456, p<.001, d = 0.913 (Figure 3E). However, there was no
significant difference in familiarity between the real-world object condition (M = 1.94, SD = 5.25) and the scrambled object condition (M = 0.90, SD = 0.64), t(49) = 1.551, p = 0.173, d = 0.219
(Figure 3F). Using a model-free approach revealed a higher proportion of corrected high-confidence
“old” responses for real-world objects (M = 0.46, SD = 0.16) compared to the scrambled objects
condition (M = 0.28, SD = 0.16), t(49) = 10.598, p < .001. By contrast, no differences were observed in
the “probably” old, t(49) = 1.565, p = .124. Additionally, there was a higher proportion of high
confidence “new” responses in the scrambled objects condition (M = -0.23, SD = 0.18) compared to real-
world objects (M = -0.53, SD = 0.20), t(49) = 9.818, p < .001. These results indicate that the real-world
object benefit persists compared to visually-matched stimuli, and that greater recollection of objects is
driven primarily by semantic information (see Supplemental Results for comparisons of Exp 1 and 2).
Figure 3. Overall sensitivity (d-prime) for Experiment 1 (A) and Experiment 2 (D), recollection estimates for Experiment 1 (B) and Experiment 2 (E), and familiarity estimates for Experiment 1 (C) and Experiment 2 (F) from the dual-process signal detection model. *p < .05, ***p < .001
Memorability Analysis

The results of Experiments 1 and 2 suggest that greater working memory performance for real-world objects may be due to greater memorability, as indicated by the greater rates of recollection and
high-confidence responses compared to colors or scrambled objects. It is possible, however, that not all
objects contribute to this effect; that is, past studies have demonstrated that some objects and faces are
highly memorable across individuals, whereas others are more “forgettable” (Bainbridge et al., 2013;
Wakeland-Hart et al., 2022).

To test this effect, we examined corrected-recognition scores for all 240 objects from
Experiment 1 and correlated them with the recognition scores from Experiment 2. This analysis revealed
a significant correlation in the likelihood of being remembered across experiments, Spearman’s ρ = .573,
p < .01 (Figure 4). In other words, objects that were most likely to be remembered in Experiment 1
showed similar recognition in Experiment 2, suggesting that some objects are more memorable than
others. Indeed, examining which objects were most likely to be remembered revealed that the most
memorable objects were animals (9 of the top 10), consistent with previous work showing that
categories like animals are most likely to be remembered in long-term memory (Kramer et al., 2022). By
contrast, the least memorable objects tended to be man-made objects that did not belong to a clear
category (see Supplemental Table 1 for a full list of the objects).
Figure 4. Correlation between real-world objects’ high confidence ratings across Experiments 1 and 2.
Spearman’s ρ = .573, p < .01.

Experiment 3

The results of Experiments 1 and 2, as well as the memorability analysis, suggest that working
memory performance for real-world objects is improved relative to memory for simple features (or
complex stimuli lacking semantic information) because some objects are simply more memorable,
resulting in more high-confidence, episodic-like recognition of those objects. If true, this would suggest
that not all objects in a given display would be remembered equally well – that is, within a display of six
objects, more memorable objects should be more likely to be recalled, consistent with the proposal that
memorability, not objectness per se, leads to improved working memory performance.

To examine this question, Experiment 3 used a two-alternative forced choice task (similar to
Brady and Stormer, 2020), in which the configuration of the sample displays remained constant across
participants, and six different groups of participants were each tested separately on one of the six items
in the display. This resulted in a recognition score for each of the six items in each display, which could
be compared based on the memorability of each item. That is, we calculated a recognition difference
score between the most-recognizable and least-recognizable objects for all 20 displays, where
recognizability was determined using the data from Experiments 1 and 2. We then compared this mean
difference (Δ = .2042) to a permuted distribution of 10000 differences between two randomly selected items in the same display. The resulting 95% confidence interval of this distribution was -0.2 to -.1869, suggesting
that the memory performance between the most- and least-memorable objects in a display was
significantly greater than chance. In other words, the fact that memorability varies across objects can
predict performance between items within a display.

Discussion

In the current study, we investigated the potential mechanisms underlying the object benefit in
visual working memory. Specifically, in the first two experiments we used a six point confidence rating,
along with the DPSD model, to examine whether greater memory performance for objects could be
attributed to greater recollection (or high-confidence ‘old’ responses) for objects compared to other
stimuli. In Experiment 1, we replicated the findings of Brady and Störmer (2020), demonstrating better
overall memory performance for real-world objects compared to colors (as measured with d-prime).
Importantly, real-world objects also produced a greater proportion of high-confidence old responses,
and greater estimates of recollection, suggesting that objects are more memorable than colors. In
Experiment 2, we found that there is also a real-world object benefit when compared to their scrambled
counterparts, along with greater measures of recollection. Thus, the object benefit cannot be attributed
to differences in visual complexity, but rather appears to be driven by the recognizability of intact
objects.

Critically, we also observed a strong correlation between the most recognized objects across the
first two experiments, indicating that the objects most likely to be correctly remembered were consistent across independent samples. This is consistent with a number of recent studies of long-term memory,
which indicate that certain stimuli (e.g., photographs, faces) are highly memorable, and are consistently
remembered across individuals, whereas other stimuli are much more forgettable (Bainbridge et al.,
2013; Wakeland-Hart et al., 2022). Our results indicate that the property of memorability also extends
to visual working memory – at least for real-world objects. That is, contrary to models of working
memory that suggest that memory capacity is fixed, regardless of the stimuli, our results suggest that
memory “capacity” depends greatly on the intrinsic properties of the to-be-remembered stimulus.
Importantly, memorability has been found to be independent of other factors such as attention and
priming (Bainbridge, 2020), suggesting that this effect may be primarily related to the stimulus
properties.

Interestingly, our results suggest that some items from certain categories are more memorable
than others. For example, several of the most memorable objects in our study were animals (e.g., dogs,
buffalos, zebras). This is consistent with recent findings that examined the properties that make certain images more memorable than others, demonstrating that images with certain features,
such as body parts and animals, were much more memorable than images with other features, such as
metal objects and tools (Kramer et al., 2022). Analyzing the specific features of the images also revealed
that these effects were driven primarily by semantic information rather than visual details. This is
consistent with our observation that working memory performance was better for intact compared to
scrambled objects, suggesting semantic information, rather than the visual features, may be driving the
object benefit in visual working memory.

What neural mechanisms might underlie the memorability-driven object benefit in visual
working memory? Previous studies have observed greater contralateral delay activity – an event-related potential associated with working memory maintenance – in response to real-world objects and ambiguous objects perceived as meaningful (Asp et al., 2021; Brady & Störmer, 2020; Quirk et al., 2020).
It is unclear, however, whether this effect is driven by memorability per se, or by other factors (e.g.,
attentional prioritization; Emrich et al., 2022; Salahub et al., 2019). Alternatively, the consistency of
findings between our study and those investigating memorability in long-term memory suggests that
they may depend on similar processes. Specifically, neuroimaging studies suggest that memorability
may be associated with activation of later perceptual regions of the ventral visual stream, along with the
anterior temporal lobes (Bainbridge et al., 2017; Martin et al., 2018; Xie et al., 2020). This is consistent
with greater recruitment of ventral visual areas observed during working memory encoding for real-
world objects compared to their scrambled versions (Stojanoski & Cusack, 2014). The activation across a
wider set of ventral stream regions may result in more distinguishable patterns across memoranda, and
as a result, a more robust memory signal.

More generally, the contribution of memorability and semantic information to working memory
performance for real-world objects suggests that long-term memory representations may contribute to
working memory (Brady & Störmer, 2022; Bruning & Lewis-Peacock, 2020; Cowan, 2001; Liu et al., 2020;
Xie et al., 2023). Recent research has suggested that working memory accesses visual long-term memory
when objects are more familiar and perceptual interference is low (Schurgin et al., 2018). Indeed, long-term memory representations may help support working memory even for simple stimuli, such as by categorizing colors (Bae et al., 2015). Thus, even simple stimuli can rely on representations from long-
term memory. Behaviourally, visual working memory and long-term memory also have similar levels of
fidelity for real-world objects. Specifically, both memory systems can store information of real-world
objects to high, and comparable, degrees of precision (Brady et al., 2013), especially after repeated
encoding opportunities for items in long-term memory tasks (Miner et al., 2020). Moreover, VWM
performance predicts subsequent LTM performance, suggesting that VWM may limit the encoding of
LTM (Fukuda & Vogel, 2019). Interestingly, the relationship between VWM and LTM is predicted by
stimulus memorability (Gillies et al., 2022), although the direction of this relationship remains unclear.
Regardless, the results speak to the important bi-directional relationship between these memory
systems.

In Experiment 3, we also demonstrated that object memorability is predictive of performance relative to other objects within a display. That is, items that are most memorable in absolute terms are
more likely to be remembered relative to other items in the same display, consistent with one recent
observation (Gillies et al., 2022). This perhaps suggests that memorability may influence VWM at the
encoding stage, consistent with neuroimaging results (Stojanoski et al., 2019). Moreover, it also suggests
that controlling for memorability within a display may be as critical to observing the object benefit as
encoding duration or target-lure similarity (Brady & Störmer, 2022). In other words, randomly sampling
objects may under- or over-estimate memory performance for objects, if memorability is not
considered.

In conclusion, we replicated the object benefit in working memory, and determined that this
object benefit also exists relative to perceptually rich but meaningless items (i.e., scrambled objects),
suggesting a critical role for semantic information. Critically, we demonstrated that the object benefit
may be due to object memorability, as some objects were more likely to be remembered than others.
The reliance on semantic information as opposed to visual features, as well as the memorability of some
objects over others, suggests that the object benefit may depend on the recruitment of late-perceptual
and long-term memory systems in the temporal lobe. These findings are contrary to models which
suggest that visual working memory capacity is fixed, and instead suggest that the recruitment of
additional long-term memory resources may help to support WM.
References

Alvarez, G. A., & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15(2), 106–111. https://doi.org/10.1111/j.0963-7214.2004.01502006.x
Asp, I. E., Störmer, V. S., & Brady, T. F. (2021). Greater visual working memory capacity for visually matched stimuli when they are perceived as meaningful. Journal of Cognitive Neuroscience, 33(5), 902–918. https://doi.org/10.1162/jocn_a_01693
Awh, E., Barton, B., & Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity. Psychological Science, 18(7), 622–628. https://doi.org/10.1111/j.1467-9280.2007.01949.x
Bae, G.-Y., Olkkonen, M., Allred, S. R., & Flombaum, J. I. (2015). Why some colors appear more memorable than others: A model combining categories and particulars in color working memory. Journal of Experimental Psychology: General, 144(4), 744–763. https://doi.org/10.1037/xge0000076
Bainbridge, W. A. (2020). The resiliency of image memorability: A predictor of memory separate from attention and priming. Neuropsychologia, 141, 107408. https://doi.org/10.1016/j.neuropsychologia.2020.107408
Bainbridge, W. A., Dilks, D. D., & Oliva, A. (2017). Memorability: A stimulus-driven perceptual neural signature distinctive from memory. NeuroImage, 149, 141–152. https://doi.org/10.1016/j.neuroimage.2017.01.063
Bainbridge, W. A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face photographs. Journal of Experimental Psychology: General, 142(4), 1323–1334. https://doi.org/10.1037/a0033872
Bainbridge, W. A., & Rissman, J. (2018). Dissociating neural markers of stimulus memorability and subjective recognition during episodic retrieval. Scientific Reports, 8(1), 8679. https://doi.org/10.1038/s41598-018-26467-5
Bays, P. M., & Husain, M. (2008). Dynamic shifts of limited working memory resources in human vision. Science, 321(5890), 851–854. https://doi.org/10.1126/science.1158023
Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences, 105(38), 14325–14329. https://doi.org/10.1073/pnas.0803390105
Brady, T. F., Konkle, T., Gill, J., Oliva, A., & Alvarez, G. A. (2013). Visual long-term memory has the same limit on fidelity as visual working memory. Psychological Science, 24(6), 981–990. https://doi.org/10.1177/0956797612465439
Brady, T. F., & Störmer, V. S. (2020). Comparing memory capacity across stimuli requires maximally dissimilar foils: Using deep convolutional neural networks to understand visual working memory capacity for real-world objects [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/25t76
Brady, T. F., & Störmer, V. S. (2022). The role of meaning in visual working memory: Real-world objects, but not simple features, benefit from deeper processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(7), 942–958. https://doi.org/10.1037/xlm0001014
Brady, T. F., Störmer, V. S., & Alvarez, G. A. (2016). Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli. Proceedings of the National Academy of Sciences, 113(27), 7459–7464. https://doi.org/10.1073/pnas.1520027113
Bruning, A. L., & Lewis-Peacock, J. A. (2020). Long-term memory guides resource allocation in working memory. Scientific Reports, 10(1), 22161. https://doi.org/10.1038/s41598-020-79108-1
Clarke, A., & Tyler, L. K. (2015). Understanding what we see: How we derive meaning from vision. Trends in Cognitive Sciences, 19(11), 677–687. https://doi.org/10.1016/j.tics.2015.08.008
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114. https://doi.org/10.1017/S0140525X01003922
Craik, F. I. M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104(3), 268–294. https://doi.org/10.1037/0096-3445.104.3.268
Dube, B., Emrich, S. M., & Al-Aidroos, N. (2017). More than a filter: Feature-based attention regulates the distribution of visual working memory resources. Journal of Experimental Psychology: Human Perception and Performance, 43(10), 1843–1854. https://doi.org/10.1037/xhp0000428
Emrich, S. M., Lockhart, H. A., & Al-Aidroos, N. (2017). Attention mediates the flexible allocation of visual working memory resources. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1454–1465. https://doi.org/10.1037/xhp0000398
Emrich, S. M., Salahub, C., & Katus, T. (2022). Sensory delay activity: More than an electrophysiological index of working memory load. Journal of Cognitive Neuroscience, 35(1), 135–148. https://doi.org/10.1162/jocn_a_01922
Eng, H. Y., Chen, D., & Jiang, Y. (2005). Visual working memory for simple and complex visual stimuli. Psychonomic Bulletin & Review, 12(6), 1127–1133. https://doi.org/10.3758/BF03206454
Fukuda, K., & Vogel, E. K. (2019). Visual short-term memory capacity predicts the “bandwidth” of visual long-term memory encoding. Memory & Cognition, 47(8), 1481–1497. https://doi.org/10.3758/s13421-019-00954-0
Gillies, G., Park, H. G., Woo, J., Bernhardt-Walther, D., Cant, J., & Fukuda, K. (2022). Tracing the emergence of the memorability benefit [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/bx69s
Heathcote, A., Raymond, F., & Dunn, J. (2006). Recollection and familiarity in recognition memory: Evidence from ROC curves. Journal of Memory and Language, 55(4), 495–514. https://doi.org/10.1016/j.jml.2006.07.001
Hu, R., & Jacobs, R. A. (2021). Semantic influence on visual working memory of object identity and location. Cognition, 217, 104891. https://doi.org/10.1016/j.cognition.2021.104891
Jacoby, L. L., Kelley, C. M., & Dywan, J. (1989). Memory attributions. In Varieties of memory and consciousness (1st ed., pp. 391–422). Psychology Press. https://doi.org/10.4324/9781315801841
Koen, J. D., Barrett, F. S., Harlow, I. M., & Yonelinas, A. P. (2017). The ROC Toolbox: A toolbox for analyzing receiver-operating characteristics derived from confidence ratings. Behavior Research Methods, 49(4), 1399–1406. https://doi.org/10.3758/s13428-016-0796-z
Kramer, M. A., Hebart, M. N., Baker, C. I., & Bainbridge, W. A. (2022). The features underlying the memorability of objects [Preprint]. bioRxiv. https://doi.org/10.1101/2022.04.29.490104
Liu, J., Zhang, H., Yu, T., Ni, D., Ren, L., Yang, Q., Lu, B., Wang, D., Heinen, R., Axmacher, N., & Xue, G. (2020). Stable maintenance of multiple representational formats in human visual short-term memory. Proceedings of the National Academy of Sciences, 117(51), 32329–32339. https://doi.org/10.1073/pnas.2006752117
Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390(6657), 279–281. https://doi.org/10.1038/36846
Martin, C. B., Douglas, D., Newsome, R. N., Man, L. L., & Barense, M. D. (2018). Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream. eLife, 7, e31873. https://doi.org/10.7554/eLife.31873
Miner, A. E., Schurgin, M. W., & Brady, T. F. (2020). Is working memory inherently more “precise” than long-term memory? Extremely high fidelity visual long-term memories for frequently encountered objects. Journal of Experimental Psychology: Human Perception and Performance, 46(8), 813–830. https://doi.org/10.1037/xhp0000748
O’Donnell, R. E., Clement, A., & Brockmole, J. R. (2018). Semantic and functional relationships among objects increase the capacity of visual working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(7), 1151–1158. https://doi.org/10.1037/xlm0000508
Pratte, M. S., Park, Y. E., Rademaker, R. L., & Tong, F. (2017). Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 43(1), 6–17. https://doi.org/10.1037/xhp0000302
Quirk, C., Adam, K. C. S., & Vogel, E. K. (2020). No evidence for an object working memory capacity benefit with extended viewing time. eNeuro, 7(5), ENEURO.0150-20.2020. https://doi.org/10.1523/ENEURO.0150-20.2020
Salahub, C., Lockhart, H. A., Dube, B., Al-Aidroos, N., & Emrich, S. M. (2019). Electrophysiological correlates of the flexible allocation of visual working memory resources. Scientific Reports, 9(1), 19428. https://doi.org/10.1038/s41598-019-55948-4
Schurgin, M. W., Cunningham, C. A., Egeth, H. E., & Brady, T. F. (2018). Visual long-term memory can replace active maintenance in visual working memory [Preprint]. bioRxiv. https://doi.org/10.1101/381848
Stojanoski, B., & Cusack, R. (2014). Time to wave good-bye to phase scrambling: Creating controlled scrambled images using diffeomorphic transformations. Journal of Vision, 14(12), 6. https://doi.org/10.1167/14.12.6
Stojanoski, B., Emrich, S. M., & Cusack, R. (2019). Representation of semantic information in ventral areas during encoding is associated with improved visual short-term memory [Preprint]. bioRxiv. https://doi.org/10.1101/2019.12.13.875542
Tulving, E. (1985). Memory and consciousness. Canadian Psychology / Psychologie canadienne, 26(1), 1–12. https://doi.org/10.1037/h0080017
Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of features, conjunctions, and objects in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 92–114. https://doi.org/10.1037/0096-1523.27.1.92
Wakeland-Hart, C. D., Cao, S. A., deBettencourt, M. T., Bainbridge, W. A., & Rosenberg, M. D. (2022). Predicting visual memory across images and within individuals. Cognition, 227, 105201. https://doi.org/10.1016/j.cognition.2022.105201
Wilken, P., & Ma, W. J. (2004). A detection theory account of change detection. Journal of Vision, 4(12), 11. https://doi.org/10.1167/4.12.11
Xie, W., Bainbridge, W. A., Inati, S. K., Baker, C. I., & Zaghloul, K. A. (2020). Memorability of words in arbitrary verbal associations modulates memory retrieval in the anterior temporal lobe. Nature Human Behaviour, 4(9), 937–948. https://doi.org/10.1038/s41562-020-0901-2
Xie, W., Chapeton, J. I., Bhasin, S., Zawora, C., Wittig, J. H., Inati, S. K., Zhang, W., & Zaghloul, K. A. (2023). The medial temporal lobe supports the quality of visual short-term memory representation. Nature Human Behaviour. https://doi.org/10.1038/s41562-023-01529-5
Yonelinas, A. P. (1994). Receiver-operating characteristics in recognition memory: Evidence for a dual-process model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1341–1354. https://doi.org/10.1037/0278-7393.20.6.1341
Yonelinas, A. P., Aly, M., Wang, W.-C., & Koen, J. D. (2010a). Recollection and familiarity: Examining controversial assumptions and new directions. Hippocampus, 20(11), 1178–1194. https://doi.org/10.1002/hipo.20864
Yonelinas, A. P., Aly, M., Wang, W.-C., & Koen, J. D. (2010b). Recollection and familiarity: Examining controversial assumptions and new directions. Hippocampus, 20(11), 1178–1194. https://doi.org/10.1002/hipo.20864
