Micro-Psychokinesis: Exceptional or Universal?
ABSTRACT
Most psychokinesis studies fall within either an elitist research tradition involving
exceptional participants and focusing on directly perceptible macro-PK effects or a
universalist approach exploring subtle micro-PK effects through massive data
collection from unselected participants. However, Helmut Schmidt’s highly
significant body of micro-PK research was mostly elitist, involving intensive work
with small numbers of selected individuals. We contrast his approach to the PEAR
laboratory’s approach that incorporated nearly 100 unselected participants and a
highly standardized protocol. Although PEAR’s 12-year benchmark study did
produce significant cumulative results, a carefully designed replication, involving
three laboratories, was nonsignificant. We argue that this apparent failure to replicate
was due to the erroneous assumption that the PEAR data were homogeneous across
participants when in fact they were dominated by two extreme outliers that
contributed nearly a quarter of the total data. By ignoring this, the replication overestimated the effect size of the original study and underestimated the power needed to replicate. We conclude that research generally supports the view that micro-PK is not widely distributed but is exceptional, and that it is unproductive to attempt to tease extremely weak effects out of unselected volunteers.
Like Schmidt, investigators should focus on optimizing testing conditions and work
intensely with selected participants.
Schmidt’s Research
Helmut Schmidt is rightfully considered the “father” of micro-PK RNG research: He was
the first to introduce a practical hardware RNG for psi studies, was a highly prolific investigator
over the course of three decades, played a major role in conceptualizing and modeling the
phenomena, and produced by far the strongest and most consistent results in the field. Though a
number of other researchers had considerable success with RNG-PK studies, Schmidt’s
contribution was clearly exceptional. In our review of his work we found 22 experimental
publications containing 50 independent studies, of which three-quarters reported significance (p < .05) and nearly half had z scores above 3 (Varvoglis & Bancel, 2015). Even if we admit some ambiguity in
determining the number of independent studies in experiments that used different devices or
participant groups, any tally of the combined significance of Schmidt’s work leads to
astronomical odds against the null hypothesis.
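To make the phrase "astronomical odds" concrete, one can ask how likely it would be, by chance alone, for roughly three-quarters of 50 independent studies to reach p < .05. The short sketch below computes this tail probability under the simplifying assumption that the studies are independent and each has exactly a 5% chance of reaching significance under the null; the study counts come from our review, but the calculation itself is only an illustration, not a formal meta-analysis.

```python
from scipy.stats import binom

n_studies = 50        # independent studies identified in our review
n_significant = 37    # roughly three-quarters reported p < .05
p_chance = 0.05       # chance of a single study reaching p < .05 under the null

# Probability of observing at least n_significant "hits" by chance alone
tail_prob = binom.sf(n_significant - 1, n_studies, p_chance)
print(f"P(>= {n_significant} of {n_studies} studies significant by chance) = {tail_prob:.2e}")
```

The result is many dozens of orders of magnitude below any conventional threshold, which is the sense in which the combined odds against the null are astronomical.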
Why was Schmidt so phenomenally successful with RNG studies? To begin with, a close
reading of his reports reveals a highly intuitive approach to the psychological facets of micro-PK
research and a keen sense of how best to work with individuals. Indeed, his stance with regard to
testing for micro-PK was straightforward and practical: Psi is neither egalitarian nor available on
demand, and experiments should be run to proactively track it down and encourage its
emergence. Among the strategies employed, foremost was the selection of people with
established success in micro-PK tests. He sought out and then tested mediums, psychics, and
people who reported extraordinary experiences. In studies with larger participant pools,
selection was frequently based on systematic preliminary tests. Schmidt was capable of investing
months of his time preparing for a single experiment, testing many dozens of people before
settling on a handful for the experiment:
For my own experiments, I found it inefficient to gather data from a very large number
of people, because poor scores of the majority tend to dilute the effect of the successful
performers. Therefore I pre-selected promising subjects, and then used these subjects
immediately in a subsequent formal experiment with a specified number of trials.
Unfortunately, the process of locating and pre-selecting promising subjects is time
consuming and often frustrating. (Schmidt, 1987, p. 105)
Besides adopting this selection strategy, Schmidt was particularly careful to provide an
inviting and friendly environment for participants. In some cases, he would arrange to do
experiments in people's homes and make himself available on short notice should volunteers find
themselves well-disposed for a session. Participants could also postpone a session if they did not
feel ready, and they were also given latitude in deciding on preferred feedback. In some
instances, a session would be initiated only after a preliminary “warm-up” test was successful,
and volunteers were generally encouraged to set their own pace and take breaks or chat with an
experimenter if they felt tired or bored. Schmidt indeed allowed for variable contributions from
individual participants, for the interruption of sessions, and even for participants to be dropped
from an experiment if performance lagged. This stopping was a methodologically sound
procedure because Schmidt set the total number of trials (as opposed to the total number of
participants) in advance.
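A small simulation can illustrate why fixing the total number of trials in advance keeps this practice sound: under the null hypothesis every trial is an unbiased coin flip, so dropping a lagging participant and letting others contribute the remaining trials does not inflate the false-positive rate. The sketch below is our own illustration, not a reconstruction of Schmidt's actual procedure; the block size, drop rule, and trial counts are arbitrary.

```python
import random

def run_experiment(total_trials=2000, block=100, drop_threshold=0.45):
    """One simulated null experiment: a fixed total of fair-coin trials,
    contributed block by block. A participant whose interim hit rate falls
    below drop_threshold is replaced, but all collected trials stay in the tally."""
    hits = collected = 0
    p_hits = p_trials = 0
    while collected < total_trials:
        n = min(block, total_trials - collected)
        b = sum(random.random() < 0.5 for _ in range(n))  # fair RNG: no psi anywhere
        hits += b
        collected += n
        p_hits += b
        p_trials += n
        if p_hits / p_trials < drop_threshold:  # data-dependent drop rule
            p_hits = p_trials = 0               # switch to a fresh participant
    return hits, collected

# Check the false-positive rate of a one-sided z test at the nominal 5% level.
n_sims = 2000
false_pos = 0
for _ in range(n_sims):
    hits, n = run_experiment()
    z = (hits - 0.5 * n) / (0.25 * n) ** 0.5  # binomial z statistic for the full tally
    false_pos += z > 1.645
print(f"Empirical false-positive rate: {false_pos / n_sims:.3f} (nominal .05)")
```

Because the total trial count is fixed and every collected trial counts, the final tally is binomially distributed under the null regardless of when participants are switched, and the empirical false-positive rate stays at the nominal level.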
Thus, besides his general tendency to select promising participants, a second explanation
of his results is that he was simply a very good experimenter. Whether tacitly or explicitly,
Schmidt understood the psychology of getting results through skillful creation of good
psychological conditions and flexibility in hypothesis testing (e.g., favoring psi-missing rather
than psi-hitting when circumstances seemed to call for this). His personal investment in RNG
research, his creativity in hypothesis testing, and his sheer perseverance over the course of three
decades may have honed his ability to tease out effects that are subtle and difficult to reproduce,
but quite real.
This brings us to a third way in which his research was elitist. Besides selecting for
talented participants and seeking to create psi-conducive testing conditions for them, Schmidt
was a highly gifted micro-PK subject himself. His basic interest was to study the underlying
principles of micro-PK and address questions of temporality, causality, and the goal-oriented
nature of psi. To do so, he needed strong effects—and he occasionally used himself as a subject,
having discovered that he was often as reliable in obtaining positive results as his participants. Of
course, if this was the case, there is little reason to suppose that his psi skills only emerged when
he intentionally evoked them. Parapsychologists (including Schmidt himself) have suggested
different channels through which experimenter psi may manifest in micro-PK experiments: direct
(albeit unintentional) action on the RNGs during testing, retroactive effects during data analysis
(Weiner & Zingrone, 1986, 1989), or numerous intuitive decisions that tacitly guide the
experimenter's sampling of the RNG (May, Utts, & Spottiswoode, 1995). Whatever the channel, it seems likely that Schmidt's striking success as an experimenter was partly related to his talent as a psi subject, a possibility that considerably challenges both the generalizability of his results and their replicability across laboratories.
Table 1. Relative effect sizes of the outlier participants in the PEAR benchmark experiment.
Columns are as follows. µ: effect size as the mean HI-LO deviation in bits per trial; σ:
theoretical standard deviation of µ; z: the z statistic for µ (z=µ/σ); N: number of 200-bit trials per
intention; Δ: the absolute HI-LO mean shift in bits. The standard deviation of the HI-LO mean
shift, σ, is given by σ = √(100/N).
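For readers who wish to trace the table's quantities, the sketch below computes µ, σ, and z directly from the caption's definitions. It uses simulated 200-bit trial sums in place of the actual PEAR data, which we do not reproduce here; the trial count is arbitrary and chosen only for illustration.

```python
import random

def hi_lo_stats(hi_sums, lo_sums):
    """Table 1 quantities from per-trial bit sums (0-200) for each intention.
    mu: mean HI-LO deviation in bits per trial; sigma = sqrt(100/N); z = mu/sigma."""
    n = len(hi_sums)                    # N: number of 200-bit trials per intention
    mu = sum(hi_sums) / n - sum(lo_sums) / n
    sigma = (100.0 / n) ** 0.5          # theoretical SD of mu for fair 200-bit trials
    return mu, sigma, mu / sigma

# Illustration with chance-only data: each trial is the sum of 200 fair bits.
N = 5000
hi = [sum(random.getrandbits(1) for _ in range(200)) for _ in range(N)]
lo = [sum(random.getrandbits(1) for _ in range(200)) for _ in range(N)]
mu, sigma, z = hi_lo_stats(hi, lo)
print(f"mu = {mu:+.4f} bits/trial, sigma = {sigma:.4f}, z = {z:+.2f}")
```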
In short, we suggest that the Consortium's apparent failure to replicate stemmed from an overestimation of the true population effect size, caused by the inclusion of the outlier participants, and from a consequent underestimation of the power needed to replicate. Had the replication design been based on an effect size estimated without the outliers, nearly four times as much data would have been required to achieve adequate power. This means that the apparent replication failure does not call into question the
original evidence seen in the benchmark PEAR databases––only the assumption of homogeneity
of its effects across participants.
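The "nearly four times" figure follows from the usual power calculation: with σ = √(100/N) as in the table caption, the number of trials per intention needed to detect a per-trial shift µ at fixed power scales as 1/µ², so halving the assumed effect size roughly quadruples the required data. The sketch below illustrates this relationship with hypothetical effect sizes; the numerical values are placeholders, not the actual PEAR or Consortium estimates.

```python
from scipy.stats import norm

def trials_needed(mu, alpha=0.05, power=0.80):
    """Trials per intention needed to detect a per-trial HI-LO shift of mu bits
    with a one-sided z test, using sigma = 10/sqrt(N) for 200-bit trials."""
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return ((z_alpha + z_beta) * 10.0 / mu) ** 2

# Hypothetical per-trial effect sizes (bits per trial), purely for illustration:
mu_with_outliers = 0.004
mu_without_outliers = 0.002   # half as large once the outliers are excluded
n1 = trials_needed(mu_with_outliers)
n2 = trials_needed(mu_without_outliers)
print(f"Required trials: {n1:,.0f} vs {n2:,.0f} (ratio {n2 / n1:.1f}x)")
```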
What, then, are we to make of the universalist claim that positive results with unselected participants should be a straightforward matter given sufficient data? Insofar as the PEAR and Consortium results, with outliers removed, did produce some indication of a small effect, and are clearly free from any file drawer problem, the cumulative PEAR/Consortium results might justify pursuing an approach based on unselected participants,
massive data collection, and analytical tools to tease out effects in the data. However, our
analysis shows that studies would need to be significantly larger than those of the benchmark and
Consortium experiments merely to provide statistical evidence for an effect. Given the enormous
resource investment this approach has represented, involving several laboratories running full
time over the course of several years in the case of the Consortium replication, the returns
obtained seem meager indeed and the universalist strategy far from optimal.
We emphasize that the original benchmark result was entirely dominated by a
disproportionate contribution from just 2% of the participant population. This basic observation
challenges the idea that “anybody can do it” (at least from a pragmatic viewpoint) and points to
the benefits of participant selection. From this perspective, the PEAR/Consortium studies, where
we can trace an overall significant effect to the large contributions of two exceptional
participants, essentially validate the wisdom of Schmidt's approach, which was to work
intensively with a few participants rather than teasing extremely weak effects out of unselected
volunteers.
It should be emphasized that these conclusions are, for now, limited to micro-PK
research; they do not necessarily carry over to other parapsychology paradigms. Unselected
participants may well perceive ganzfeld, presentiment, or DMILS protocols as more relevant and
motivating than micro-PK protocols, and therefore produce far better results. Also, even if the
universalist approach is unsatisfactory for proof- or process-oriented hypothesis testing in micro-
PK research, it can still be useful for participant selection. Following the lead of Tart (1976), whose preselection of high scorers on ESP tests seemed to pay off in subsequent ESP training, it may be worth undertaking large-scale (e.g., Internet-based) testing to locate promising
individuals and then progressively focus on the few who show the highest potential.
Of course, participant selection alone is hardly sufficient. The testing conditions, the
meaningfulness of the task for the participant, and most importantly the investigator–participant
relationship have repeatedly been acknowledged as critical, even with the most gifted macro-PK
participants. Why should this be any different with micro-PK? Taking our cue from Schmidt, we
suggest working with participants in a highly personalized manner, with a strong focus on
motivational conditions and a readiness to adapt testing conditions to the participant (rather than
rigidly imposing compliance with a predefined protocol). In this context, it is worth recognizing a
rather substantial body of process-oriented research, by a broad spectrum of investigators,
exploring factors that enhance micro-PK performance––somewhat in the way that “noise
reduction” procedures seem to enhance ESP performance. This research has been amply
documented elsewhere (e.g., Gissurarson, 1997; Varvoglis & Bancel, 2015); some of the more
promising optimization factors include a passive-volition set, goal-oriented visualization
techniques, and meditation practice.
Any discussion of laboratory psi research is incomplete without addressing the issue of
experimenter psi, as has been discussed by a number of authors (Kennedy & Taddonio, 1976;
Parker, 2013; Palmer & Millar, 2015). Some argue strongly that most psi researchers who obtain
consistent results––whether for micro-PK or other experimental paradigms––are themselves
good psi participants. Kennedy and Taddonio (1976) remark:
The case for experimenter PK seems clearly drawn when one considers that
experimenters are typically more motivated than their subjects to achieve good results,
that PK need not involve a conscious intent, and that most successful PK experimenters
are themselves successful PK subjects. (p. 17)
This is not to suggest that experimenters are the only source of psi in the lab; it seems
reasonable to assume that micro-PK effects are associated with strong performers––be they
participants or experimenters––in conjunction with favorable testing conditions. Given the extent to which experimental results may reflect the PK input of investigators as "hidden subjects," we are confronted with an inherent ambiguity in interpreting results. How do we distinguish
participant effects, assumed to be representative of a larger population and lawful phenomena,
from effects that may be due to the experimenters themselves, and potentially dependent upon
the very hypotheses they pose? If this issue cannot be resolved, parapsychology may need to
reconsider the classical experimental paradigm altogether and turn to radically different
epistemological approaches (Atmanspacher & Jahn, 2003; Lucadou, 2001).
The complex issue of experimenter psi notwithstanding, we should not lose sight of the
role that experimenter skill may play in obtaining results. It may be that successful investigators
such as Schmidt simply know how to facilitate participants’ talents. If so, we need to understand
in a far more detailed way just how they do it. The number of researchers who systematically
succeed in psi research is limited and, as Parker (2013) has pointed out, an important body of
tacit knowledge risks being lost. Perhaps, in addition to mastery of all the analytical tools that go
with the territory, upcoming parapsychologists should train or be mentored by psi-conducive
experimenters, studying and modeling their state of mind, mental set, expectations, rituals, etc.
so that they can ensure the longevity of their subtle craft.
In summary, rather than assume “anybody can do it,” we recommend that micro-PK, like
macro-PK, be approached as a rare event, one that emerges under exceptional circumstances or
as a result of exceptional ability. From this perspective, its investigation demands that
experimenters have a special skill set, a process for participant selection, flexible protocols that
can adapt to a participant’s state, mood, or performance, and proactive optimization procedures
that may enhance participant scoring.
References
Atmanspacher, H., & Jahn, R. G. (2003). Problems of reproducibility in complex mind-matter systems.
Journal of Scientific Exploration, 17, 243–270.
Bösch, H., Steinkamp, F., & Boller, E. (2006). Examining psychokinesis: The interaction of human
intention with random number generators —a meta-analysis. Psychological Bulletin, 132, 497–523.
Dunne, B., Nelson, R. D., & Jahn, R. G. (1988). Operator-related anomalies in a random mechanical
cascade. Journal of Scientific Exploration, 2, 155–179.
Gissurarson, L. R. (1997). Methods of enhancing PK task performance. In S. Krippner (Ed.), Advances in
parapsychological research 8 (pp. 88–125). Jefferson, NC: McFarland.
Jahn, R., Dunne, B., Bradish, G., Dobyns, Y., Lettieri, A., Nelson, R., . . . Walter, B. (2000).
Mind/machine interaction consortium: PortREG replication experiments. Journal of Scientific
Exploration, 14, 499–555.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random
binary sequences with pre-stated operator intention: A review of a 12-year program. Journal of
Scientific Exploration, 11, 345–367.
Kennedy, J. E., & Taddonio, J. L. (1976). Experimenter effects in parapsychological research. Journal of
Parapsychology, 40, 1–33.
Lucadou, W. v. (2001). Hans in luck: The currency of evidence in parapsychology. Journal of
Parapsychology, 65, 3–16.
May, E. C., Utts, J. M., & Spottiswoode, S. J. P. (1995). Decision augmentation theory: Applications to
the random number generator database. Journal of Scientific Exploration, 9, 453–488.
Palmer, J., & Millar, B. (2015). Experimenter effects in parapsychology research. In E. Cardeña, J.
Palmer, & D. Marcusson-Clavertz (Eds.), Parapsychology: A handbook for the 21st century (pp.
293–300). Jefferson, NC: McFarland.
Parker, A. (2013). Is parapsychology’s secret, best kept a secret? Responding to the Millar challenge.
Journal of Nonlocality, 2. Retrieved from
http://journals.sfu.ca/jnonlocality/index.php/jnonlocality/article/download/28/22
Radin, D., & Nelson, R. (1989). Consciousness-related effects in random physical systems. Foundations of Physics, 19, 1499–1514.
Schmidt, H. (1987). The strange properties of psychokinesis. Journal of Scientific Exploration, 1, 103–
118.
Tart, C. T. (1976). Learning to use extrasensory perception. Chicago: University of Chicago Press.
Varvoglis, M. P., & Bancel, P. (2015). Micro-psychokinesis. In E. Cardeña, J. Palmer, & D. Marcusson-
Clavertz (Eds.), Parapsychology: A handbook for the 21st century (pp. 266–281). Jefferson, NC:
McFarland.
Weiner, D. H., & Zingrone, N. L. (1986). The checker effect revisited. Journal of Parapsychology, 50,
85–121.
Weiner, D. H., & Zingrone, N. L. (1989). In the eye of the beholder: Further research on the “checker
effect.” Journal of Parapsychology, 53, 203–231.